Wednesday, December 21, 2005

The Ginsparg-Wilson relation

Time for another post in our series about lattice fermions.

In the previous post in this series we had a look at the Nielsen-Ninomiya theorem, which stated that any acceptable lattice fermion action for which the Dirac operator anticommuted with $$\gamma^5$$ had to have doubler fermions. On the face of it that seems to imply a stark choice between chiral symmetry and freedom from doublers.

There is, however, an interesting way around this apparent dilemma. This was discovered by Ginsparg and Wilson in a 1982 paper, where they studied the result of performing a spin-blocking step on a chirally symmetric continuum fermion action. What they discovered was that the Dirac operator of the blocked theory obeyed the anticommutation relation
$$\{\gamma^5,D\} = 2 a D \gamma^5 D$$
now known as the Ginsparg-Wilson relation.

This relation has a number of interesting consequences: Firstly, it implies that the propagator $$\tilde{S}(p)=\tilde{D}(p)^{-1}$$ obeys the anticommutation relation
$$\{\gamma^5,\tilde{S}(p)\} = 2 a \gamma^5$$
and hence in coordinate space
$$\{\gamma^5,S(x-y)\} = 2 a \gamma^5\delta(x-y)$$
i.e. the propagator is chirally invariant at all non-zero distances. Secondly, Lüscher discovered in 1998 that the Ginsparg-Wilson relation leads to a non-standard realization of chiral symmetry in the theory, which is invariant under the infinitesimal transformations
$$\psi \mapsto \psi+\epsilon\gamma^5\left(1-aD\right)\psi,\qquad \bar{\psi} \mapsto \bar{\psi}+\epsilon\bar{\psi}\left(1-aD\right)\gamma^5$$
The fermion measure, however, transforms anomalously under this symmetry, and a little calculation shows that this gives precisely the correct chiral anomaly.
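To see the relation at work, here is a minimal numerical sketch (my own illustration, not anything from the original paper): it builds a Ginsparg-Wilson operator from the free momentum-space Wilson operator via an overlap-type construction and checks both relations above at an arbitrarily chosen momentum. The normalisation of $$D$$ is chosen to match the factor of $$2a$$ used here, and the lattice spacing and momentum are purely illustrative values.

```python
import numpy as np

# Pauli matrices and Euclidean gamma matrices in the chiral basis,
# with gamma5 = diag(1, 1, -1, -1).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
gamma = [np.block([[0 * I2, -1j * s], [1j * s, 0 * I2]]) for s in (s1, s2, s3)]
gamma.append(np.block([[0 * I2, I2], [I2, 0 * I2]]))
gamma5 = np.diag([1.0, 1.0, -1.0, -1.0]).astype(complex)

a = 1.0                               # lattice spacing (illustrative)
p = np.array([0.3, 1.1, -0.7, 2.0])   # an arbitrary momentum in the Brillouin zone

# Free Wilson-Dirac operator in momentum space (Wilson parameter r = 1).
aDw = sum(1j * np.sin(a * p[mu]) * gamma[mu]
          + (1.0 - np.cos(a * p[mu])) * np.eye(4) for mu in range(4))

# Overlap-type construction: A = a*D_W - 1, V = A (A^dag A)^(-1/2) is unitary
# and gamma5-hermitean, so D below satisfies the Ginsparg-Wilson relation.
A = aDw - np.eye(4)
w, U = np.linalg.eigh(A.conj().T @ A)
V = A @ (U @ np.diag(1.0 / np.sqrt(w)) @ U.conj().T)
D = (np.eye(4) + V) / (2.0 * a)       # normalisation chosen to match the 2a above

lhs = gamma5 @ D + D @ gamma5
rhs = 2.0 * a * D @ gamma5 @ D
print("GW relation violation:        ", np.linalg.norm(lhs - rhs))

S = np.linalg.inv(D)                  # propagator at this momentum
print("propagator relation violation:",
      np.linalg.norm(gamma5 @ S + S @ gamma5 - 2.0 * a * gamma5))
```

Both printed numbers come out at machine precision, which is just the momentum-space version of the statements above.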

On the other hand, since a Dirac operator obeying the Ginsparg-Wilson relation no longer anticommutes with $$\gamma^5$$, the conditions of the Nielsen-Ninomiya theorem no longer apply, and there is hence no reason to expect the existence of any doubler fermions.

What all this means is that the correct chiral physics can be obtained from a lattice theory, provided one is able to find a solution to the Ginsparg-Wilson relation. The next post in this series will look at some of the fermion actions that arise from this.

Monday, December 12, 2005

Analytical results for the glueball spectrum

In a recent paper, Leigh, Minic and Yelnikov present an analytical result for the glueball spectrum in (2+1) dimensions. They employ a Hamiltonian formalism pioneered in a series of papers by Karabali, Kim and Nair. The main result is that the glueball spectrum of (2+1)-dimensional pure Yang-Mills theory can be expressed in terms of the zeros of the Bessel function $$J_2(z)$$. In particular, the masses of $$0^{++}$$ states can be written as the sum of two Bessel zeros:
$$m(0^{++*^r}) = (j_{2,n_1}+j_{2,n_2})\frac{g^2 N}{4\pi}$$
where $$n_1$$ and $$n_2$$ can be determined from r, and it is to be noted that the gauge coupling in (2+1) dimensions has the dimension of $$\sqrt{Mass}$$. Similarly, the masses of $$0^{--}$$ states can be written as the sum of three Bessel zeros:
$$m(0^{--*^r}) = (j_{2,n_1}+j_{2,n_2}+j_{2,n_3})\frac{g^2 N}{4\pi}$$
Their results agree reasonably well with lattice simulations of (2+1)-dimensional pure Yang-Mills theory.

There are some interesting implications of their results which are not discussed in their paper (they say they are going to publish another, more detailed, one). In particular, since for large excitation number $$n$$ the Bessel zeros behave like
$$j_{m,n}\simeq\left(n+\frac{m}{2}-\frac{1}{4}\right)\pi$$
there will be almost degenerate states at high excitation numbers, separated by gaps of $$g^2N/4$$, with the (almost) degeneracy of the r-th state given by the number of ways to partition (r+1), or (r+2), into two or three integers, respectively.
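As a quick illustration (my own sketch, using an arbitrary value for $$g^2N$$ rather than anything from the paper, and assuming scipy is available), one can tabulate the predicted $$0^{++}$$ masses from the zeros of $$J_2$$ and watch the near-degeneracies and the asymptotic level spacing of $$g^2N/4$$ emerge:

```python
import numpy as np
from scipy.special import jn_zeros

g2N = 1.0                                 # illustrative value of g^2 N
zeros = jn_zeros(2, 6)                    # first six zeros of J_2
masses = sorted((z1 + z2) * g2N / (4 * np.pi)
                for i, z1 in enumerate(zeros) for z2 in zeros[i:])
for m in masses:
    print(round(m, 4))
# Since j_{2,n} ~ (n + 3/4) pi for large n, successive sums cluster into
# near-degenerate groups separated by roughly pi * g2N/(4 pi) = g2N/4.
```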

Another interesting implication of their results is that the mass difference between successive states of even parity and that of successive states of odd parity should be the same. This does not quite agree with what is found on the lattice, where the mass difference for the ++ states is about 1.6 times that for the -- states (which is similar to the difference in results obtained for the gluonic mass in (2+1) dimensions using self-consistent resummation methods with parity-even and parity-odd mass terms, respectively). From the analytical results this parity-dependence of the mass gap would appear to be some sort of artifact.

It will be interesting to see what is in Leigh, Minic and Yelnikov's detailed paper, in particular how the higher-spin glueballs turn out.

Thursday, December 08, 2005

Lattice QCD in the News

Lattice QCD made the news again. The AIP's Top Physics Stories in 2005 include the Most Precise Mass Calculation For Lattice QCD, an unquenched determination of the B_c mass by members of the HPQCD, Fermilab Lattice and UKQCD collaborations, published in Physical Review Letters this May.

Monday, December 05, 2005

The Nielsen-Ninomiya theorem

In recent posts in this series, we have been looking at naive, Wilson and staggered fermions. One of the things we have seen is how difficult it is to get rid of the doubler fermions; staggering did a good job at this, but still retained some of the doublers with all the problems they bring, while the Wilson term got rid of the doublers, but only at the expense of spoiling chiral symmetry, which brought on even worse problems. Why should the discretisation of fermions be so hard?

The answer lies in a theorem about lattice fermions, the celebrated Nielsen-Ninomiya no-go theorem, which states that it is impossible to have a chirally invariant, doubler-free, local, translation invariant, real bilinear fermion action on the lattice. The theorem comes from topological arguments: A real bilinear fermion action can be written as
$$S = \sum_{x,y} \bar{\psi}(x)M(x,y)\psi(y)$$
with hermitean $$M$$. Translation invariance means that
$$M(x,y)=D(x-y)$$
and locality requires that the Fourier transform $$\tilde{D}(p)$$ of $$D(z)$$ be a regular function of p throughout the Brillouin zone. Chiral symmetry
$$\{\tilde{D}(p),\gamma^5\}=0$$
requires that
$$\tilde{D}(p)=\sum_\mu \gamma_\mu d_\mu(p)$$
Since the Brillouin zone has the topology of a 4-torus, we thus have a vector field $$d_\mu$$ on the torus. Now it is possible to assign an "index" of +1 or -1 to every zero of this vector field, and the Poincaré-Hopf index theorem states that the sum over the indices of the zeros of a vector field on a manifold is equal to the Euler characteristic of the manifold. The Euler characteristic of any n-torus is zero, and therefore the zeros of $$d_\mu$$ must come in pairs of opposite index, which is precisely the origin of the doublers.
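To make the counting explicit, here is a tiny numerical sketch (free naive fermions with the lattice spacing set to one; my own illustration): for $$d_\mu(p)=\sin(p_\mu)$$ the zeros sit at the 16 corners of the Brillouin zone, the index of each zero is the sign of the Jacobian determinant, $$\prod_\mu\cos(p_\mu)$$, and the indices indeed sum to zero.

```python
import itertools
import numpy as np

# Zeros of d_mu(p) = sin(p_mu) sit at p_mu in {0, pi}; the index of each zero
# is sign(det(d d_mu / d p_nu)) = sign(prod_mu cos(p_mu)).
corners = list(itertools.product([0.0, np.pi], repeat=4))
indices = [int(np.sign(np.prod(np.cos(p)))) for p in corners]
print("number of zeros:", len(corners))   # 16 fermion species
print("sum of indices:", sum(indices))    # 0, as the Poincare-Hopf theorem requires
```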

OK, so what does all this mathematics mean? Well, prima facie it seems to leave us with the choice between chiral symmetry and freedom from doublers (since locality, translation invariance and hermiticity are too important to abandon). There is, however, a clever way around this, which will be the topic of our next post.

Friday, December 02, 2005

More about staggered quarks

A while back, Matthew was running a number of pedagogical articles on fermions on the lattice. Since I think that those articles were a good idea, I will endeavour to continue them. Obviously there may be some differences in outlook and style, but that is the beauty of diversity.

Matthew's last post in the series was about staggered quarks. To remind ourselves, when we put fermions on the lattice naively, we find that the fermion propagator has extra poles at momenta of order $$\pi/a$$, leading to the emergence of 16 degenerate quark flavours, or "doublers", from a single quark action. Staggering gets rid of some of those doublers by redistributing the fermionic degrees of freedom across different lattice sites. In the end, one is left with 4 degenerate quark flavours, usually referred to as "tastes" to distinguish them from physical quark flavours, with the added bonus of retaining a remnant of chiral symmetry that forbids the generation of an additive mass renormalisation.

There is a downside to all this, however. Since the different components of the staggered quark field live on different lattice sites, they experience a slightly different gauge field, which leads to a breaking of their naive degeneracy. This becomes even clearer when looking at it from a momentum space point of view: A pair of quarks with momenta close to 0 can exchange a gluon with momentum around $$\pi/a$$ to change into a pair of quarks with opposite momenta of order $$\pm\pi/a$$, and these correspond to quarks of a different taste from the original pair. The interaction has changed the taste of the quarks!

These taste-changing interactions are the source of a number of problems: naively, we would expect a theory of four degenerate quark flavours to have 16 degenerate pions. These pions, however, are mixed by the taste-changing interactions, and their degeneracy is therefore lifted. Only one of the 16 pions will be the (pseudo-)Goldstone boson whose mass goes to zero with the quark mass; the others will remain massive in the chiral limit. This also adversely affects the discretisation errors from the finite lattice spacing $$a$$.

The influence of the taste-changing interactions can be suppressed by adding additional terms to the lattice action. This leads to improved staggered quarks, and we will hear more about those in a future post on improved actions.

Another potentially problematic feature of staggered quarks is that they always come in four tastes. Nature, however, has not been so generous as to provide us with four degenerate, or even nearly degenerate, quark flavours. So how do we simulate a single flavour with staggered quarks?

Remember that the fermionic path integral could be done analytically:
$$
\int DU\, D\bar{\psi}\, D\psi\, \exp(-S_{G}-S_{F}) = \int DU\, \det(M[U])\, \exp(-S_{G})
$$
The fermionic determinant can be put back into the exponent as
$$
\int DU\, \det(M[U])\, \exp(-S_{G}) = \int DU\, \exp(-S_{G}-S_{GF})
$$
where
$$
S_{GF} = - \log\det(M[U])
$$
incorporates the fermionic contributions to the action. This is additive in the number of quark flavours, so we can get from four staggered tastes to one physical flavour by dividing $$S_{GF}$$ by four, which is equivalent to taking the fourth root of the fermion determinant.
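To see why dividing by four is the same as rooting, here is a toy sketch of the free, taste-degenerate case (my own illustration, not a simulation): when the staggered matrix is block-diagonal with four identical one-taste blocks, the fourth root of its determinant is exactly the one-taste determinant, so $$S_{GF}/4$$ really is a one-flavour action.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
M1 = B.conj().T @ B + np.eye(6)            # a positive-definite one-taste block
M = np.kron(np.eye(4), M1)                 # four exactly degenerate tastes

_, logdetM = np.linalg.slogdet(M)
_, logdetM1 = np.linalg.slogdet(M1)
print(np.isclose(logdetM / 4, logdetM1))   # True: S_GF/4 is the one-taste action
# With gauge interactions the four blocks are no longer identical, and whether
# the fourth root still defines a local one-flavour theory as a -> 0 is exactly
# the open question mentioned below.
```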

Taking the fourth root of the determinant introduces a nonlocality, and currently nobody knows with certainty whether that nonlocality will go away in the continuum limit $$a\to 0$$, but empirical evidence suggesting that it does is accumulating.

Thursday, December 01, 2005

Two-loop Lamb shift in U^{89+}

Researchers at Lawrence Livermore National Laboratory have used U^{89+} ions (that's Uranium stripped of all but three electrons) to measure the two-loop Lamb shift in Lithium-like ions at large Z, providing one of the most stringent tests of QED in strong fields so far. There are no theoretical predictions for Lithium-like ions, but when extrapolated to the Hydrogen-like case of U^{91+}, their results are in excellent agreement with theoretical predictions. It is always nice to see experiment agree with theory.

Thursday, November 24, 2005

Dirac eigenvalues

In a recent talk entitled "Fun with Dirac eigenvalues", Michael Creutz discusses some issues arising in the study of the Dirac spectrum. The discussion involves a number of deceptively simple arguments on a rather complicated matter, and you should read it (and think about it) for yourself. The chiral condensate and the Banks-Casher relation, in particular, are discussed in a way that is obviously intended to first confuse, then astonish and finally enlighten the reader. Other points which I never thought about before are how the number of flavours influences the density of low-lying eigenvalues via the effects of the high eigenvalues on the gauge fields, and why topologically non-trivial configurations' contributions to correlation functions can be a problem in numerical simulations.

The discussion is kept in the context of the overlap operator, which makes sense for an analytical discussion of chiral properties. For an investigation of many of these issues in the context of the more widely used staggered quarks, see this paper by members of the HPQCD and UKQCD collaborations, where they show that, with improvement, staggered quarks exhibit all the properties expected of the Dirac spectrum, including obeying the Atiyah-Singer index theorem.

Sunday, November 20, 2005

Comment spam

After removing all the comment spam that had accumulated on this blog (one post had received 83 spam comments!), I feel I need a little amusement. So let me for a moment entertain the thought that all this spam wasn't posted by mindless bots, but was manually produced by actual human individuals responding to the article in question.

First conclusion: the literacy crisis is far worse than anybody is willing to admit! I mean, if somebody replies to "This is just a test of my new gnome blogger app" with "great post, keep up the good work," they either have a very warped sense of humor, or their reading comprehension caps out at two-word sentences, to say nothing of the poor people who seem to read messages about cheap mortgages, anatomical "enhancements" or cracked software downloads into this blog, and whose problems might require the attention of not just an elementary school teacher, but also that of a clinical psychologist.

Second conclusion: some people seem to feel sexually excited by Lattice QCD! The number of self-professed "hot girls" offering themselves for dates and more in reply to posts about fermion discretization issues is truly staggering ... but Matthew is married, and I still have certain standards, which include requiring a basic command of orthography and punctuation (or at least the ability to use a spelling checker!) from any potential date. Sorry, girls!

Third conclusion: The world is even weirder than I thought. One comment advertised photography courses, and was kept entirely in Greek; numerous comments consisted of fairly random French words in alphabetical order; and one person repeatedly urged the world to visit his ##KEYWORD HERE## site (located at http://www.yoururlhere/). What is that supposed to tell us? (Other than that some people are too stupid even to spam - remember, we are pretending these comments were real!)

Hmm, this counterfactual gives us one bad, one half-good (who doesn't dream of a world in which top models chase after theoretical physicists? -- but where's the fun if they are too dumb to form a coherent sentence?) and one outright weird difference from the real world. I suppose that can be counted as empirical evidence that Leibniz was right when he postulated that the real world was the best of all possible worlds. Oh, well ...

Friday, November 18, 2005

Was Einstein right? -- Tests of General Relativity

This is not directly related to Lattice QCD, but it is interesting nevertheless.

There recently was a public lecture entitled "Was Einstein right?" here at Regina, given by Clifford Will. Unfortunately bad weather meant he couldn't be here in time to give the technical version of this to the physics department, but a printed version of what he probably would have said can be found in this review. It appears that General Relativity is beginning to approach the same kinds of precision in its tests that we know from QED.

A new determination of light quark masses

In a recent paper (hep-ph/0511160), members of the HPQCD collaboration have presented the most precise determination of the light (up, down and strange) quark masses to date.

This required both extensive unquenched simulations of QCD using some of the lightest (and hence hardest to work with) quark masses used so far, and a massive perturbative calculation at the two-loop order. The perturbative calculation is needed in order to connect the lattice-regularized bare quark masses to the masses as defined in the usually quoted MSbar scheme. The bare quark masses required as input to the perturbative calculation come from simulations performed by the MILC collaboration, who use a highly efficient formalism with so-called "staggered" quarks, with three flavors of light quarks in the Dirac sea.

Putting all these ingredients together, they find the MSbar masses at a scale of 2 GeV to be $$m_s = 87(0)(4)(4)(0)$$ MeV, $$m_u = 1.9(0)(1)(1)(2)$$ MeV and $$m_d = 4.4(0)(2)(2)(2)$$ MeV. The respective uncertainties are from statistics, simulation systematics, perturbation theory, and electromagnetic/isospin effects.
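For anyone wanting a single error bar, combining the quoted uncertainties in quadrature (a common convention; this is my own back-of-the-envelope step, not necessarily the authors' prescription) gives roughly:

```python
import math

def combined(errs):
    # add independent uncertainties in quadrature
    return math.sqrt(sum(e * e for e in errs))

print(combined([0, 4, 4, 0]))          # ~5.7 MeV total uncertainty on m_s
print(combined([0.0, 0.1, 0.1, 0.2]))  # ~0.24 MeV total uncertainty on m_u
```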

This means that the errors on the still rather contentious strange quark mass, for which a number of incompatible results exist, have been greatly reduced. This is a very major result, and a great success for Lattice QCD.

Thursday, November 17, 2005

New poster

Announcing a new lattice QCD blogger. Everybody please welcome Life on the Lattice's new co-blogger Georg von Hippel. Georg is a postdoc at the University of Regina, with similar research interests to mine. This is now officially the world's first group blog devoted to Lattice QCD.

Enjoy!

Wednesday, November 09, 2005

A couple of brief things

It's been busier than usual the past few days, so this will be a cheap "links only" post. Hat tip to Georg for pointing out this nice writeup of a public lecture by Frank Wilczek. There's lots of good stuff in there; the highlight is when he refers to the calculation of the hadron spectrum from lattice QCD as "one of the greatest scientific achievements of all time." Who am I to disagree with a Nobel Prize winner?

Also, for those interested in the computing behind lattice QCD, there's a short writeup in the latest Symmetry magazine, here. There are some details on both the efforts to create custom machines and the use of off-the-shelf PC clusters.

Wednesday, October 12, 2005

On The Road

Well, things have been rather stagnant lately on the blogging front (that should be obvious). One reason is that I'm on the road a lot these days. Two weeks ago I was in Rochester, giving a seminar there. Last week I was at a conference at Jefferson Lab, and this week I'm visiting the theory group here at Fermilab. While I'm here I'm giving a talk, and I'm also giving a seminar at Argonne National Lab. All the while, I'm trying to get work done.

However, I've discovered one very important reason to keep blogging with a little more regularity. It took a non-trivial amount of time to clean up all the stupid comment spam that had piled up since my last post.

Friday, July 29, 2005

Lattice 2005, day four

Well, if day two was the long day, day three was the short one. There
were no plenary sessions, just a short morning parallel session, and
then excursions in the afternoon. It was a beautiful day to be out in
the Irish countryside, so the timing worked out very nicely.

Day four had the plenary sessions I was most interested in. A close
collaborator, Quentin Mason, started off the day talking about lattice
perturbation theory, which is what I do. Quentin has completed the
two-loop calculation of the light quark mass, which allows for a much
more accurate determination. Quentin also reviewed the determination
of the strong coupling constant, which I've covered in a previous
post.

Next up was Zoltan Ligeti, who reviewed progress in heavy quark
physics from a non-lattice perspective. There's been lots of activity
on this front over the past few years, with the development of a new
effective theory, the Soft-Collinear Effective Theory. This theory is
complicated enough that I won't even try to explain it.

Finally, another collaborator, Masataka Okamoto, gave a very nice
overview of the status of lattice calculations of the CKM matrix.
This is the matrix which tells you how the various types (or
flavours) of quarks interact in the standard model. It has nine
entries (not all of them independent), most of which can be computed
from lattice QCD plus an experimental result. Masataka has done a
large amount of work, both doing many of the calculations himself and
collecting everything into a coherent picture.

In the standard model, the CKM matrix is unitary. If you accept that
assumption, then Masataka has produced a complete determination of the
CKM matrix from lattice QCD, experimental measurements, and the
unitarity of the matrix. Of course, it would be nice to test the
unitarity from just theory plus experiment, with no extra assumption.
In that case, you can check the matrix row by row. Masataka showed
how one row is completely determined without assuming unitarity. And
in that row, the matrix is unitary, up to the errors. It will be a
big challenge to repeat this for the other two rows.

Wednesday, July 27, 2005

Lattice 2005, day two

Hello again from Dublin. Day two of Lattice 2005 was the "busy" day,
with three plenary sessions, a parallel session, and the poster
session all in one day. Twelve solid hours of physics, which is
rather tiring, particularly since I was presenting a poster. As such,
today's update on the plenary sessions will be brief.

The morning started off with a talk by Herbert Neuberger on
simulations of large N field theory. As discovered by 't Hooft, SU(N)
gauge theory simplifies as you take the limit N -> infinity. This is
hard to do in lattice QCD, however, as you would need infinitely
large matrices; still, there are techniques for attacking the problem.
Neuberger reviewed the interesting phase structure you see in this
system: the lattice version of large N QCD has six different phases.

Next up was Simon Catterall, who reviewed his work on lattice
supersymmetry. In the early days of both supersymmetry and lattice
QCD, it wasn't thought possible to put a supersymmetric theory on the
lattice without badly breaking the supersymmetry. In recent years,
however, a few different methods have been discovered. The basic idea
is that you construct a continuum theory with lots of supersymmetry
and arrange things such that when you put the theory on the lattice,
a remnant of the supersymmetry remains. Simon reviewed his method for
doing this, and briefly touched on some of the possible applications
of these methods.

After a coffee break, the sessions shifted in focus a little bit.
Chris Dawson reviewed the state of kaon phenomenology on the lattice.
The focus here is on kaon decays, which are hard to do in lattice
QCD. For example, a kaon can decay into two pions. This is extremely
hard to compute: since pions are very large, it's hard to fit two of
them inside your finite lattice box.

We swapped last names for the next talk, as Chris Dawson gave way to
Chris Michael, who gave a lively review of the state of hadronic
decays on the lattice. The people I work with are interested in doing
very high precision calculations. This is good; however, it limits
you to a small number of things you can calculate. Lattice QCD can,
in principle, calculate many many more interesting strong interaction
processes. Chris gave an update on the state of some of these
calculations, which are very very hard to do. You have an unstable
particle in the initial state, two or more hadrons in the final
state, and a transition at some point in between.

Tuesday, July 26, 2005

Lattice 2005, day one

Hello from Dublin. As promised, I'm going to try to deliver daily
reports from the plenary sessions. Unfortunately, getting wireless
internet access in the conference room has proved problematic, so
it'll have to be after-the-fact reports rather than live blog
updates. These comments are subjective, and I can't cover every talk,
so that's that.

After the usual introductory speeches the conference got off with a
bang, with a talk by Julius Kuti from the University of California,
San Diego. The topic was Lattice QCD and String Theory, which is a
growing field. There are a lot of interesting problems in the field,
from more abstract things to practical things. Julius spent most of
his talk on a practical goal, namely using lattice QCD simulations to
understand effective string models of QCD.

In some sense this is a return to the origins of string theory. The
original idea was to model the gluon field connecting two quarks as a
piece of relativistic string. The naive application of this idea
didn't work, and so string theory went off in a totally different
direction. However, with all the things that have been learned about
it since, effective (four-dimensional) string models can now be
constructed, and lattice QCD is the ideal tool to test these models
against. There are some issues, as there always are, but the results
here were promising, and offer a lot of new territory to explore.

Next up was one of the best field theorists in the world, Martin
Lüscher. He talked about simulating a certain type of dynamical
fermions (Wilson quarks, for the experts) much more efficiently than
has been done before.

His idea was to split the lattice up into smaller hypercubic blocks,
about 0.5 fm on a side. Then you split your update algorithm into
three parts,

gluon part + inside-block quark part + block-boundary part

Now, in the standard way of doing things, all of these parts are
computed the same number of times (say 2000 times per lattice
point). What Lüscher (and his collaborators) do is take advantage of
the physics of the system to drastically reduce the number of times
you have to compute the block-boundary part, which is the most
expensive bit. The essential bit of physics is that the correlation
between points on the boundary and points deep inside the block is
very weak. This means you don't have to compute its effects nearly as
often as you compute the gluon effects.
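To give a flavour of the idea, here is a schematic sketch of a
multiple-time-scale integrator (my own toy illustration, not Lüscher's
actual algorithm or code): a cheap force gets evaluated on a fine time
step, while an expensive "boundary-like" force is evaluated only once
per coarse step.

```python
import math

def cheap_force(q):          # stands in for the gluon and inside-block parts
    return -q

def expensive_force(q):      # stands in for the block-boundary part
    return -0.1 * math.sin(q)

def nested_leapfrog(q, p, dt, n_outer, n_inner):
    for _ in range(n_outer):
        p += 0.5 * dt * expensive_force(q)        # rare, expensive update
        for _ in range(n_inner):                  # frequent, cheap updates
            p += 0.5 * (dt / n_inner) * cheap_force(q)
            q += (dt / n_inner) * p
            p += 0.5 * (dt / n_inner) * cheap_force(q)
        p += 0.5 * dt * expensive_force(q)
    return q, p

print(nested_leapfrog(1.0, 0.0, dt=0.1, n_outer=100, n_inner=5))
```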

As Lüscher mentioned, comparing computer algorithms is a tricky
business; however, his simulations with this new method seem to be a
factor of ten or more faster than comparable simulations with the
standard methods.

In the second plenary session we had a talk by Jim Napolitano, who is
an experimentalist working on the CLEO-c experiment. CLEO-c is
currently studying D meson physics in great detail at the CESR
accelerator at Cornell. One of the main motivations for CLEO-c is to
test lattice QCD predictions in the charm system, so that results in
the B meson system can be confidently predicted. Jim ran over a
number of new results from CLEO, including the leptonic decay
constant fD and the masses of two new mesons, the h_c and the
\Upsilon(1D).

These measurements are very tough; they involve looking for rare
radiative transitions in decays of highly excited mesons. The reason
that they can be done at all is because CLEO has very good control
over the initial state. Basically, they're colliding electrons and
positrons right on top of a charm anti-charm quark resonance. This
resonance decays to a pair of D mesons, almost at rest. In most
cases, both D's decay in a shower of crap (pions, kaons, etc). But
sometimes one decays into a shower of crap, and one does something
rare. When this happens you're happy, because from the shower of
crap you can learn everything about one of the D's that decayed. And
since the total momentum is nearly zero, conservation of momentum
tells you the momentum of the D that decayed in the rare way as well.
With that information, and the final state of the rare decay, you can
very accurately reconstruct what happened. As usual, listening to an
experimental talk made me glad I'm in theory. What they do is really
hard :)

So there's lots going on here. I'll update tomorrow with the next
round of talks.

Tuesday, July 19, 2005

Update

Been a while since I posted an update, so I thought I'd check in. The physics blog world is abuzz with the creation of a new physics group blog cosmic variance which features Sean Carroll, Mark Trodden, JoAnne Hewett, Clifford Johnson, and Risa Wechsler. A nice mix of cosmology, particle phenomenology and string theory.

I've been busy calculating, and preparing for Lattice 2005, which is the big annual conference for lattice field theory. This year it's being held at Trinity College in Dublin. They have wireless around, so I should be able to liveblog at least some of the sessions. I'm also going to try to post nightly updates.

An amusing note: the built-in spell checker for Blogger flags "blog" and "liveblog" as spelling errors.

Friday, June 03, 2005

The most accurate theory we have

It's often said that Quantum Electrodynamics is the most accurate theory we have in physics. This is based on a few things: first, it agrees with a large number of experiments with good precision. It also reduces to standard classical electromagnetism in the classical limit, and classical E&M is a very well tested theory. Finally, there are a handful of things where the agreement between theory and experiment is truly spectacular. Of these, the crown jewels are the QED predictions of the magnetic moment anomalies (MMA) of the muon and the electron.

For example, the best current experimental value for the muon MMA is

amu(exp) = 11 659 208 (6) x 10^{-10}.

First off, it's amazing that this can be measured so precisely. What's even more amazing is that the theory prediction for this quantity is

amu(theory) = 11 659 187(8) x 10^{-10}.

The difference is

2.1(1.0) x 10^{-9}.

Now, it's not zero, and that might be meaningful, but it's far more likely that the theory part has a larger error than suspected. In particular, the dominant part of the error in the theory is the low energy QCD contribution. That's something that can, in principle, be computed using lattice QCD. But what's more remarkable than the small difference is the agreement: to a very high level of accuracy, the theory agrees with experiment. The same is true for the electron.

Obviously, to produce theoretical predictions like this requires a lot of work. This work started in the forties, when Schwinger computed the first approximation for the MMA. Since then, people have computed this quantity order by order in perturbation theory. And no person has done more than Tom Kinoshita to improve the prediction. Tom gave a talk last week on his work (with collaborators) in this field.

At the first order in perturbation theory, the exact solution ($$\alpha/(2\pi)$$) is easy to find; it's a standard exercise. As you go to higher and higher orders, though, the calculation quickly becomes very hard. The second order calculation was completed a few years after the first, and the third was many years after that. The final results for the fourth order calculation weren't finalized until last year.
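Just to put a number on that first term (a back-of-the-envelope check using a rounded value of the fine-structure constant, not the current best determination): the Schwinger term alone already reproduces the first three significant digits of the measured anomaly.

```python
import math

alpha = 1 / 137.036           # approximate fine-structure constant
print(alpha / (2 * math.pi))  # ~0.0011614, vs. a_mu(exp) ~ 0.0011659 quoted above
```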

Tom is still at it. With the fourth order calculation done, he and his collaborators have moved on to the fifth order electron MMA. A new generation of experiments should make this calculation necessary. For those in the know, the fifth order calculation requires the evaluation of around 12000 five-loop Feynman diagrams. Currently the only practical way to do this is to evaluate them numerically. Like my own work, much of this involves figuring out ways to automate as much of the computation as possible.

This doesn't have much to do with lattice QCD, but it's certainly inspirational to know that theory and experiment agree so well. And it's nice to know that when I'm sweating over a two-loop calculation, somebody else is dealing with five-loops.

Wednesday, May 18, 2005

Once a month post

Well, I've let blogging slip once again, but I figured I should check in at least once a month. Life is pretty busy, very busy in fact. Next month I'm off to Columbus, Ohio, for a group meeting. Following that, I'm going to Dublin at the end of July for the annual lattice QCD conference. Of course, I want to have lots to talk about for the lattice conference, so that will require a lot of work. Hence, no blogging.

Thursday, April 14, 2005

A bit about staggered quarks...

I really need to get into a more regular habit of posting things here. Of course, I've been busy, but it'd be nice to get back to a regular schedule.

I've been meaning to continue discussing various ways of simulating fermions in lattice QCD, so today I'll say something about staggered fermions. Way back when, I talked about the basic problem with fermions: if you take the Dirac equation and construct a "naive" approximation to it, you find that the theory you thought was describing a single quark is actually describing 16 identical copies of the quark. Each of these copies is known as a "taste". So we want to simulate one taste, but, at least naively, we get 16; that's the basic problem.

Now one way to get around this problem is to use something called Wilson fermions. I covered this in another post, so I won't say much about it; the basic idea is that you construct a "non-naive" approximation in a clever way. In this setup you still have 16 tastes, but they are no longer all identical. Fifteen of them get masses that are inversely proportional to the lattice spacing. The disadvantage here is that you maul something called "chiral symmetry", which bites you when you try to simulate with light quarks.

There is another approach you can use to reduce the number of tastes, called "staggering". It's called that because you can think of it as putting the four spin components of the Dirac spinor on different sites of your lattice. There is a somewhat easier way to understand this. The Dirac action looks something like this
$$
\sum_x \bar{\psi}(x) \left(\gamma_\mu D_\mu + m\right) \psi(x)
$$
Now \psi is a four-component spinor and (\gamma_\mu D_\mu + m) is some matrix that acts on it.
What would happen if we diagonalized this matrix in spin space?

If you know some quantum field theory you can do this exercise yourself; if not, take my word for it: when you diagonalise the matrix you get
$$
\sum_x \sum_{i=1}^{4} \chi_i^{*}(x) \left(f(x) D + m\right) \chi_i(x)
$$
where \chi_i is now a *one*-component field and f(x) is some function that depends on x. But notice that for each "i" in the sum, nothing changes. That is, we've produced four copies of the same thing. So what we'll do is just keep one. This is "staggered fermions". This procedure reduces the number of tastes from 16 to 4, which is better. There's another trick to reduce the number of tastes to 1, but I'll save that for another day.
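For the sceptical, here is a small numerical check of this diagonalisation (my own sketch of the standard spin transformation, using site-dependent matrices \Omega(x) = \gamma_1^{x_1}\gamma_2^{x_2}\gamma_3^{x_3}\gamma_4^{x_4}; the resulting signs play the role of the f(x) above):

```python
import itertools
import numpy as np
from numpy.linalg import matrix_power

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
gamma = [np.block([[0 * I2, -1j * s], [1j * s, 0 * I2]]) for s in (s1, s2, s3)]
gamma.append(np.block([[0 * I2, I2], [I2, 0 * I2]]))

def omega(x):
    m = np.eye(4, dtype=complex)
    for mu in range(4):
        m = m @ matrix_power(gamma[mu], x[mu] % 2)   # gamma_mu^2 = 1
    return m

ok = True
for x in itertools.product(range(2), repeat=4):      # one hypercube suffices
    for mu in range(4):
        xp = list(x)
        xp[mu] += 1
        phase = (-1) ** sum(x[:mu])                  # the staggered sign factor
        M = omega(x).conj().T @ gamma[mu] @ omega(xp)
        ok = ok and np.allclose(M, phase * np.eye(4))
print("spin-diagonalised:", ok)                      # True: spin components decouple
```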

Now why might we like staggered fermions? Well, it turns out, they're much faster than Wilson fermions. Way faster. What's the reason? The first reason is that we're working with one-component fields rather than four-component fields, so we save a factor of four right there. But the real savings come when thinking about chiral symmetry. Chiral symmetry is a symmetry between left- and right-handed fermions. For massless fermions it's exact; for small masses, it's violated in a small way.

No matter what the fermion mass is, though, chiral symmetry has the effect of "protecting" the fermion mass from additive shifts. In quantum field theory your particle properties get changed by the interactions. In a theory with chiral symmetry the mass can only be changed by multiplication by a constant. So if you start with some mass m0, the interactions can change it to Z*m0, but that's it. And Z is typically a number around 1. Now in a theory without chiral symmetry (like Wilson fermions) you can have an additive mass shift, so m0 might get changed to

Z*m0 + M1

where M1 can be large, and have either sign.

Now what does this have to do with the speed of lattice QCD simulations? Well, it is very important, because the cost of simulating the fermions is inversely proportional to the interaction mass. What can happen with Wilson fermions is that the interaction mass gets very small (if M1 gets close to -Z*m0). Then the amount of time you spend processing shoots up. This never happens with staggered quarks. What's worse, the additive bit M1 can actually make the interaction mass go negative! This is even worse, because negative mass fermions have properties you don't want. Again, this cannot happen with staggered quarks.
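Here's a toy illustration of that cost (a one-dimensional Laplacian plus a mass term standing in for the fermion matrix; the numbers are purely illustrative): the number of conjugate gradient iterations needed to invert the matrix grows rapidly as the mass term shrinks.

```python
import numpy as np

def cg_iterations(A, b, tol=1e-8, max_iter=10000):
    # plain conjugate gradient, counting iterations to convergence
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for it in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            return it
        p = r + (rs_new / rs) * p
        rs = rs_new
    return max_iter

n = 200
laplacian = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.random.default_rng(1).normal(size=n)
for m in (0.5, 0.1, 0.02, 0.005):
    print(m, cg_iterations(laplacian + m**2 * np.eye(n), b))
```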

It's not all sunshine though. Staggered quarks have a deep dark secret, which I will return to in another post.

Thursday, March 17, 2005

The Strong Coupling Constant

First off, before I get to the substance of the post, let me welcome Mark Trodden to the physics blog world. He's part of the even more elite "Central New York physics" blog world.

This post is going to be an outline of how one computes a value for the strong coupling constant ($\alpha_{s}$) using lattice QCD. Now, the strong coupling constant isn't really constant at all. It depends on two things: the regularization scheme you use, and the energy at which you determine it. The Particle Data Group always quotes values in the MSbar scheme (a particular type of dimensional regularization; don't worry if you don't know what it is) at the energy scale $\mu=M_{Z}=91$ GeV. So that's our goal, to get a number for $\alpha_{s}$ from lattice QCD. The world average, which includes an older lattice calculation, is $\alpha_{s}(M_{Z}) = 0.1187(20)$, where the number in brackets is the error on the last two digits.

How do we get this number from lattice QCD? Well, let's think about the inputs to a lattice QCD simulation: they are 5 quark masses, and the lattice spacing (the finite volume doesn't matter in this calculation). Now, in the language of quantum field theory, these are bare parameters; quantum effects will renormalize them, so the actual physical values that the simulation predicts will be different. For example, you might put 75 MeV in for the strange quark mass, but if you turn the crank and then look at a physical quantity which tells you m_{s}, you might find that it's 72.3 MeV. The same is true of the lattice spacing $a$. The spacing you put in, say $a=0.09$ fm ($1$ fm = $10^{-15}$ m), will get renormalized; we'll call the renormalized spacing $a'$.

Unlike quark masses, which are very hard to extract, it's pretty easy to extract the renormalized spacing. What you do is compute some mass differences of heavy mesons. The ideal case, that our group uses, is the bound state of a b and an anti-b quark, called an Upsilon. Like the hydrogen atom (or better, positronium) the Upsilon has a ground state, and a whole spectrum of excited states. In addition, because the b quark is so heavy, the dynamics of the system is basically non-relativistic. The latter fact means that we can compute masses of all the excited states fairly cheaply. Even better, we can compute the mass differences between the excited states. This is better because many systematic errors cancel in the differences.

The general procedure then is to start with gauge configurations generated by the MILC collaboration. These are configurations of gauge fields which have the non-perturbative effects of the light quarks "folded in" (or in lattice QCD jargon, they're unquenched). With these configurations we can determine the mass splittings in the Upsilon system, using our non-relativistic quark formalism. Of course, we don't get the actual mass differences out: we're working on a computer, which will only give us dimensionless numbers, so what we get is DM * a', that is, the mass difference multiplied by the renormalized lattice spacing.

Hopefully you can see where this is going: since we know DM * a', and Upsilon mesons are things you can actually make in the lab, we can use our calculation, together with a measurement of DM, to extract a'. We're really fortunate, because several groups (including CLEO here at Cornell) have measured DM very precisely. With that, we can get very precise values for a'.
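The arithmetic of that last step is simple enough to show explicitly (with invented numbers purely for illustration; these are not the actual lattice or CLEO values):

```python
hbar_c = 0.1973          # GeV * fm, converts inverse GeV to fm
DM_times_a = 0.257       # hypothetical simulated splitting in lattice units (DM * a')
DM_exp = 0.563           # hypothetical experimental splitting in GeV
a_prime = DM_times_a / DM_exp * hbar_c
print(a_prime)           # ~0.09 fm for these made-up numbers
```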

Now what does this have to do with the strong coupling constant? Remember, I said that the coupling constant depends on the scale at which we measure it; well, a' sets the scale for everything on the lattice. And now we know the scale.

The next step is to actually get a value for $\alpha$, which takes some doing. The first thing you do is take your non-perturbative gauge field configurations and compute something that you expect to be perturbative. A popular choice is the average value of the gauge fields around a 1x1 square on the lattice (which we'll call P). This is a very short distance thing, around 0.1 fm per side. Now remember, in QCD if something is short distance, we ought to be able to compute it in perturbation theory. So we fire up our perturbation theory codes and compute P.

My part in this calculation was computing P (and some other quantities) to second order, that is, computing $P_1$ and $P_2$ in the expansion,
$$
P = 1 + P_1 \alpha_{V} + P_2 \alpha_{V}^{2} + P_3 \alpha_{V}^{3} + \ldots
$$
In this expression $\alpha_{V}$ is the QCD coupling constant, evaluated in a lattice-regulated scheme (not the MSbar one we want, but close) and at the scale set by a' (which we know). Unfortunately, for really high precision, second order calculations are not enough, and through heroic efforts two coworkers (Howard Trottier and Quentin Mason) computed the third order coefficient $P_3$.

Now we have everything we need: we've determined P from our simulation, and we also know what it is in perturbation theory. With that we can solve for $\alpha_{V}$ at the scale a'! There are a couple of extra things we have to do after that: first we convert from the lattice-regulated coupling to the MSbar one we want, and then we run the scale up to $M_{Z}$. These steps require more perturbation theory at third order (Howard and Quentin had to do the former calculation; the latter (scale running) was already known), so they're also non-trivial.
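The solving step itself can be sketched in a few lines (with made-up values for P and its coefficients; these are not the numbers from the paper, and scipy is assumed to be available): given the simulated P and the perturbative coefficients, you just solve the truncated series for $\alpha_{V}$ numerically.

```python
from scipy.optimize import brentq

P_sim = 0.85                     # hypothetical simulation result for P
P1, P2, P3 = -1.0, 0.2, -0.05    # hypothetical perturbative coefficients

def residual(alpha):
    # truncated perturbative series minus the simulated value
    return 1 + P1 * alpha + P2 * alpha**2 + P3 * alpha**3 - P_sim

alpha_V = brentq(residual, 1e-6, 1.0)   # root in a physically sensible range
print(alpha_V)
```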

What's more, if you just do this using one lattice spacing a', and one short distance quantity (P) you don't get a very precise answer. One reason is that the perturbation theory is not very convergent. That is, contributions from the $P_4$ term (which we haven't computed) are not very small. There are two things we did to get around this problem. One is to run at three different lattice spacings. This allows one to estimate $P_4$ (and $P_5$) from fits, which helps control the error. The other is to use more than one short distance quantity. In the paper there are 28 different short distance quantities used. It turns out that the first trick, using multiple lattice spacings, is the one that really cuts down the error.

So what's the final result? We find
$$
\alpha_{s}(M_{Z}) = 0.1177(13)
$$
which is more accurate than the world average, and is the single most accurate determination of $\alpha_{s}$.

Of all lattice QCD calculations, I think this one is the most elegant. It nicely mixes perturbative and non-perturbative physics in a non-trivial way. To quote the paper, this calculation "demonstrates that the QCD of confinement is the same theory as the QCD of jets; lattice QCD is full QCD, encompassing both its perturbative and nonperturbative aspects."

Tuesday, March 15, 2005

Physics at the south pole

I learned about two really cool experiments today, which I wanted to highlight. I don't follow experimental physics nearly as well as I should, given that I agree with what David Politzer said in his Nobel lecture: "I must say that I do regard theoretical physics as a fundamentally parasitic profession, living off the labors of the real physicists." In general, experiments are what drives the field. Certainly areas with active experimental programs are usually healthier than those without. Condensed matter physics is one example; another is neutrino physics, which is what I'm going to post about.

The colloquium speaker today was Buford Price from Berkeley. Dr. Price has many interests; it would be impossible to sum up his whole talk here, even if I could have taken notes at the pace he was going. Instead, I'll say a few words about the amazing neutrino experiment that he was involved in.

Neutrinos are very weakly interacting particles; they can travel vast distances without interacting with anything. They are copiously produced in many astronomical objects that we are interested in studying, from the "mundane" (supernovae) to the more exotic (gamma ray bursters). Many groups are now engaged in studying neutrinos produced in all sorts of situations. But traditional detectors have trouble studying the highest energy neutrinos. Enter AMANDA.

The AMANDA-II Telescope consists of 19 strings of optical modules buried between 1 and 1.5 miles beneath the snow surface of the geographic south pole. The total number of OMs in the array is 680, although only about 600 are fully operational at any given moment. Cherenkov light is converted to electrical signals by the photomultiplier tubes within each OM.


Buried under the south pole, experimentalists have turned ultra-pure ice into the world's largest telescope. Because of its size, AMANDA is able to see neutrinos that have much higher energies than more modest, man-made detectors can. So far, they've only seen an isotropic distribution of high energy neutrinos; no point source can be resolved.

This of course raises the question: What do you do when your giant telescope made of ice isn't sensitive enough? Yup, you guessed it, you build a bigger, more sensitive telescope. This one is called IceCube. The idea is basically the same as AMANDA: you drill holes in the ice, two or three kilometers down, and then you lower your detectors down on strings. What's different is the scale; IceCube occupies a cubic kilometer of space. To put that in perspective, if you stacked up every human being living or dead (all of them) they'd only take up about a third of it. With its size IceCube will be sensitive to neutrinos with PeV energies.

Monday, March 07, 2005

Hans Bethe, 1906-2005

Hans Bethe, one of the twentieth century's greatest theoretical physicists, passed away peacefully last night, at the age of ninety-eight. For Cornell, this is a very sad day. Bethe was a very important part of the Cornell community. He basically put Cornell on the map as far as physics goes. Today, thanks to his work, Cornell is a world-leading physics school, with strong programs in virtually all areas of physics.

The strength and diversity of the department here is a direct reflection of Hans Bethe's strength and diversity as a theorist. John Bahcall's quote is appropriate: "If you know his work, you're inclined to think that he is many different people, all of whom have gotten together and formed a conspiracy to sign their papers under the same name."

In an era of increasing specialization, Bethe worked in many fields. For example, he won the Nobel Prize for his work in stellar astrophysics: basically, he worked out why stars shine. Many quantum field theorists will know about another important piece of work, his back-of-the-envelope calculation of the Lamb shift. This was the first calculation which showed that renormalization might work, and it spurred the development of renormalization theory by Schwinger and Feynman. He did pioneering work in condensed matter physics, quantum mechanics, and astrophysics, staying active well into his nineties.

It would be totally impossible to summarize his career here; Bethe's contributions to modern physics have been too wide. Indeed his biographer, Sam Schweber, has been working for years on a multi-volume biography. For those who are interested in learning a bit more about Hans Bethe, Cornell has a webpage with some biographical information, and videos. The video "I can do that", linked from the "reading" page, offers a nice overview.

Thursday, March 03, 2005

A crisis in particle theory?

Peter Woit is worried that particle theory is in trouble, since very few theory papers have made the top 50 most cited list since 1999. According to Peter,

Even more so than last year, this data shows that particle theory and string theory flat-lined around 1999, with a historically unprecedented lack of much in the way of new ideas ever since. Among the top 50 papers, the only particle theory ones written since 1999 are a paper about pentaquarks by Jaffe and Wilczek from 2003 at number 20, the KKLT flux vacua paper at number 29 and a 2002 paper on pp waves at number 32.

How many more years of this will it take before leaders of the particle theory community are willing to publicly admit that there's a problem and start a discussion about what can be done about it?

It would seem Peter is living in a different world than me. The problem isn't new ideas, it's the ability to test them which is lacking. Particle theory, particularly of the phenomenological bent, is actually pretty active. Just off the top of my head, here are three very new, very active ideas people have had in particle physics in the last couple of years

1) Split supersymmetry
2) Little Higgs models
3) Warped fermions

Search the arXiv for them and you'll get lots of results. Of course, without an experiment to tell us which new idea is right (or, more likely, that they're all wrong) none of them will get hundreds of citations, but that doesn't imply a lack of new ideas. In fact, it's quite the opposite: there are too many new ideas!

Now assume the LHC turns on and finds solid evidence for option number 2, a little Higgs model; then you can be sure that the original little Higgs papers will rocket up the citation list. Until the LHC turns on, though, we're stuck in a "propose a model, explore some of its consequences, move on" sort of mode. That's precisely the sort of thing that leads to lots of new ideas, and not a lot of citations.

My advice to those who think particle theory is in trouble: read hep-ph on a regular basis :) Of course, particle theory could well be in trouble shortly, it all depends on what the LHC sees, but for now, I'd say it's really quite active. Certainly more active than 5/10 years ago, when there was pretty much SUSY, in the form of the MSSM, and nothing else.

Thursday, January 20, 2005

Self Promotion

Obviously my posting has fallen off somewhat over the last while. Hopefully at some point I'll be less busy, and can write some more about dynamical fermions in lattice QCD. Until then, you can read this short note I wrote about the effect of heavy dynamical quarks on lattice QCD simulations. It turns out that unlike light quarks, it's easy to include heavy dynamical quarks. This was well known before, but nobody had sat down and compute the effects exactly.