## Monday, November 29, 2004

### Wilson Quarks

This is the second post about fermions on the lattice. In this post we're going to look at how the 16 "tastes" of lattice fermions show up, and one way that we can correct for them. The nice thing about this calculation is that we don't really need to worry about the gauge fields: we'll work in the free quark case, but everything applies to the interacting case as well.

In Euclidean space we can write the Dirac Action as
$$S = \int d^4x\, \bar{\psi} \left[ \sum_{\mu} \gamma_{\mu} \partial_{\mu} + m \right] \psi$$
If we transform to momentum space using
$$\psi(x) = \int \frac{d^4p}{(2\pi)^4}\, e^{ipx}\, \psi(p)$$
we'll find the fermion propagator
$$S(p) = \frac{-i \sum_{\mu} \gamma_{\mu} p_{\mu} + m}{p^2 + m^2}$$
Of crucial importance is the presence of a single pole at $p^{2} = -m^{2}$. This tells us that the propagator describes a single particle. The pole condition only fixes one component of $p$; for example, we could take $p_{4} = im$.

Now, let's put this theory on the lattice, discretizing the derivative in the most naive way we can,
$$\partial_{\mu} \psi(x) \to \frac{\psi(x + a\hat{\mu}) - \psi(x - a\hat{\mu})}{2a}$$
where $a$ is the lattice spacing. Our conversion to momentum space is now
$$\psi(x) = \sum_{p} e^{ipx}\, \psi(p)$$
We're in a finite box, so the integration over all momenta has become a sum. With this expansion it's easy to read off the propagator
$$S_{L}(p) = \frac{-i \sum_{\mu} \gamma_{\mu} \bar{p}_{\mu} + ma}{\bar{p}^2 + (ma)^2}$$
where
$$\bar{p}_{\mu} = \sin(p_{\mu} a)$$
and
$$\bar{p}^2 = \sum_{\mu} \bar{p}_{\mu}^{2} = \sum_{\mu} \sin^2(p_{\mu} a)$$
This *looks* okay. We've just replaced the momenta in the continuum propagator with some trig functions. In the small $a$ limit, they go over into the correct factors, so we should be fine. But we're not.

We have poles in our lattice propagators at
$$\bar{p}^{2} = -(ma)^2$$
Let's try $p_{4}$ being the only non-zero momentum; then our condition is
$$\sin(p_{4}a) = i\, ma$$
or
$$p_{4}a = i\, \mathrm{arcsinh}(ma)$$
Again, this looks okay: for small $a$, $\mathrm{arcsinh}(ma) \approx ma$, so we recover our continuum condition. But we *could* just as easily have taken
$$p_{4}a = \pi - i\, \mathrm{arcsinh}(ma)$$
Since $\sin(\pi - x) = \sin(x)$ this is equivalent to the original situation. We've found two poles in the fermion propagator. We can carry this further, noting that each spatial momentum can sit at $p_{i}a = 0$ or $p_{i}a = \pi$. So overall there are $2^{4} = 16$ different combinations of momenta that produce poles in the propagator. This propagator describes not one, but 16 different types (tastes) of quarks.
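The two pole locations are easy to check numerically. Here's a minimal sketch in lattice units ($a = 1$), with an illustrative mass $ma = 0.1$; the function name and the mass value are my own choices, not from the post:

```python
import cmath

ma = 0.1  # bare quark mass in lattice units (a = 1), chosen for illustration

def naive_denominator(p4):
    """Denominator of the naive lattice propagator, spatial momenta set to zero."""
    return cmath.sin(p4) ** 2 + ma ** 2

pole_a = 1j * cmath.asinh(ma)             # the "physical" pole
pole_b = cmath.pi - 1j * cmath.asinh(ma)  # the doubler at the edge of the Brillouin zone

print(abs(naive_denominator(pole_a)))  # essentially zero
print(abs(naive_denominator(pole_b)))  # essentially zero: a second particle!
```

Both locations kill the denominator, which is the doubling problem in one screenful.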

This is the famous fermion doubling problem. As we discussed in the last post, there are three primary ways around it. The first (and oldest) is due to Wilson. The idea is simple: add some term to the action which changes the propagator so that it has only one pole. Wilson added the following term
$$- \frac{a}{2} \Delta$$
Acting on a lattice field we have
$$\Delta \psi(x) = \frac{1}{a^{2}} \sum_{\mu} \left[ \psi(x + a\hat{\mu}) + \psi(x - a\hat{\mu}) - 2 \psi(x) \right]$$
Notice that this term is explicitly suppressed by a power of the spacing, so it shouldn't cause problems in the continuum limit.

This new term does have a big effect on the fermion propagator. The denominator of the propagator becomes
$$\bar{p}^2 + \left( ma + \tfrac{1}{2} \hat{p}^2 \right)^2$$
where
$$\hat{p}^2 = \sum_{\mu} 4 \sin^2(p_{\mu} a / 2)$$
Setting the spatial momenta to zero, and solving for the zeros of the denominator, we find
$$\sin^2(p_{4} a) = -\left[ ma + 2 \sin^2(p_{4} a / 2) \right]^2$$
If we put $p_{4}a = i\, \mathrm{arcsinh}(ma)$ we'll find that as $a \to 0$ this equation is satisfied (i.e. we'll find $m = m$), but for $p_{4}a = \pi - i\, \mathrm{arcsinh}(ma)$ the new term produces a contribution which destroys the solution. So this equation has only *one* solution in the continuum limit, not two. The same is true for the spatial momenta.
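This too is easy to verify numerically. In the sketch below (lattice units, illustrative $ma = 0.1$, all my own choices) one can check that the physical pole of the Wilson propagator sits exactly at $p_{4} = i \log(1 + ma)$, which goes over to $p_{4} = im$ as $ma \to 0$, while at the would-be doubler location the denominator stays order one:

```python
import cmath
import math

ma = 0.1  # bare quark mass in lattice units; illustrative value

def wilson_denominator(p4):
    """Denominator of the Wilson propagator, spatial momenta set to zero."""
    return cmath.sin(p4) ** 2 + (ma + 2 * cmath.sin(p4 / 2) ** 2) ** 2

# The physical pole is shifted slightly, to p4 = i*log(1 + ma) ~ i*ma:
E = math.log(1 + ma)
print(abs(wilson_denominator(1j * E)))   # essentially zero: still a pole

# The would-be doubler location no longer gives a pole:
doubler = cmath.pi - 1j * cmath.asinh(ma)
print(abs(wilson_denominator(doubler)))  # order one: no pole here
```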

In more concrete terms what is happening is that the Wilson term gives the doubler fermions a mass proportional to the inverse lattice spacing
$$M_{d} \sim \frac{1}{a}$$
so in the continuum limit they become infinitely massive and decouple.

The Wilson solution to the doubling problem is easy to understand, implement, and compute with. But it suffers from a number of flaws. The first is that the Wilson term breaks chiral symmetry. Without chiral symmetry the quark mass is not protected from additive mass renormalizations. This makes it difficult to simulate at small quark masses: you need to perform large subtractions to keep the quark mass you get out (the renormalized mass) close to the mass you put in (the bare mass). Furthermore, these large additive renormalizations can push you into a region where the renormalized mass is negative. When this happens you have to throw out the run. This problem (jargon: "exceptional configurations") slows Wilson quark simulations down by a large factor. Another problem is that the Wilson term introduces discretization errors linear in the lattice spacing $a$, rather than quadratic. These can be very large.

For these reasons, people have looked for other ways of simulating quarks. In the next post we'll look at Kogut-Susskind (or Staggered) fermions.

## Monday, November 22, 2004

### In case you have nothing to do

Here [1.5 Mb pdf] are the slides for a talk I gave Friday at Syracuse University. It went pretty well, with lots of questions, which is always good (since I could answer them). The talk ranged pretty widely; I tried to cover our whole lattice field theory program, rather than bore people with technical details of lattice perturbation theory. I think it was the right approach for a non-lattice field theory crowd.

I like to go give talks to other groups. It is a good way of seeing how your ideas play with people who don't work with you on a day by day basis.

## Thursday, November 18, 2004

### Fermions on the lattice, part one of four

This is a "fessing up" post, before I talk about all the great things that lattice QCD can do. I'm going to talk in general about fermions on the lattice, and discuss the ways in which you can simulate them. This will take us down three roads, and at the end we'll find pros and cons to each approach. I'll focus on an outstanding issue with staggered fermions at the end.

The formulation of fermion fields on the lattice has been a huge problem. Compared to Wilson's elegant formulation of gauge fields on the lattice, fermions look downright ugly. To top it off, they're slow, so it's hard to deal with them. We'll look briefly at the speed problem first, and then talk about why lattice fermions are ugly.

Fermion fields in quantum field theory are represented by anti-commuting numbers, known as Grassmann numbers. This is fine when you're working with pencil and paper, but it's impossible to put a Grassmann number into a computer directly. Computers are good for reals, integers and complex numbers (really two reals), but they can't do Grassmann numbers.

This is not quite as big a problem as it sounds. The reason is that every fermion action people use can be written as
$$S_{F} = \bar{\psi} M[U] \psi$$
where $M[U]$ is a complex matrix functional of the gauge fields. This is happy, since we can do path integrals of this form,
$$\int D\bar{\psi}\, D\psi\, e^{-S_{F}} = \det(M[U])$$
and we're done! We now have fermion fields on the lattice. The trouble is that we need to evaluate the determinant. And that is mad slow. So slow, in fact, that until the mid-nineties it was essentially impossible. This led people to just drop it, using something called the quenched approximation. This causes uncontrollable ten or twenty percent errors, but it makes things faster. Physically, it amounts to neglecting dynamical fermions in your simulation.
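To make the structure concrete, here is a toy sketch (my own construction, not a real lattice code): a one-dimensional "fermion matrix" with a mass term and a central-difference hopping term, whose determinant is what the Grassmann integral produces. A dense determinant costs $O(N^3)$ operations, and a real lattice Dirac operator has millions of rows, which is the expense being described:

```python
import numpy as np

# Toy "fermion matrix" on a 1-D periodic lattice of N sites:
# M = m*1 + (1/2)(shift_forward - shift_backward).
# This is only a sketch of the structure of M[U]; a real lattice Dirac
# operator also carries colour, spin, and the gauge links U.
N = 8
m = 0.5
M = m * np.eye(N)
for i in range(N):
    M[i, (i + 1) % N] += 0.5
    M[i, (i - 1) % N] -= 0.5

# The Grassmann integral over psi-bar and psi yields det(M).  Computing
# this at every step of a simulation is the expensive part.
sign, logdet = np.linalg.slogdet(M)
print(sign, logdet)
```

For this circulant toy matrix the eigenvalues are $m + i\sin(2\pi k/N)$, so the determinant is real and positive, as the output confirms.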

As people started to remove the quenched approximation, another problem came up: evaluating the determinant gets slower as you go to smaller and smaller quark masses. This forces you to work at unphysical values for the up and down quark masses. That's the situation today: simulations are forced to work at masses larger than the physical up and down quark masses, and you then have to extrapolate to the physical limit (which is near $m = 0$).

That's the present situation. It is possible to simulate with dynamical fermions, but, for the most part, you have to simulate at unphysical masses.

Now why are lattice fermions ugly? Well, it turns out that if you take the Dirac action
$$S = \bar{\psi} ( \gamma \cdot D + m) \psi$$
and naively discretize it (by replacing D with a covariant difference operator) you get an action which describes 16 degenerate types of fermions. These degenerate types are called "tastes". So the naive discretization of lattice fermions induces 16 tastes. Obviously we don't want to simulate 16 degenerate quarks, we want to simulate one quark, so we need to figure out some way of getting rid of the other 15 tastes. In the next installment we'll see how the extra tastes show up, and how to remove them by adding a Wilson term to the action.

## Monday, November 08, 2004

### The scale of things

This is the final installment in this series. We've been seeing how to make perturbation theory work using a lattice cutoff. The main point so far has been that the bare lattice coupling gets large renormalizations from lattice tadpole diagrams. This makes it a very poor expansion parameter for perturbative series. We can correct for this by using a "natural" coupling; in practice we use the "V" coupling, defined through the static quark potential. We could also use the standard MSbar coupling, but that isn't really a "lattice thing".

There's one final thing we need to do to make the perturbation series as accurate as possible. That is to set the scale. Remember, the quantities we compute perturbatively on the lattice are all short distance, so we expect that they will be dominated by momentum scales of the order of the cutoff $\pi/a$. But they're not *zero-distance* quantities. A 1x1 square is still a finite-sized object, so the scale won't be exactly $\pi/a$. How do we determine the scale?

Let's consider what we do when we calculate something to one loop. We'll get a loop integral over some integrand
$$\int dq f(q)$$
times the coupling constant at some scale
$$\alpha_{V}(q^*)$$
I've used the "V" scheme here, and called the optimal scale we're looking for $q^*$. Remember that the QCD coupling varies with $q$. This suggests that the ideal scale might be found by averaging the coupling, weighted by the integrand $f(q)$. That is,
$$\alpha_{V}(q^*) \int dq\, f(q) = \int dq\, \alpha_{V}(q)\, f(q)$$
This would seem to be a natural definition.
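Carrying this averaging through at lowest order (as the rest of this post does) turns it into an $f$-weighted average of $\log(q^2)$. Here's a small numerical sketch of what that produces; the integrand $f(q)$ below is purely hypothetical, just a smooth one-loop-like weight of my own choosing:

```python
import numpy as np

# Hypothetical one-loop integrand; any smooth positive weight will do
# for the purposes of this sketch.
def f(q):
    return q**3 * np.exp(-q)

q = np.linspace(1e-6, 50.0, 400_001)
w = f(q)

# log(q*^2) = f-weighted average of log(q^2)
log_qstar_sq = np.sum(np.log(q**2) * w) / np.sum(w)
qstar = np.exp(0.5 * log_qstar_sq)
print(qstar)  # ~3.5 for this toy integrand
```

The point of the exercise: $q^*$ comes out wherever the integrand actually has its support, not at some guessed cutoff value.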

The trouble is that this definition, if naively applied, doesn't work. The reason is that the running coupling is usually written as
$$\alpha_{V}(q) = \frac{\alpha_{V}(\mu)}{1 - \beta\log(q^2/\mu^2) \alpha_{V}(\mu)}$$
and this function has a pole in it. A couple of points: $\beta$ is the one-loop beta function (for those of you who don't know what that is, just imagine it's a negative constant), and $\mu$ is the renormalization scale; it'll go away soon. If I jam this into the integral on the right-hand side, it'll blow up. There's a simple, and sensible, fix to this: expand the running coupling to the correct order in $\alpha_{V}(\mu)$ on both sides, and solve. I'll spare you the algebra. If you take
$$\alpha_{V}(q) = \alpha_{V}(\mu) + \beta \log(q^2/\mu^{2}) \alpha_{V}^{2}(\mu)$$
and the same for $\alpha_{V}(q^*)$, and jam them into the defining equation for $q^*$, you'll find (trust me!)
$$\log(q^{*2}) = \frac{\int dq\, \log(q^{2})\, f(q)}{\int dq\, f(q)}$$
Voila! This is a sensible, convergent definition. And that's how you set the scale.

So let's review how we do lattice perturbation theory.

1) Use tadpole improved actions and operators. This removes the large perturbative coefficients that come from lattice tadpole diagrams.

2) Use a sensible coupling $\alpha_{V}$. This coupling is not too small, so the convergence of the perturbation series is pretty good.

3) Use a correctly set scale. The quantities we're going to compute are short distance, not zero distance, so the scale is not $\pi/a$.

Just to be clear, none of this is new. It's all in a famous paper by Lepage and Mackenzie (PRD 48, 2250, 1993), so if you want to learn more, and see examples of how it all works in practice, go look there.

In the next post (which I promise will come sooner than this one did) I'll talk a little bit about some of the non-perturbative results that our group has calculated.

## Monday, November 01, 2004

### The life of a lattice tadpole

At the end of the last post we saw that the bare coupling in lattice gauge theory is a bad coupling to do a perturbative expansion in. If we use a good coupling, like $\alpha_{V}$, we will find a much more convergent perturbative series. This is crucially important if we want to determine things like improvement coefficients to good accuracy. An open question at the end of the last post was *why* the bare lattice coupling is bad; that's the subject of this post.

To understand this, we need to understand how gauge fields are represented on the lattice. In lattice QCD, the matter fields are assigned to the sites of the lattice, and the gauge fields "live" on the links. This is very much in keeping with the geometric picture of gauge theory, where you have a parallel transporter ($U$) which allows you to compare matter fields at two different points.
That is,
$$\psi(x + a\hat{\mu}) = U(x + a\hat{\mu}, x)\, \psi(x)$$
In continuum gauge theory we have
$$U(x + a\hat{\mu}, x) = P \exp\left( ig \int_{x}^{x + a\hat{\mu}} dz^{\nu} A_{\nu}(z) \right)$$
Here $P$ stands for path ordering. This is actually much simpler on the lattice. On the lattice, there is a "shortest path", namely one link. So the parallel transporter simplifies to
$$U(x + a\hat{\mu}, x) = U_{\mu}(x) = \exp(i g a A_{\mu}(x))$$
These parallel transporters (or link fields) are what one builds lattice actions out of. For example, a simple forward covariant derivative on a fermion field might be
$$D_{\mu} \psi(x) = U_{\mu}(x)\, \psi(x + a\hat{\mu}) - \psi(x)$$
Notice that the connection between the link field $U$ and the gauge field $A$ is non-linear. This is where the problems come in.

Consider the expectation value of a single link field $\langle U \rangle$. Expanding the expression for $U$ we find
$$\langle U \rangle = 1 - 2\pi\, \alpha(\pi/a)\, a^{2} \langle A_{\mu}(x) A_{\mu}(x) \rangle + \cdots$$
I've used $\langle A \rangle = 0$, and evaluated the strong coupling at the cutoff $\pi/a$. This looks fine: in the formal continuum limit ($a \to 0$) the lattice corrections vanish. But all is not well! There's a dangerous beast lurking in the loop integral $\langle AA \rangle$. Let's look at what it is.

The expectation value $\langle A A \rangle$ can be written as the integral over the momentum space gluon propagator
$$\langle A A \rangle \sim \int_{0}^{2\pi/a} \frac{d^{4}q}{\hat{q}^{2}}$$
where
$$\hat{q}^{2} = \frac{4}{a^2} \sum_{\mu=1}^{4} \sin^2(q_{\mu} a / 2)$$
In the limit where $a$ is small, we can replace $\hat{q}^{2}$ by $q^{2}$, and we find that the integral is
$$\langle A A \rangle \sim \frac{1}{a^{2}} + O(1)$$
This is the disaster, because when we substitute this back into our expression for the expectation value of $U$ we find
$$\langle U \rangle = 1 - 2\pi\, \alpha(\pi/a)\, a^{2} \left( \frac{1}{a^{2}} + O(1) \right) + \cdots$$
The $1/a^{2}$ from the loop integral cancels the $a^{2}$ from the expansion of the exponential, leaving a lattice artifact suppressed only by the coupling constant, rather than by the lattice spacing. The coupling constant is small ($\approx 0.2$) but not really small (like $\alpha a^{2} \approx 0.01$).
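The $1/a^2$ behaviour of the tadpole loop integral is easy to check with a crude grid estimate (my own toy numerics, with an arbitrary 12 points per axis):

```python
import numpy as np

def tadpole_integral(a, n=12):
    """Crude midpoint-rule estimate of the 4-d integral of 1/qhat^2 over
    the Brillouin zone [-pi/a, pi/a]^4 (midpoints avoid the q = 0 pole)."""
    edges = np.linspace(-np.pi / a, np.pi / a, n + 1)
    q = 0.5 * (edges[1:] + edges[:-1])
    dq = edges[1] - edges[0]
    q1, q2, q3, q4 = np.meshgrid(q, q, q, q, indexing="ij")
    qhat2 = (4 / a**2) * (np.sin(q1 * a / 2)**2 + np.sin(q2 * a / 2)**2
                          + np.sin(q3 * a / 2)**2 + np.sin(q4 * a / 2)**2)
    return np.sum(1.0 / qhat2) * dq**4

# Halving the spacing quadruples the integral: <AA> ~ 1/a^2
I1 = tadpole_integral(1.0)
I2 = tadpole_integral(0.5)
print(I1, I2)
```

Rescaling $q \to q/a$ shows the same thing analytically; the grid estimate just makes the scaling visible without any algebra.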

It is factors like these (called gluon tadpoles) which spoil lattice perturbation theory in the bare coupling. The $\alpha_{V}$ coupling doesn't suffer from these corrections because tadpole effects come in at second order in $\alpha$. It would be nice to have a way to systematically account for these effects. This method, called tadpole improvement (or mean-field improvement), was proposed by Lepage and Mackenzie, and is now in wide use.

The idea is to use link fields that have the tadpole effects largely canceled out of them. To do this you take all the link fields $U$ and multiply and divide by some average link field $u$. That is,
$$U \to u U/u = u \tilde{U}$$
The new fields $\tilde{U}$ are much closer to their continuum values, since dividing by the average link cancels out most of the tadpole effects. The factors of $u$ in the numerator can be absorbed into couplings and masses. When combined with a good coupling (such as $\alpha_{V}$), the agreement between perturbation theory and non-perturbative simulations (of short distance quantities) is much better.
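A toy $U(1)$ version of the idea (my own sketch, with Gaussian phase fluctuations standing in for the gauge field): even though the "gauge field" averages to zero, the mean link sits well below 1, and dividing it out restores the links to their continuum value on average.

```python
import numpy as np

rng = np.random.default_rng(1)

# U(1) toy model: link "fields" are pure phases with Gaussian fluctuations.
# Even though <theta> = 0, the mean link <U> sits well below 1: the same
# mechanism by which tadpoles pull lattice operators away from their
# continuum values.
theta = rng.normal(0.0, 0.5, size=100_000)
U = np.exp(1j * theta)

u = np.abs(U.mean())       # "mean link" u, the tadpole factor
U_tilde = U / u            # tadpole-improved links
improved = np.abs(U_tilde.mean())

print(u)         # ~exp(-0.125) = 0.88, noticeably below 1
print(improved)  # ~1: improved links sit at their continuum value on average
```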

There is one last thing one can do to improve the perturbative series, namely set the scale of the coupling in a more sensible way. That will be the subject of my next post.

## Tuesday, October 26, 2004

Let's pick up where we left off. We saw in the last post that lattice perturbation theory, when applied naively, doesn't work. That is, one and two loop estimates of short distance quantities don't agree with the non-perturbatively measured values. There are three reasons for this. The first is the suitability of the bare lattice coupling $\alpha_0$ as a perturbative expansion parameter.

Forget about lattice perturbation theory for a minute. Just imagine we've computed something in standard QCD perturbation theory, as a series in the MSbar coupling
$$F = f_0 + f_1 \alpha + f_2 \alpha^2 + \cdots$$
We'll assume that this series is perfectly convergent, with each coefficient of order 1. Taking the coefficients to be exactly one, and using $\alpha = 0.1$, we find
$$F \approx 1.11 + O(0.001)$$
We have a good series. Notice the one loop estimate would have been 1.1.

Now, what if we were perverse, and did the following
$$\alpha = \alpha_b ( 1 + 1000 \alpha_b)$$
Remember, in QCD the coupling constant is scheme dependent, so there's some scheme in which this is true. Solving this with $\alpha = 0.1$ shows that $\alpha_b \approx 0.0095$, so we've made the coupling really small.

In terms of our new coupling we have
$$F = f_0 + f_1 \alpha_b + (f_2 + 1000 f_1) \alpha_b^2 + \cdots$$
The series is still formally correct, but we've made the two loop contribution huge, and the one loop contribution tiny. So if we had only had the one loop contribution at hand, we'd get
$$F \approx 1.0095$$
Likewise, even with the two loop result, we'll still get the wrong answer because the three loop term is going to be messed up. What all this illustrates is that it's important to pick a coupling that's not abnormally small. If it is, the convergence of the perturbation series is very slow.
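The whole story fits in a few lines of arithmetic. This sketch uses the series of the text with all $f_i = 1$ and the perverse coupling relation above:

```python
# Compare partial sums of the same series in the "good" coupling alpha and
# the perverse coupling alpha_b defined by alpha = alpha_b (1 + 1000 alpha_b).
alpha = 0.1

# invert alpha = alpha_b + 1000 alpha_b^2 via the quadratic formula
alpha_b = (-1 + (1 + 4000 * alpha) ** 0.5) / 2000

F_good_2loop = 1 + alpha + alpha**2                   # 1.11, converging nicely
F_bad_1loop = 1 + alpha_b                             # ~1.0095, badly off
F_bad_2loop = 1 + alpha_b + (1 + 1000) * alpha_b**2   # two-loop coefficient is f2 + 1000*f1

print(F_good_2loop, F_bad_1loop, F_bad_2loop)
```

Note that the two-loop sum in the bad coupling only crawls back to about 1.10 (the *one*-loop answer in the good coupling): the huge induced coefficients push the real convergence one order further out at every step.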

This is exactly what happens in lattice perturbation theory. The bare lattice coupling is quite small (the 1000 in the relationship above is more like 10, but it's still enough to screw things up), so perturbation theory really doesn't work well. The solution is to find a different coupling constant, and work with that.

There are any number of choices one could make. For example, you could use the MSbar coupling (which is good), but it makes more sense to use a coupling defined in some "latticey" sense. The coupling Lepage and Mackenzie proposed using is one defined from the short distance static quark potential $\alpha_V$. This coupling is defined such that the short distance part of the potential (the Coulomb part) is given in momentum space by
$$V(q) = -\frac{4}{3}\, \frac{4\pi\, \alpha_V(q)}{q^2}$$
With this definition you can derive the connection between the bare coupling $\alpha_0$ and $\alpha_V$ by computing V(q) as an expansion in the former, then demanding it has the form given above. Quantities expressed in $\alpha_V$ tend to agree with non-perturbative data much better than those expressed in terms of the bare coupling.

One thing that we'd like to understand is the origin of the large numbers in the connection between the bare coupling and the good (V) coupling. That is, what could give rise to the factor of 1000 in the example above, or the factor of 10 that occurs in practice? That'll be the subject of the next post, on tadpole diagrams.

UPDATE: Well, the cool "post by email" thing is nice, except the formatting gets all messed up. Here's hoping this is better.

## Monday, October 25, 2004

I'm still getting used to doing this, and I think I prefer the Blogger interface/experience to LiveJournal. It's still not a MathML-enabled blog, but I really don't have the time to set up a proper server, etc, yet. So for now, I'm sticking with this.

In the last post I talked about determining the strong coupling constant from lattice QCD. You'll recall that I talked about having to use a special coupling, determined from the perturbative static quark potential. I also mentioned that the perturbative scale for the average plaquette was not $\pi/a$, but was instead some scale $q^{*}$. The next four posts will explain those cryptic comments. They will basically be a summary of a famous 1992 paper by Lepage and Mackenzie, in which they saved lattice perturbation theory.

We'll start by introducing the problem.

The procedure I outlined for determining the strong coupling constant could also be run in reverse. That is, I start out knowing the coupling constant at $M_{Z}$, run it down to the lattice scale $\pi/a$, and convert it to the bare lattice coupling. Then I use that to compute the second order value of the average plaquette. I get some number from this, X. Now I fire up my Monte-Carlo simulation, compute the average plaquette that way, and get some number Y. The trouble is X and Y are not equal. Even if I allow for an error coming from third order perturbation theory, it's way, way off.

This is a disaster. It means that perturbation theory on the lattice is suspect. You'll recall that the coefficients in improved actions were to be determined perturbatively, and using improved actions was crucial in error reduction. If lattice perturbation theory fails, we can no longer do this, and we'll be forced to wait many years until computers get faster. We need to figure out what is wrong with lattice perturbation theory, and, if possible, fix it.

The next post will talk about problem #1, a bad choice of coupling constant.

## Thursday, October 21, 2004

### REPOST: Determining the strong coupling constant with lattice QCD

This is a repost from Matthew's old blog.

One of the most interesting calculations in Lattice QCD is the determination of the strong coupling constant. This calculation also nicely demonstrates the need for perturbation theory in lattice QCD in a context outside of improving actions, which I talked about in previous postings.

Okay, so what do we want to do? Well, we want to determine the strong coupling constant \alpha renormalized in the MS-bar scheme and at the scale set by the Z mass (M_Z ~ 90 GeV). So we fire up our computers, and get to work. The first thing we want to do is a non-perturbative simulation of QCD, using lattice Monte-Carlo methods. In order to do this, we need to tune 5 input parameters in our simulation.

1) The light quark mass (we take the up and down quarks to have the same mass)

2) The strange quark mass

3) The charm quark mass

4) The bottom quark mass

5) The lattice spacing

To do this we proceed in exactly the same way as any other field theory. We pick 5 measurements, and tune our inputs (the “bare” quantities) until the 5 results we get agree with the 5 measurements. In our case, we’ll use the pion mass to fix the light quark mass and the K mass for the strange. For the other three we use some combination of spin splittings in heavy quark bound states. We don’t use meson masses, as they are more sensitive to lattice errors. The important point here is that we have 5 measured quantities, and we tune the inputs to agree with them. After that, no other input from experiment is needed. That is, this is a first principles QCD calculation.

With our parameters tuned, we run the simulation, go away for some period of time (a long time if we are using dynamical fermions), and finally get our result. The result of a Monte-Carlo simulation is a set of gauge fields {A} on which we can measure quantities. There are lots of quantities we can measure; for example, the self energy of a static quark sitting at the spatial origin boils down to computing

U(x=0,t=0)U(0,1)U(0,2)...U(0,L)

where

U = exp(iA)

and L is the length of the lattice. Obviously if we had a bunch of different sets of gauge fields we could average over them. It’s this average which would be the result we want. The more sets of gauge fields we have the more accurate our results.

Now the static quark self energy is interesting, but it is not quite what we want to measure for our purposes. What we want to do is measure some short distance quantity. Why? Well, recall that QCD is perturbative at short distances, so if we measure a short distance quantity *and* have a perturbative expansion for it, we can solve for the coupling. At one loop it looks like this: we measure the average plaquette <P> on the lattice, then we compute it in perturbation theory, <P> = 1 + \alpha O1 + ... Set them equal, and solve

\alpha = [ <P> - 1 ] / O1

We have now determined the strong coupling on the lattice!
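In code, the one-loop extraction is a one-liner. The numbers below are purely hypothetical stand-ins (the real measured plaquette and one-loop coefficient come out of the simulation and the perturbative calculation, respectively):

```python
# One-loop extraction sketch: measure a short-distance quantity in the
# simulation, compute the same quantity in perturbation theory, and solve
# for the coupling.  Both numbers below are made up for illustration.
P_measured = 0.85   # hypothetical Monte-Carlo average plaquette
O1 = -1.0           # hypothetical one-loop coefficient in <P> = 1 + alpha*O1 + ...

alpha = (P_measured - 1) / O1
print(alpha)  # 0.15 for these made-up numbers
```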

In practice, the short distance quantity of choice has been the average plaquette, which is the product of gauge fields around a 1x1 square on the lattice. Often one also uses larger squares and rectangles in order to get a few different determinations, as a check. As well, one loop is not enough; you need at least second order calculations. My supervisor Howard Trottier, and a colleague, Quentin Mason, have calculated these short distance quantities out to three loops, which greatly improves the accuracy.

We now have in hand a value for the strong coupling. The value we have depends on what definition we used in our perturbative expansion. Normally, one uses a definition based on the perturbative expression for the static quark potential (this will be the subject of another posting). The coupling has been evaluated at a scale that is close to the lattice cutoff pi/a, but not quite. We’ll call that scale q* (more about this in a later posting as well; for now, think of q* as being within 10% of pi/a). So what we have extracted is

\alpha_{V}(q*)

and what we want is \alpha_{MS-bar}(M_Z).

We go from point a to point b in two steps. First we convert
\alpha_{V}(q*) to \alpha_{MS-bar}(q*), then we run that number up to M_Z using the known three loop running. The latter step is well known (and not lattice specific) so I’ll just say how to do the former.

To convert from \alpha_{V} to \alpha_{MS-bar} means computing some quantity perturbatively in both schemes, and matching them. In practice, the quantity that is used is the two point function for background field gluons. This quantity is used because the combination of the background field and the coupling is not renormalized. So you can compute this in the MS-bar scheme and with a lattice regulator, and equate the two, solving for \alpha_{V} as a series in \alpha_{MS-bar}.

And that’s how it’s done! The results are pretty good; using a modern, unquenched simulation, a preliminary number is

\alpha_{MS-bar}(M_Z) = 0.1181(15)

which agrees with, and has lower errors than, the PDG average.

## Monday, October 18, 2004

### REPOST: Lattice Perturbation Theory

This is a repost from Matthew's old blog.

Well, the whole “work” thing is not really going well today, so I guess I’ll continue my little story.

Recall that we were worried about finite spacing errors in lattice field theory. As an example we were using a scalar field coupled to gluons. The basic action was

\phi D^{2} \phi

and this has a^2 errors. I said that we could use

\phi (D^2 + C a^2 D^4) \phi

to reduce these errors. Clearly this involves picking some value for C, but how do we do that?

It pays to remember what the lattice is doing for us. It’s cutting the theory off at the small distance a, or in momentum space at high energy/momentum. So the spacing errors are reflecting a problem with the high energy (short distance) part of the theory. Now way back at the start of the first post we noted that QCD is perturbative at high energy. So we ought to be able to correct for the spacing errors perturbatively, by matching our lattice theory to the continuum theory to some order in perturbation theory. We pick some scattering amplitude, and fiddle with C, order by order. Done properly, this lowers the spacing errors, at a modest performance cost.

That’s where I come in. The trouble is that doing perturbation theory on a lattice is rather hard. The vertex rules in lattice gauge theory are miserably complicated, so even *deriving* them is hard. Then you have to compute something, which is hard because the lattice cutoff violates Poincare symmetry, so you can’t use all the textbook tricks.

Our group is developing tools to do these calculations to one, two and (for a special operator) three loop order. It’s a daunting task; my rule of thumb is that lattice perturbation theory is “one-loop” more complicated than continuum. That is, a two loop lattice calculation is roughly as much effort as a three loop continuum one. One reason for this is that the toolkit for continuum perturbation theory has a lot more tools in it.

These calculations are absolutely crucial for getting high precision results from lattice simulations. The reason is the coupling constant is around 0.2 at current lattice spacings. So a one loop calculation corrects a 20% error, and a two loop calculation corrects a 4% error. If you want to produce 5% accurate results, you must have two loop perturbation theory.

### REPOST: The big time

This is a repost from Matthew's old blog.

Well, I’ve been mentioned by Jacques Distler (http://golem.ph.utexas.edu/~distler/blog/archives/000452.html), so in the Physics blog world that means I’ve hit the big time. In celebration, I suppose I ought to say something about my current work, and the state of the art in Lattice QCD in general.

QCD is the theory of quark-gluon interactions. It ought to be able to provide precise answers to questions in hadron physics (like “what is the mass of the proton”). There is a significant snag though, related to this year’s Nobel Prize. Politzer, Gross and Wilczek showed that at high energies (or momentum transfers) QCD is perturbative. And the higher the energy, the more perturbative it gets. This is a happy fact, since you can use perturbative QCD to make predictions as to what you ought to see in high energy collisions. The unfortunate flipside is that at low energy QCD is strongly coupled, and you cannot use perturbation theory. As a result, we know a lot more about something obscure like the parton distribution functions at high energy (measured very precisely at HERA) than we do about the classic problems in hadron physics (like the aforementioned mass of the proton).

Without perturbation theory, it is difficult to work with QCD. It’s fairly straightforward to show that in the limit of vanishing u and d quark mass, you can recover the old current algebra framework. So all those results remain valid. You can develop effective field theory descriptions in certain limits (Chiral perturbation theory, heavy quark effective theory) but these have low energy bits that must be either fixed by experiments, or matched to a QCD calculation. In practice, the only first principles way to compute general things in low energy QCD is Lattice QCD.

Lattice QCD was invented in the mid-seventies by Kenneth Wilson. He was thinking about quantum field theory in general and decided that to understand it better it would be helpful to have a formulation of it that could be put on a computer. What he did was take infinite continuous spacetime and approximate it by a hypercubic grid with volume L^4 and grid spacing a. Remarkably he found a way to preserve the exact local gauge invariance in this approximation. Up to some troubles with the quarks (more about that below) he found that he could write down an action that describes QCD in the naive limit a->0 and L->Infinity.

With the problem discretized it was now amenable to solution on a computer, just as Wilson planned. However it quickly became apparent that the numerical cost of this method was huge. Working on very expensive supercomputers in the early 80s the first lattice gauge theorists were able to derive some interesting results, but precision QCD predictions remained out of reach.

There are a number of reasons for the failure to get high precision results. The most important is the difficulty with fermions. There are actually two problems in simulating fermions on a computer. The first is known as the doubling problem: if you start with the Dirac action describing a quark and naively discretize it, you end up with 2^4 = 16 degenerate copies of the quark, one for each corner of the Brillouin zone. These copies are (unfortunately) called tastes of quark. This is a disaster, since you only want 1 taste of quark.
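You can see the doubling directly in the free theory: the naive discretization replaces the momentum p in the propagator with sin(pa)/a, which vanishes not only at p = 0 but also at the edge of the Brillouin zone, p = pi/a. Each of the 4 directions contributes one such extra low-energy mode, giving 2^4 = 16 in all. A quick numerical sketch (the value of a is an arbitrary toy choice):

```python
import numpy as np

a = 0.1  # lattice spacing, arbitrary toy value

# lattice momentum sin(p a)/a, which replaces p in the naive propagator
pbar = lambda p: np.sin(p * a) / a

# pbar vanishes at p = 0, as it should ...
assert abs(pbar(0.0)) < 1e-12
# ... but also at the Brillouin-zone edge p = pi/a: an extra massless mode
assert abs(pbar(np.pi / a)) < 1e-12

# one doubler per spacetime direction: 2^4 degenerate tastes in 4 dimensions
print(2**4)  # -> 16
```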

There are three general ways around this problem. The first solution was proposed by Wilson. He added an extra term to the fermion action which vanishes in the continuum (a->0) limit but lifts the degeneracy of the tastes: only one taste keeps a small mass, while the others acquire masses of order 1/a and so decouple as a goes to zero. Another solution, proposed by Kogut and Susskind, is to spin-diagonalize the Dirac action. Doing this reduces the 16 tastes to 4, so you've gained a bit; a further theoretical trick can reduce the number of tastes to one. The final method, called overlap, is to construct a much more complicated operator to approximate the fermions.
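For free quarks the effect of Wilson's term is easy to check: it shifts the mass of the mode at momentum p by (r/a) sum_mu (1 - cos(p_mu a)), which vanishes at p = 0 but adds 2r/a for every momentum component sitting at pi/a. A toy check in Python (r = 1 and the other numbers are arbitrary choices for illustration):

```python
import itertools

import numpy as np

a, r, m = 0.1, 1.0, 0.05  # lattice spacing, Wilson parameter, quark mass (toy values)

def effective_mass(p):
    """Mass of the free mode at momentum p once the Wilson term is added."""
    return m + (r / a) * sum(1.0 - np.cos(pm * a) for pm in p)

# the 16 corners of the Brillouin zone, each momentum component 0 or pi/a
corners = list(itertools.product([0.0, np.pi / a], repeat=4))
masses = sorted(effective_mass(p) for p in corners)

print(masses[0])  # the physical taste keeps its small mass m
print(masses[1])  # the lightest doubler is heavier by 2r/a, so it decouples as a -> 0
```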

Each of these methods suffers from some set of problems, and each has its own advantages. Before discussing them I'll mention the second problem with simulating fermions: it's extremely slow. The trouble is that one cannot represent anticommuting (Grassmann) numbers directly on a computer. Fortunately most fermion actions have the form

\bar{\psi} M[A] \psi

where M[A] is some functional of the gauge fields. Actions of this form can be exactly path-integrated; you just get

det(M[A]).

Simple, no? All you have to do is calculate the determinant at each step, and you're good to go. Alas, that's numerically very (very very very ...) expensive. It involves inverting the matrix M, which is large and sparse, so you need supercomputers to do it. This cost was so prohibitive that until very recently (late 90s) most people would just set det(M[A]) = 1. This amounts to neglecting dynamical fermions in your calculation and is known as the quenched approximation.
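The identity at work here is the Grassmann integral \int d\bar{\psi} d\psi exp(-\bar{\psi} M \psi) = det M[A]. To get a feel for why this is expensive: a dense determinant (or inverse) costs O(N^3) operations, and even a modest lattice makes N enormous. A toy sketch (the lattice and matrix sizes are arbitrary illustrative choices):

```python
import numpy as np

# A 16^4 lattice with 4 spin x 3 color components per site gives a fermion
# matrix of dimension 16^4 * 12 -- far too large to treat as dense.
print(16**4 * 12)  # -> 786432 rows and columns

# On a tiny random matrix, det(M) is just the product of its eigenvalues:
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
det_direct = np.linalg.det(M)
det_eigs = np.prod(np.linalg.eigvals(M)).real  # complex eigenvalues come in conjugate pairs
assert np.isclose(det_direct, det_eigs)
```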

With the quenched approximation going away, the strengths and weaknesses of the various fermion actions really show. For example, the Wilson approach suffers from a problem known as exceptional configurations, which occur when the matrix M acquires a very small eigenvalue. The overlap fermions, while theoretically very nice, are many times (hundreds) slower than any other approach. The fastest option is the Kogut-Susskind one. Since we've spin-diagonalized the problem, it's automatically 4 times faster than the others, and because of the problems with the other two approaches it ends up being 50 or more times faster than the Wilson approach. The catch is the “theoretical trick” I mentioned above. Recall that the KS approach reduced the number of tastes from 16 to 4. To get down to 1, you take the 4th root of the determinant in your numerical simulations. This is not a well-defined procedure, though it works in several limits (free field, weak coupling, chiral). Despite this problem, the most accurate lattice results to date have been generated using these KS fermions.

Where do I come in? Well, the problems with fermions are only the beginning of the difficulties with lattice QCD. Another substantial problem is the error induced by the finite lattice spacing. The difficulty here is that reducing the lattice spacing in numerical simulations is very expensive (everything in this game is very expensive). This means that brute-force reducing your spacing from current values (around 0.1 fm) to values where the naive finite spacing errors would be around 1% (roughly a = 0.005 fm) would take another 10 years. If we want timely results, this will not do.

There is a better way of compensating for this problem, proposed by Symanzik. A simple example is provided by the finite difference approximation to the ordinary derivative. I could write

df/dx ~ 1/a [f(x+a) - f(x)]

which has linear errors in a. Alternatively, and for almost no extra cost, I could use

df/dx ~ 1/(2a) [f(x+a) - f(x-a)]

which has quadratic errors in a. If my problem scaled poorly with a, I would be foolish not to use the latter approximation.
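It's easy to check this scaling numerically, say for f(x) = sin(x): halving a should roughly halve the forward-difference error but quarter the central-difference error. A quick sketch:

```python
import numpy as np

f, x = np.sin, 1.0
exact = np.cos(x)  # the true derivative of sin

for a in (0.1, 0.05):
    forward = (f(x + a) - f(x)) / a            # linear (O(a)) error
    central = (f(x + a) - f(x - a)) / (2 * a)  # quadratic (O(a^2)) error
    print(a, abs(forward - exact), abs(central - exact))

# halving a: the forward error drops ~2x, the central error drops ~4x
```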

Symanzik’s idea for lattice theories is similar. Say I have an action for a scalar field

\phi D^2 \phi

where D^2 is some lattice laplacian operator. This action has a^2 errors. I can make them a^4 by using the action

\phi (D^2 + C a^2 D^4) \phi.

In this action C is a constant, which we need to determine. How do we do that?
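As a taste of what's involved, in the free toy case you can fix C at tree level by Taylor expansion: the naive lattice Laplacian has leading error (a^2/12) f'''' per direction, so choosing C = -1/12 cancels it and pushes the error to a^4. (That's only the classical, free-field value; it's an illustrative choice, not the full answer.) A one-dimensional numerical sketch:

```python
import numpy as np

f, x = np.sin, 1.0
exact = -np.sin(x)  # the true second derivative of sin

def D2(a):
    """Naive lattice Laplacian: a^2 errors."""
    return (f(x + a) - 2 * f(x) + f(x - a)) / a**2

def D2_improved(a, C=-1.0 / 12.0):
    """D2 + C a^2 D4 with the tree-level choice C = -1/12: a^4 errors."""
    # D4 is the naive second-difference stencil applied twice
    d4 = (f(x + 2*a) - 4*f(x + a) + 6*f(x) - 4*f(x - a) + f(x - 2*a)) / a**4
    return D2(a) + C * a**2 * d4

for a in (0.1, 0.05):
    print(a, abs(D2(a) - exact), abs(D2_improved(a) - exact))

# halving a: the naive error drops ~4x (a^2), the improved error ~16x (a^4)
```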

Well, I’ll tell you how we do that, but later :) This post is getting long, so I’ll pick it up later today, or tomorrow...