Monday, October 18, 2004

REPOST: The big time

This is a repost from Matthew's old blog.

Well, I’ve been mentioned by Jacques Distler (http://golem.ph.utexas.edu/~distler/blog/archives/000452.html), so in the Physics blog world that means I’ve hit the big time. In celebration, I suppose I ought to say something about my current work, and the state of the art in Lattice QCD in general.

QCD is the theory of quark-gluon interactions. It ought to be able to provide precise answers to questions in hadron physics (like “what is the mass of the proton”). There is a significant snag though, related to this year’s Nobel prize. Politzer, Gross and Wilczek showed that at high energies (or momentum transfers) QCD is perturbative. And the higher the energy, the more perturbative it gets. This is a happy fact, since you can use perturbative QCD to make predictions as to what you ought to see in high energy collisions. The unfortunate flipside is that at low energy QCD is strongly coupled, and you cannot use perturbation theory. As a result, we know a lot more about something obscure like the parton distribution functions at high energy (measured very precisely at HERA) than we do about the classic problems in hadron physics (like the aforementioned mass of the proton).
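To put a formula behind “the higher the energy, the more perturbative it gets”: at one loop the strong coupling runs like

alpha_s(Q^2) = 12 \pi / [ (33 - 2 n_f) ln(Q^2 / \Lambda^2) ],

so it shrinks only logarithmically as the momentum transfer Q grows, and it blows up as Q comes down towards the QCD scale \Lambda (a few hundred MeV), which is exactly where all the interesting hadronic physics lives.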

Without perturbation theory, it is difficult to work with QCD. It’s fairly straightforward to show that in the limit of vanishing u and d quark masses you can recover the old current algebra framework, so all those results remain valid. You can develop effective field theory descriptions in certain limits (chiral perturbation theory, heavy quark effective theory), but these contain low energy constants that must either be fixed by experiment or matched to a QCD calculation. In practice, the only first-principles way to compute general things in low energy QCD is Lattice QCD.

Lattice QCD was invented in the mid-seventies by Kenneth Wilson. He was thinking about quantum field theory in general and decided that to understand it better it would be helpful to have a formulation of it that could be put on a computer. What he did was take infinite continuous spacetime and approximate it by a hypercubic grid with volume L^4 and grid spacing a. Remarkably he found a way to preserve the exact local gauge invariance in this approximation. Up to some troubles with the quarks (more about that below) he found that he could write down an action that describes QCD in the naive limit a->0 and L->Infinity.
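For concreteness, the gauge part of Wilson’s action (the plaquette action) puts the gluon field on the links of the grid as SU(3) matrices U_mu(x), and builds everything out of the smallest closed loops of links (plaquettes):

S_g = \beta \sum_{x, \mu < \nu} [ 1 - (1/3) Re Tr U_{\mu\nu}(x) ],   with \beta = 6/g^2,

where U_{\mu\nu}(x) is the product of the four links around an elementary square. Because it is made of closed loops of links, this is exactly gauge invariant at any lattice spacing, and it reduces to the usual Yang-Mills action as a -> 0.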

With the problem discretized it was now amenable to solution on a computer, just as Wilson planned. However, it quickly became apparent that the numerical cost of this method was huge. Working on very expensive supercomputers in the early 80s, the first lattice gauge theorists were able to derive some interesting results, but precision QCD predictions remained out of reach.

There are a number of reasons for the failure to get high precision results. The most important one is the difficulty of simulating fermions. There are actually two problems with putting fermions on a computer. The first is known as the doubling problem. Basically, if you start with the Dirac action describing a quark and naively discretize it, you end up introducing a 16-fold degeneracy, which produces 16 degenerate types of quarks (these are, somewhat unfortunately, called tastes of quark). This is a disaster, since you only want one “taste” of quark.
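You can see where the doublers come from by going to momentum space. Discretizing the derivative as a symmetric difference turns the naive lattice Dirac operator into

D(p) = (i/a) \sum_\mu \gamma_\mu \sin(p_\mu a) + m,

and \sin(p_\mu a) vanishes at p_\mu = \pi/a as well as at p_\mu = 0. Two zeros in each of the four directions means the quark propagator has 2^4 = 16 poles: one physical quark and 15 unwanted doublers.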

There are three general ways around this problem. The first solution was proposed by Wilson: add an extra term to the fermion action which vanishes in the continuum (a -> 0) limit but lifts the degeneracy of the quarks. Only one taste keeps a small mass; the others acquire masses of order 1/a, and so decouple as a goes to zero. Another solution to this problem, proposed by Kogut and Susskind, is to spin-diagonalize the Dirac action. Doing this reduces the 16 tastes to 4, so you’ve gained a bit. A further theoretical trick can reduce the number of tastes to one. The final method, called overlap, is to construct a much more complicated lattice Dirac operator that approximates the continuum fermions.
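To make Wilson’s fix concrete: his extra term is essentially a lattice Laplacian, which in momentum space contributes

(r/a) \sum_\mu [ 1 - \cos(p_\mu a) ]

to the quark mass. This vanishes for the physical mode at p_\mu = 0, but each doubler (which has at least one p_\mu = \pi/a) picks up a mass of order 1/a and drops out of the theory as a -> 0.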

Each of these methods suffers from some set of problems, and each has its own advantages. Before discussing them I’ll mention the second problem with simulating fermions: it’s extremely slow. The problem is that one cannot directly represent anticommuting (Grassmann) numbers on a computer. Fortunately most fermion actions are bilinear in the quark fields, of the form

\bar{\psi} M[A] \psi

where M[A] is some functional of the gauge fields. Actions of this form can be path-integrated exactly; you just get

det(M[A]).

Simple, no? All you have to do is calculate the determinant at each step, and you’re good to go. Alas, that’s numerically very (very very very ...) expensive: it involves inverting the matrix M, which is huge and sparse, so you need supercomputers to do it. This is so costly that until very recently (the late 90s) most people would just set det(M[A]) = 1. This amounts to neglecting dynamical fermions (quark loops) in your calculation and is known as the quenched approximation.
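To give a feel for where the cost goes, here is a toy sketch in Python (not lattice code, just an illustration): a random sparse matrix stands in for the lattice Dirac operator M, whose dimension in a real simulation is sites times spins times colours, easily in the millions, and the workhorse operation is solving M x = b over and over with an iterative solver.

# A toy illustration of the expensive step in dynamical fermion simulations:
# repeatedly solving M x = b for a large sparse matrix standing in for the
# lattice Dirac operator. (The size and matrix here are made up for illustration.)
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 5000                                     # stand-in for volume x spin x colour
M = sp.identity(n) + 0.1 * sp.random(n, n, density=1e-3, format="csr")
b = np.random.rand(n)                        # a source vector

# Lattice codes typically run conjugate gradient on the normal equations,
# (M^dagger M) x = M^dagger b, which keeps the matrix positive definite.
A = (M.T @ M).tocsr()
x, info = spla.cg(A, M.T @ b)
print("converged" if info == 0 else "cg stopped with code %d" % info)

In a real simulation you do this solve thousands of times per gauge configuration, which is why setting the determinant to one was so tempting for so long.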

With the quenched approximation going away, the strengths and weaknesses of the various fermion actions really show. For example, the Wilson approach suffers from a problem known as exceptional configurations. These occur when the matrix M acquires a very small eigenvalue. The overlap fermions, while theoretically very nice, are many times (hundreds) slower than any other approach. The fastest option is the Kogut-Susskind one. Since we’ve spin-diagonalized the problem, it’s automatically 4 times faster than the others, and because of the problems with the other two approaches, it ends up being 50 or more times faster than the Wilson approach. The problem is the “theoretical trick” I mentioned above. Recall that the KS approach reduced the number of tastes from 16 to 4. To get down to 1 you take the 4th root of the determinant in your numerical simulations. This is not a well-defined procedure, though it works in several limits (free field, weak coupling, chiral). Despite this problem, the most accurate lattice results to date have been generated using these KS fermions.
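In path integral language, the rooting trick means weighting each gauge configuration by

det(M_KS[A])^{1/4}

rather than by the full staggered determinant, the hope being that this undoes the leftover 4-fold taste degeneracy without spoiling anything else.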

Where do I come in? Well, the problems with fermions are only the beginning of the difficulties with lattice QCD. Another substantial problem is the errors induced by the finite lattice spacing. The difficulty here is that reducing the lattice spacing in numerical simulations is very expensive (everything in this game is very expensive). This means that brute-force reducing your spacing from current values (around a = 0.1 fm) to values where the naive finite spacing errors would be around 1% (roughly a = 0.005 fm) would take another 10 years. If we want timely results, this will not do.

There is a better way of compensating for this problem, proposed by Symanzik. A simple example is provided by the finite difference approximation to the ordinary derivative. I could write

df/dx ~ 1/a [f(x+a) - f(x)]

which has linear errors in a. Alternatively, and for almost no extra cost, I could use

df/dx ~ 1/(2a) [f(x+a) - f(x-a)]

which has quadratic errors in a. If my problem scaled poorly with a, I would be foolish not to use the latter approximation.
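If you want to see this on a computer, here is a two-minute check in Python (the function and the evaluation point are just chosen for illustration):

# Compare the forward and central difference approximations to df/dx
# for f(x) = sin(x) at x = 1.0, where the exact answer is cos(1.0).
import numpy as np

f, x = np.sin, 1.0
exact = np.cos(x)

for a in [0.1, 0.05, 0.025]:
    forward = (f(x + a) - f(x)) / a            # error of order a
    central = (f(x + a) - f(x - a)) / (2 * a)  # error of order a^2
    print("a = %.3f   forward error = %.2e   central error = %.2e"
          % (a, abs(forward - exact), abs(central - exact)))

Halving a roughly halves the forward-difference error but cuts the central-difference error by a factor of four.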

Symanzik’s idea for lattice theories is similar. Say I have an action for a scalar field

\phi D^2 \phi

where D^2 is some lattice Laplacian operator. This action has a^2 errors. I can make them a^4 by using the action

\phi (D^2 + C a^2 D^4) \phi.

In this action C is a constant, which we need to determine. How do we do that?
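Just to show the idea works at the classical level, here is a minimal Python sketch. I’m plugging in the tree-level value C = -1/12 and using the square of the lattice Laplacian as a stand-in for D^4 (how you determine C properly, quantum corrections and all, is what I’m saving for the next post):

# Compare the naive lattice Laplacian D^2 with the improved combination
# D^2 + C a^2 D^4, using the tree-level guess C = -1/12, acting on
# f(x) = sin(x) at x = 1.0 (exact second derivative: -sin(1.0)).
import numpy as np

f, x = np.sin, 1.0
exact = -np.sin(x)

def lap(g, x, a):
    # naive lattice Laplacian: second difference, errors start at order a^2
    return (g(x + a) - 2 * g(x) + g(x - a)) / a**2

def lap2(g, x, a):
    # square of the lattice Laplacian, standing in for D^4
    return (lap(g, x + a, a) - 2 * lap(g, x, a) + lap(g, x - a, a)) / a**2

C = -1.0 / 12.0
for a in [0.2, 0.1, 0.05]:
    naive = lap(f, x, a)
    improved = lap(f, x, a) + C * a**2 * lap2(f, x, a)
    print("a = %.2f   naive error = %.2e   improved error = %.2e"
          % (a, abs(naive - exact), abs(improved - exact)))

Halving a cuts the naive error by about a factor of 4 and the improved error by about a factor of 16, which is exactly the a^2 versus a^4 scaling at work.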

Well, I’ll tell you how we do that, but later :) This post is getting long, so I’ll pick it up later today, or tomorrow...
