Saturday, March 31, 2007

The Quantum Vacuum, Loops and Lattice Artifacts

This post was written for a general audience, and hence is written in a rather more popular language than our usual fare at Life on the Lattice. If you are familiar with the basic ideas behind perturbative improvement, you may want to skip this post.

When we think about the vacuum in classical physics, we think of empty space unoccupied by any matter, through which particles can move unhindered and in which fields are free from any of the non-linear interaction effects which make e.g. electrodynamics in media so much more difficult.

In Quantum Field Theory, the vacuum turns out to be quite different from this inert stage on which things happen; in fact the vacuum itself is a non-linear medium, a foamy bubble bath of virtual particles popping into and out of existence at every moment, a very active participant in the strange dance of elementary particles that we call the universe.

A metaphor which may make this idea a little clearer is to think of the vacuum as a sheet of paper on which you write with your pen. Looked at on a large scale, the paper is merely a perfectly flat surface over which the pen moves unhindered. On a smaller scale, the paper is actually a tangle of individual fibres going in all directions, which the pen keeps hitting all the time, thus finding the friction it needs for efficient writing.

In this metaphor, the analogue of the paper fibres are the bubbles of virtual particle pairs that are constantly being created and annihilated in the quantum vacuum, the analogue of the pen is a particle moving through the vacuum, and the analogue of friction is the modification of the particle's behaviour, as compared with the classical theory, that results from the particle interacting with those virtual pairs.

At first sight, this description of the vacuum may appear like wild speculation, but it in fact has very observable consequences. In Quantum Electrodynamics (QED), the famous Lamb shift is a consequence of the interactions of the electron in a hydrogen atom with virtual photons, as are the anomalous magnetic moment of the electron and the scattering of light by light in the vacuum. In fact, none of the amazingly accurate predictions of QED (the most accurate theory we have) would work without taking into account the effects of the quantum vacuum.

In lattice QCD, we care about the vacuum because it affects how the discrete lattice theory relates to its continuum limit. By discretising a continuum theory, we introduce a discretisation error: when comparing an observable O_a measured on a lattice with lattice spacing a with the same observable O_0 in the continuum, we find that they are related as

$O_a = O_0 + c_1 (\mu a) + c_2 (\mu a)^2 + \dots$

where μ is some energy scale that is typical of the reactions contributing to the observable O. In the classical theory (or at "tree level", as we say, because the Feynman diagrams corresponding to classical physics have no loops in them), we can then tune the lattice theory so that as many of the c_i as we want to get rid of become zero, and the discrepancy between lattice and continuum becomes small.
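
To see what such a tree-level tuning looks like in the simplest possible setting (this is a standard textbook example, not anything specific to QCD), consider approximating a derivative on a lattice. The symmetric difference has an O(a^2) error,

$\frac{f(x+a)-f(x-a)}{2a} = f'(x) + \frac{a^2}{6} f'''(x) + O(a^4),$

which can be cancelled by adding a suitably weighted next-to-nearest-neighbour difference:

$\frac{4}{3}\,\frac{f(x+a)-f(x-a)}{2a} - \frac{1}{3}\,\frac{f(x+2a)-f(x-2a)}{4a} = f'(x) + O(a^4).$

Tree-level improvement of lattice actions works in exactly the same spirit, just with more complicated operators built from the gauge and quark fields.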

At the quantum level, however, we get Feynman diagrams with loops in them, which describe how particles travelling through the quantum vacuum interact with virtual particles. The problem with these is that the virtual particles exist at very short distances and hence, by virtue of Heisenberg's uncertainty relation, can have very large momenta. At very large momenta the deviation of the lattice theory from the continuum becomes very evident, and hence the loops on the lattice contribute terms that differ a lot from what the same loops would contribute in the continuum. This difference reintroduces the a-dependence that we got rid of classically by tuning our theory!

This is clearly no good. What we need to do is get rid of the a-dependence (up to some order in a) in the quantum theory, too. There are a number of ways to go about this, but the one most commonly used is called perturbative improvement. In perturbative improvement, we calculate the effect of the virtual particle loops by evaluating Feynman diagrams (a Feynman diagram isn't just a pretty picture: there is a well-defined mathematical expression corresponding to each diagram) on the lattice and extracting their contribution to the lattice artifacts c_i to some order in a. Once we have these contributions, we can tune our theory again so that they are cancelled, and the discrepancy between lattice and continuum becomes small once more.
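
To give a rough flavour of what evaluating a loop on the lattice involves, here is a toy sketch (not the actual improvement calculation; the mass value and sample size are arbitrary illustrative choices) that estimates the simplest one-loop lattice integral, a massive boson tadpole with propagator 1/(k̂^2 + (am)^2), where k̂^2 = 4 Σ_μ sin^2(k_μ/2), and compares it with the same integral using the continuum form k^2 over the same momentum box:

import numpy as np

# Crude Monte Carlo estimate of
#   I = int_{-pi}^{pi} d^4k/(2 pi)^4  1/(khat^2 + (a m)^2),
# with khat^2 = 4 sum_mu sin^2(k_mu/2), compared with the same integral
# using the continuum form k^2 in place of khat^2.
rng = np.random.default_rng(12345)
am = 0.5                 # boson mass in lattice units (illustrative value)
nsamples = 1_000_000     # number of Monte Carlo samples

k = rng.uniform(-np.pi, np.pi, size=(nsamples, 4))   # momenta in the Brillouin zone
khat2 = 4.0 * np.sum(np.sin(0.5 * k) ** 2, axis=1)   # lattice momentum squared
k2 = np.sum(k ** 2, axis=1)                          # continuum momentum squared

# The box volume (2 pi)^4 cancels against the measure 1/(2 pi)^4,
# so each integral is just the average of its integrand over the box.
I_lattice = np.mean(1.0 / (khat2 + am ** 2))
I_continuum = np.mean(1.0 / (k2 + am ** 2))

print("with lattice propagator:  ", I_lattice)
print("with continuum propagator:", I_continuum)

The two estimates differ noticeably, and the difference comes from momenta near the edge of the Brillouin zone; working out such differences analytically, diagram by diagram, and feeding them back into the couplings of the lattice action is what perturbative improvement amounts to.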

Unfortunately, evaluating Feynman diagrams on the lattice is in many ways much harder than in the continuum, so rather advanced methods are needed, and there aren't very many people doing it; as a result, progress in this area has been slow for a while. The next post will tell you how a group of collaborators including myself recently made some pretty significant progress in this field.

Monday, March 12, 2007

Fitness and Fitting

I promised there were going to be some interesting posts, and I feel this is one of them. I want to talk about harnessing the power of evolution for the extraction of excited state masses from lattice QCD simulations.

OK, this sounds just outright crazy, right? Biology couldn't possibly have an impact on subnuclear physics (other than maybe by restricting the kinds of ideas our minds can conceive by the nature of our brains, which could of course well mean that the ultimate theory, if it exists, is unthinkable for a human being; but that is a rather pessimistic view, and I am talking about QCD here anyway). Well, biology obviously doesn't have any impact on what is after all a much more fundamental discipline, but Darwin's great insight has applications far beyond the scope of mere biology. This insight, which I will roughly paraphrase as "starting from a set of entities which are subject to random mutations, and from which those least adapted to some external constraints are likely to be removed and displaced by new entities derived from and similar to those not so removed, one will after a long enough time end up with a set of entities that are close to optimally adapted to the external constraints", is of course the basis of the very active field of computer science known as evolutionary algorithms. And optimisation is at the core of extracting results from lattice simulations.

What people measure in lattice simulations are correlators of various lattice operators at different (Euclidean) times, and these can be expanded in an eigenbasis of the Hamiltonian as

$C(t)=\left\langle O(t)O(0)\right\rangle = \sum_n c_n e^{-E_n t}$

(for periodic boundary conditions in the time direction the exponential becomes a cosh instead, as spelled out below, but we will otherwise ignore that), where the c_n measure the overlap between the state created by the operator and the n-th eigenstate of the Hamiltonian, and the E_n are the energies of the Hamiltonian's eigenstates. Of course only states that have quantum numbers compatible with those of the operator O will contribute (since otherwise c_n=0).
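
For the record, the cosh form arises because on a lattice with time extent T each state propagates both forwards and backwards in time:

$C(t)=\sum_n c_n\left(e^{-E_n t}+e^{-E_n (T-t)}\right)=\sum_n 2 c_n\, e^{-E_n T/2}\cosh\left(E_n (T/2-t)\right)$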

In order to extract the energies E_n from a measurement of the correlator <O(t_i)O(0)>, one needs to fit the measured data with a sum of exponentials, i.e. one has to solve a non-linear least-squares fitting problem. Now, there are of course a number of algorithms (such as Levenberg-Marquardt) that are excellent at solving this kind of problem, so why look any further? Unfortunately, an algorithm such as Levenberg-Marquardt requires a number of inputs that are unknown in a typical lattice QCD data analysis situation: How many exponentials should the fitting ansatz use (obviously we can't fit all the infinitely many states)? Which range of times should be fitted (and which should be disregarded as dominated by noise or by higher states left out of the ansatz)? A number of Bayesian techniques designed to deal with this problem have sprung up over time (such as constrained fitting), and some of those deserve a post of their own at some point.
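
As a concrete (and deliberately oversimplified) illustration of what such a fit looks like, here is a sketch in Python that generates a fake two-state correlator and fits it with scipy's Levenberg-Marquardt-based curve_fit; the number of exponentials, the fit range and all parameter values are simply put in by hand, which is exactly the arbitrariness discussed above:

import numpy as np
from scipy.optimize import curve_fit

# Fake "lattice data": a two-state correlator with Gaussian noise.
# All parameter values here are invented for illustration only.
rng = np.random.default_rng(0)
t = np.arange(32)
C_exact = 1.0 * np.exp(-0.5 * t) + 0.7 * np.exp(-1.1 * t)
sigma = 0.02 * C_exact + 1e-6          # crude uncorrelated noise model
C_data = C_exact + sigma * rng.standard_normal(len(t))

# Fit ansatz: a fixed number (two) of exponentials ...
def ansatz(t, c0, E0, c1, E1):
    return c0 * np.exp(-E0 * t) + c1 * np.exp(-E1 * t)

# ... over a fit range chosen by hand.
tmin, tmax = 2, 20
sl = slice(tmin, tmax)

popt, pcov = curve_fit(ansatz, t[sl], C_data[sl], p0=[1.0, 0.4, 0.5, 1.0],
                       sigma=sigma[sl], absolute_sigma=True)
chi2 = np.sum(((C_data[sl] - ansatz(t[sl], *popt)) / sigma[sl]) ** 2)
dof = (tmax - tmin) - len(popt)
print("fitted (c0, E0, c1, E1):", popt)
print("chi^2/d.o.f. =", chi2 / dof)

Real lattice correlators are correlated in time and much noisier, so in practice one uses the full covariance matrix; the point of the toy example is just that tmin, tmax and the number of exponentials all have to be chosen before the fitter can do its work.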

From the evolutionary point of view, one can simply allow evolution to find the optimal values for difficult-to-optimise parameters like the fitting range and the number of states to fit. Basically, one sets up an ecosystem consisting of organisms that encode a fitting function complete with the range over which it attempts to fit the data. The fitness of each organism is taken to be proportional to minus its χ²/d.o.f.; this will tend to drive the evolution towards both increased fitting ranges and lower numbers of exponentials (to increase the number of degrees of freedom), but this tendency is counteracted by the worsening of χ². The idea is that if one subjects these organisms to a regimen of mutation, cross-breeding and selection, evolution will ultimately lead to an equilibrium where the competing demands for small χ² and a large number of degrees of freedom balance in an optimal fashion.
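
Here is a very stripped-down sketch of what such an evolutionary search might look like. This is not the actual code we have been playing with, just a minimal illustration of the idea (cross-breeding is omitted for brevity, all numerical choices are arbitrary, and the data are the same kind of synthetic correlator as in the previous sketch): each organism is a tuple (tmin, tmax, nexp), its fitness is minus the χ²/d.o.f. of the corresponding multi-exponential fit, and the population evolves by selection and mutation.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Synthetic two-state correlator standing in for real lattice data.
t = np.arange(32)
C_exact = 1.0 * np.exp(-0.5 * t) + 0.7 * np.exp(-1.1 * t)
sigma = 0.02 * C_exact + 1e-6
C_data = C_exact + sigma * rng.standard_normal(len(t))

def multi_exp(t, *p):
    """Sum of len(p)//2 exponentials; p = (c_0, E_0, c_1, E_1, ...)."""
    c, E = np.array(p[0::2]), np.array(p[1::2])
    return np.sum(c[:, None] * np.exp(-E[:, None] * t[None, :]), axis=0)

def fitness(org):
    """Minus chi^2/d.o.f. of the fit described by the organism (tmin, tmax, nexp)."""
    tmin, tmax, nexp = org
    dof = (tmax - tmin) - 2 * nexp
    if dof <= 0:
        return -np.inf
    sl = slice(tmin, tmax)
    p0 = [x for n in range(nexp) for x in (1.0, 0.4 * (n + 1))]
    try:
        popt, _ = curve_fit(multi_exp, t[sl], C_data[sl], p0=p0,
                            sigma=sigma[sl], absolute_sigma=True, maxfev=5000)
    except RuntimeError:                     # fit failed to converge
        return -np.inf
    chi2 = np.sum(((C_data[sl] - multi_exp(t[sl], *popt)) / sigma[sl]) ** 2)
    return -chi2 / dof

def mutate(org):
    """Randomly tweak one of the three genes, keeping each in a sensible range."""
    tmin, tmax, nexp = org
    gene = rng.integers(3)
    if gene == 0:
        tmin = int(np.clip(tmin + rng.integers(-2, 3), 0, 10))
    elif gene == 1:
        tmax = int(np.clip(tmax + rng.integers(-2, 3), 15, len(t)))
    else:
        nexp = int(np.clip(nexp + rng.integers(-1, 2), 1, 4))
    return (tmin, tmax, nexp)

# Random initial population of organisms (tmin, tmax, nexp).
pop = [(int(rng.integers(0, 8)), int(rng.integers(16, 32)), int(rng.integers(1, 4)))
       for _ in range(20)]

for generation in range(30):
    pop.sort(key=fitness, reverse=True)                    # selection: fittest first
    survivors = pop[:10]                                   # the least fit half is removed ...
    pop = survivors + [mutate(org) for org in survivors]   # ... and replaced by mutants

best = max(pop, key=fitness)
print("best (tmin, tmax, nexp):", best, " fitness:", fitness(best))

The hope, as described above, is that the balance between small χ² and a large number of degrees of freedom lets the population settle on a sensible fit range and number of exponentials without these being put in by hand.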

Since Rob Petry here in Regina brought up this idea, I have been toying around with it, and so far I am cautiously optimistic that it may lead somewhere: for the synthetic data sets that I let this method look at, it did pretty well at identifying the right number of exponentials to use when there was a clear-cut answer (such as when only finitely many were present to start with). So the general method is sound; it remains to be seen how well it does on actual lattice data.

Friday, March 02, 2007

Around the blogs

Bee at Backreaction has a post on snow, which is indeed an important topic here in Canada. Regina doesn't even get that much snow by Canadian standards, and it still is very snowy around here (although it is worse when it doesn't snow, because that is when it gets really cold).

Christine Dantas, well known for her background independence, has a new blog called Theorema Egregium, presumably in homage to Gauss.

Also new to our blogroll is Resonaances, by an anonymous particle physicist known as "Jester", who blogs from CERN and allows everybody who is interested to obtain a glimpse into CERN's seminars.