Thursday, March 17, 2005

The Strong Coupling Constant

First off, before I get to the substance of the post, let me welcome Mark Trodden to the physics blog world. He's part of the even more elite "Central New York physics" blog world.

This post is going to be an outline of how one computes a value for the strong coupling constant ($\alpha_{s}$) using lattice QCD. Now the strong coupling constant isn't really constant at all. It depends on two things: the regularization scheme you use, and the energy at which you determine it. The Particle Data Group always quotes values in the MSbar scheme (a particular type of dimensional regularization, don't worry if you don't know what that is) at the energy scale $\mu=M_{Z}=91$ GeV. So that's our goal, to get a number for $\alpha_{s}(M_{Z})$ from lattice QCD. The world average, which includes an older lattice calculation, is $\alpha_{s}(M_{Z}) = 0.1187(20)$, where the number in brackets is the error on the last two digits.
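To get a feel for what "depends on the energy" means, here's a minimal sketch of the one-loop running of the coupling in Python. The real analysis runs the coupling at third order; the starting value below is just the world average quoted above.

```python
import math

def alpha_s_one_loop(mu, mu0=91.19, alpha0=0.1187, nf=5):
    """One-loop running of the strong coupling from scale mu0 to mu (GeV).
    Illustrative only: the actual analysis uses third-order running."""
    b0 = (33 - 2 * nf) / (12 * math.pi)  # leading beta-function coefficient
    return alpha0 / (1 + b0 * alpha0 * math.log(mu**2 / mu0**2))

print(alpha_s_one_loop(10.0))  # the coupling grows as the scale comes down
```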

How do we get this number from lattice QCD? Well, let's think about the inputs to a lattice QCD simulation: five quark masses and the lattice spacing (the finite volume doesn't matter in this calculation). In the language of quantum field theory, these are bare parameters; quantum effects will renormalize them, so the actual physical values that the simulation predicts will be different. For example, you might put 75 MeV in for the strange quark mass, but if you turn the crank and then look at a physical quantity which tells you $m_{s}$, you might find that it's 72.3 MeV. The same is true of the lattice spacing $a$. The spacing you put in, say $a=0.09$ fm ($1$ fm $= 10^{-15}$ m), will get renormalized; we'll call the renormalized spacing $a'$.
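This is why lattice inputs have to be tuned: you adjust the bare number until the physical output lands on its target. Here's a toy sketch of that tuning loop, with a made-up stand-in for the simulation (in real life each "measurement" is an enormous computation):

```python
def tune_bare_mass(measure, target, m0=0.070, m1=0.080, tol=1e-6):
    """Secant-method tuning of a bare input so the simulated physical
    value hits its target. `measure` stands in for a full lattice run."""
    f0, f1 = measure(m0) - target, measure(m1) - target
    while abs(f1) > tol:
        m0, m1 = m1, m1 - f1 * (m1 - m0) / (f1 - f0)
        f0, f1 = f1, measure(m1) - target
    return m1

# Made-up renormalization: output is 96.4% of the bare input, mimicking
# the 75 MeV -> 72.3 MeV shift in the text (units of GeV).
toy_measure = lambda m: 0.964 * m
print(tune_bare_mass(toy_measure, target=0.0723))  # ~0.075
```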

Unlike quark masses, which are very hard to extract, it's pretty easy to extract the renormalized spacing. What you do is compute some mass differences of heavy mesons. The ideal case, which our group uses, is the bound state of a b quark and an anti-b quark, called an Upsilon. Like the hydrogen atom (or better, positronium), the Upsilon has a ground state and a whole spectrum of excited states. In addition, because the b quark is so heavy, the dynamics of the system is basically non-relativistic. The latter fact means that we can compute the masses of all the excited states fairly cheaply. Even better, we can compute the mass differences between the excited states. This is better because many systematic errors cancel in the differences.

The general procedure, then, is to start with gauge configurations generated by the MILC collaboration. These are configurations of gauge fields which have the non-perturbative effects of the light quarks "folded in" (in lattice QCD jargon, they're unquenched). With these configurations we can determine the mass splittings in the Upsilon system, using our non-relativistic quark formalism. Of course, we don't get the actual mass differences out; we're working on a computer, which will only give us dimensionless numbers. What we get is $\Delta M \cdot a'$, that is, the mass difference multiplied by the renormalized lattice spacing.

Hopefully you can see where this is going. Since we know $\Delta M \cdot a'$, and Upsilon mesons are things you can actually make in the lab, we can use our calculation, together with a measurement of $\Delta M$, to extract $a'$. We're really fortunate, because several groups (including CLEO here at Cornell) have measured $\Delta M$ very precisely. With that, we can get very precise values for $a'$.
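In code, the extraction at this point is just one division plus a unit conversion. The dimensionless splitting below is a number I invented for illustration; the experimental Upsilon 2S-1S splitting is roughly 0.563 GeV:

```python
# Hypothetical lattice output: the Upsilon 2S-1S splitting in lattice units.
dM_times_a = 0.257   # dimensionless, made up for illustration
dM_exp = 0.563       # measured 2S-1S splitting in GeV

hbar_c = 0.19733     # GeV*fm, converts inverse GeV to fm
a_prime = dM_times_a / dM_exp * hbar_c
print(f"a' = {a_prime:.3f} fm")  # ~0.090 fm
```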

Now what does this have to do with the strong coupling constant? Remember, I said that the coupling constant depends on the scale at which we measure it; well, $a'$ sets the scale for everything on the lattice. And now we know the scale.

The next step is to actually get a value for $\alpha$, which takes some doing. The first thing you do is take your non-perturbative gauge field configurations and compute something that you expect to be perturbative. A popular choice is the average value of the gauge fields around a $1\times1$ square on the lattice (which we'll call P). This is a very short-distance thing, around 0.1 fm per side. Now remember, in QCD, if something is short distance, we ought to be able to compute it in perturbation theory. So we fire up our perturbation theory codes and compute P.
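For the curious, here's a toy version of measuring that $1\times1$ loop (the "plaquette") on a configuration. The links below are unit matrices, so P comes out exactly 1; on a real MILC configuration each link is a nontrivial SU(3) matrix and P is noticeably less than 1.

```python
import numpy as np

L = 4  # a tiny 4^4 lattice, purely for illustration
# U[t,x,y,z,mu] is the 3x3 link matrix in direction mu; unit links here.
U = np.zeros((L, L, L, L, 4, 3, 3), dtype=complex)
U[...] = np.eye(3)

def shift(site, mu):
    """Step one lattice site in direction mu, with periodic boundaries."""
    s = list(site)
    s[mu] = (s[mu] + 1) % L
    return tuple(s)

def plaquette(site, mu, nu):
    """P = (1/3) Re Tr[U_mu(x) U_nu(x+mu) U_mu(x+nu)^dag U_nu(x)^dag]"""
    u1 = U[site][mu]
    u2 = U[shift(site, mu)][nu]
    u3 = U[shift(site, nu)][mu].conj().T
    u4 = U[site][nu].conj().T
    return (u1 @ u2 @ u3 @ u4).trace().real / 3

print(plaquette((0, 0, 0, 0), 0, 1))  # 1.0 for unit links
```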

My part in this calculation was computing P (and some other quantities) to second order, that is, computing $P_{1}$ and $P_{2}$ in the expansion
$$
P = 1 + P_{1} \alpha_{V} + P_{2} \alpha_{V}^{2} + P_{3} \alpha_{V}^{3} + \cdots
$$
In this expression $\alpha_{V}$ is the QCD coupling constant, evaluated in a lattice-regularized scheme (not the MSbar thing we want, but close) and at a scale set by $a'$ (which we know). Unfortunately, for really high precision, second-order calculations are not enough, and through heroic efforts two coworkers (Howard Trottier and Quentin Mason) computed the third-order coefficient $P_{3}$.

Now we have everything we need: we've determined P from our simulation, and we also know what it is in perturbation theory. With that we can solve for $\alpha_{V}$ at the scale set by $a'$! There are a couple of extra things we have to do after that. First we convert from the lattice-regularized coupling to the MSbar one we want, and then we run the scale up to $M_{Z}$. These steps require more perturbation theory at third order (Howard and Quentin had to do the former calculation; the latter, the scale running, was already known), so they're also non-trivial.
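Numerically, that solve is just root-finding on a cubic. Here's a sketch using Newton's method, with placeholder coefficients (the real $P_{1}$, $P_{2}$, $P_{3}$ are in the paper; these are not them):

```python
def solve_alpha(P_meas, c1, c2, c3, x=0.2, tol=1e-12):
    """Invert P = 1 + c1*a + c2*a^2 + c3*a^3 for a = alpha_V (Newton's method)."""
    for _ in range(100):
        f = 1 + c1 * x + c2 * x**2 + c3 * x**3 - P_meas
        fp = c1 + 2 * c2 * x + 3 * c3 * x**2
        step = f / fp
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Placeholder numbers, not the published coefficients:
print(solve_alpha(P_meas=0.57, c1=-1.2, c2=-0.3, c3=-0.1))
```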

What's more, if you just do this using one lattice spacing $a'$ and one short-distance quantity (P), you don't get a very precise answer. One reason is that the perturbation series converges slowly; that is, contributions from the $P_{4}$ term (which we haven't computed) are not very small. There are two things we did to get around this problem. One is to run at three different lattice spacings. This allows one to estimate $P_{4}$ (and $P_{5}$) from fits, which helps control the error. The other is to use more than one short-distance quantity; in the paper, 28 different short-distance quantities are used. It turns out that the first trick, using multiple lattice spacings, is the one that really cuts down the error.
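Here's the flavor of that estimate in toy form. Given the coupling at three spacings and the residual after subtracting the known third-order series, a one-parameter fit pins down an effective $P_{4}$ (all numbers below are invented):

```python
import numpy as np

# Invented inputs: alpha_V at three lattice spacings, and the residual
# P_meas - (1 + P1*a + P2*a^2 + P3*a^3) at each one.
alpha = np.array([0.27, 0.30, 0.33])
resid = np.array([0.0009, 0.0014, 0.0021])

# Model resid ~ P4 * alpha^4; least-squares for the single coefficient:
P4_est = np.sum(resid * alpha**4) / np.sum(alpha**8)
print(f"effective P4 ~ {P4_est:.2f}")
```

The real fits are more sophisticated than this, with priors constraining the unknown higher-order coefficients, but this is the basic idea.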

So what's the final result? We find
$$
\alpha_{s}(M_{Z}) = 0.1177(13)
$$
which is more accurate than the world average, and is the single most accurate determination of $\alpha_{s}$.

Of all lattice QCD calculations, I think this one is the most elegant. It nicely mixes perturbative and non-perturbative physics in a non-trivial way. To quote the paper, this calculation "demonstrates that the QCD of confinement is the same theory as the QCD of jets; lattice QCD is full QCD, encompassing both its perturbative and nonperturbative aspects."

Tuesday, March 15, 2005

Physics at the south pole

I learned about two really cool experiments today, which I wanted to highlight. I don't follow experimental physics nearly as well as I should, given that I agree with what David Politzer said in his Nobel lecture: "I must say that I do regard theoretical physics as a fundamentally parasitic profession, living off the labors of the real physicists." In general, experiments are what drives the field. Certainly, areas with active experimental programs are usually healthier than those without. Condensed matter physics is one example; another is neutrino physics, which is what I'm going to post about.

The colloquium speaker today was Buford Price from Berkeley. Dr. Price has many interests; it would be impossible to sum up his whole talk here, even if I could have taken notes at the pace he was going. Instead, I'll say a few words about the amazing neutrino experiment that he was involved in.

Neutrinos are very weakly interacting particles; they can travel vast distances without interacting with anything. They are copiously produced in many astronomical objects that we are interested in studying, from the "mundane" (supernovae) to the more exotic (gamma ray bursters). Many groups are now engaged in studying neutrinos produced in all sorts of situations. But traditional detectors have trouble studying the highest energy neutrinos. Enter AMANDA.

The AMANDA-II telescope consists of 19 strings of optical modules (OMs) buried between 1 and 1.5 miles beneath the snow surface at the geographic South Pole. The total number of OMs in the array is 680, although only about 600 are fully operational at any given moment. Cherenkov light is converted to electrical signals by the photomultiplier tubes within each OM.


Buried under the South Pole, experimentalists have turned ultra-pure ice into the world's largest telescope. Because of its size, AMANDA is able to see neutrinos with much higher energies than more modest, man-made detectors can. So far they've only seen an isotropic distribution of high energy neutrinos; no point source can be resolved.

This of course raises the question: what do you do when your giant telescope made of ice isn't sensitive enough? Yup, you guessed it, you build a bigger, more sensitive telescope. This one is called IceCube. The idea is basically the same as AMANDA: you drill holes in the ice, two or three kilometers down, then you lower your detectors down on strings. What's different is the scale; IceCube occupies a cubic kilometer of space. To put that in perspective, if you stacked up every human being, living or dead (all of them), they'd only take up about a third of it. With its size, IceCube will be sensitive to neutrinos with PeV energies.

Monday, March 07, 2005

Hans Bethe, 1906-2005

Hans Bethe, one of the twentieth century's greatest theoretical physicists, passed away peacefully last night at the age of ninety-eight. For Cornell, this is a very sad day. Bethe was a very important part of the Cornell community. He basically put Cornell on the map as far as physics goes. Today, thanks to his work, Cornell is a world-leading physics school, with strong programs in virtually all areas of physics.

The strength and diversity of the department here is a direct reflection of Hans Bethe's strength and diversity as a theorist. John Bahcall's quote is appropriate: "If you know his work, you're inclined to think that he is many different people, all of whom have gotten together and formed a conspiracy to sign their papers under the same name."

In an era of increasing specialization, Bethe worked in many fields. For example, he won the Nobel Prize for his work in stellar astrophysics; basically, he worked out why stars shine. Many quantum field theorists will know another important piece of work, his back-of-the-envelope calculation of the Lamb shift. This was the first calculation to show that renormalization might work, and it spurred the development of renormalization theory by Schwinger and Feynman. He did pioneering work in condensed matter physics, quantum mechanics, and astrophysics, staying active well into his nineties.

It would be totally impossible to summarize his career here; Bethe's contributions to modern physics have been too wide. Indeed, his biographer, Sam Schweber, has been working for years on a multi-volume biography. For those who are interested in learning a bit more about Hans Bethe, Cornell has a webpage with some biographical information and videos. The video "I can do that", linked from the "reading" page, offers a nice overview.

Thursday, March 03, 2005

A crisis in particle theory?

Peter Woit is worried that particle theory is in trouble, since very few theory papers have made the top 50 most cited list since 1999. According to Peter,

Even more so than last year, this data shows that particle theory and string theory flat-lined around 1999, with a historically unprecedented lack of much in the way of new ideas ever since. Among the top 50 papers, the only particle theory ones written since 1999 are a paper about pentaquarks by Jaffe and Wilczek from 2003 at number 20, the KKLT flux vacua paper at number 29 and a 2002 paper on pp waves at number 32.

How many more years of this will it take before leaders of the particle theory community are willing to publicly admit that there's a problem and start a discussion about what can be done about it?

It would seem Peter is living in a different world from mine. The problem isn't new ideas; it's the ability to test them which is lacking. Particle theory, particularly of the phenomenological bent, is actually pretty active. Just off the top of my head, here are three very new, very active ideas people have had in particle physics in the last couple of years:

1) Split supersymmetry
2) Little Higgs models
3) Warped fermions

Search the arXiv for any of them and you'll get lots of results. Of course, without an experiment to tell us which new idea is right (or, more likely, that they're all wrong), none of them will get hundreds of citations, but that doesn't imply a lack of new ideas. In fact, it's quite the opposite: there are too many new ideas!

Now suppose the LHC turns on and finds solid evidence for option number 2, a little Higgs model; then you can be sure that the original little Higgs papers will rocket up the citation list. Until the LHC turns on, though, we're stuck in a "propose a model, explore some of its consequences, move on" sort of mode. That's precisely the sort of thing that leads to lots of new ideas, and not a lot of citations.

My advice to those who think particle theory is in trouble: read hep-ph on a regular basis :) Of course, particle theory could well be in trouble shortly; it all depends on what the LHC sees. But for now, I'd say it's really quite active. Certainly more active than five or ten years ago, when there was pretty much SUSY, in the form of the MSSM, and nothing else.