Friday, June 09, 2006

Quarkonia and MEM

On the arXiv today is a paper by Peter Petreczky about the spectral functions of heavy quarkonia at finite temperature.

People generally expect that at high temperatures heavy quarkonia will be suppressed: the gluons are screened by thermal effects (Debye screening, and possibly chromomagnetic screening as well), leading to an exponential fall-off of the interquark potential at large distances and hence allowing the heavy quarks to drift apart. This suppression of quarkonia is supposed to be an important signature of the formation of a quark-gluon plasma, so confirming it in a model-independent way is important. One way to do this is to look at the spectral functions of the corresponding correlators and to check whether the peaks in the spectral function that correspond to the bound states in a given channel broaden and eventually vanish as the temperature is increased.
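Schematically (my gloss, not a formula from the paper), Debye screening turns the Coulomb-like part of the interquark potential into a Yukawa-type form,

$$V(r) \sim -\frac{\alpha_{\mathrm{eff}}}{r} e^{-m_D(T) r}$$

where m_D(T) is the (temperature-dependent) Debye mass; once the screening length 1/m_D drops below the radius of a given bound state, that state can no longer hold together.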

The results in this case are that the 1P charmonia (the $$\chi_c$$ states) do dissolve just above the deconfinement transition, whereas other quarkonia, notably the 1S states (the $$J/\psi$$ and its kin), appear to persist up to considerably higher temperatures.

Now how do people obtain these kinds of results? The spectral function is the function σ(ω) appearing in the finite-temperature (periodic Euclidean time) analogue of the Källén-Lehmann spectral representation

$$D(t) = \int_0^\infty d\omega \sigma(\omega)\frac{\cosh(\omega(t-\beta/2))}{\sinh(\omega\beta/2)}$$

where the kernel is the finite-temperature correlator for a free particle of mass ω, and β is the extent in the Euclidean time direction. So if you have measured the correlator D(t), you just invert this relation to get the spectral function, which contains all the information about the spectrum of the theory.
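To make the relation concrete, here is a minimal numerical sketch of the discretised forward direction (my own toy setup -- the grids, units and test σ are all made up, and none of this is code from the paper): given a trial spectral function on an ω grid, it builds the kernel matrix and computes D(t) on the time slices.

```python
import numpy as np

beta = 1.0                                  # Euclidean time extent (toy units)
n_t = 16                                    # number of time slices
t = beta * np.arange(1, n_t) / n_t          # t_i = i*beta/N_t, i = 1, ..., N_t - 1
omega = np.linspace(0.1, 20.0, 400)         # omega grid (omega = 0 excluded)
d_omega = omega[1] - omega[0]

# Kernel cosh(omega*(t - beta/2)) / sinh(omega*beta/2) as an (N_t-1) x N_omega matrix
K = np.cosh(np.outer(t - beta / 2, omega)) / np.sinh(omega * beta / 2)

def correlator(sigma):
    """Forward model: D(t_i) = sum_j K[i, j] * sigma(omega_j) * d_omega."""
    return K @ sigma * d_omega

# Example: a narrow "bound-state" peak at omega = 3 on top of a toy continuum
sigma_test = np.exp(-0.5 * ((omega - 3.0) / 0.05) ** 2) + 0.01 * omega**2
D = correlator(sigma_test)
```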

There is one lie in this last sentence, and that lie is the little word "just". The reason is that you are trying to reconstruct a continuous function σ(ω) from a small number of measured data points D(iβ/N_t), i = 0, ..., N_t - 1, which makes this an ill-posed problem.
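The ill-posedness is easy to exhibit in the toy sketch above (this snippet reuses the kernel matrix K defined there): the singular values of K fall off roughly exponentially, so only a handful of independent features of σ leave any imprint on D(t) above the noise level.

```python
# Continuing the sketch above: how ill-conditioned is the kernel matrix?
s = np.linalg.svd(K, compute_uv=False)
print("condition number of K:", s[0] / s[-1])            # astronomically large
print("modes below machine precision:", np.sum(s < 1e-15 * s[0]))
# A naive least-squares inversion would amplify the statistical noise in
# D(t) by the condition number -- hence the need for a regularising prior.
```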

The way around this that people use is a method called Maximum Entropy Method (MEM) image restoration, which is also used to restore noisy images in astronomy. (Unfortunately it is bound by the rules of logic and hence cannot do the wonderful and impossible things -- looking through opaque foreground objects, or enlarging a section to reveal details much smaller than an original pixel -- that the writers of CSI or Numb3rs are so fond of showing to an impressionable public in the interest of deterrence, but it is still pretty amazing -- just google and look at some of the "before and after" pictures.)

The basis for MEM is Bayes' theorem

$$P(A|B) = \frac{P(B|A)P(A)}{P(B)}$$

which relates the conditional probability of A given B to that of B given A. Using Bayes' theorem, the probability of the spectral function σ given the data D and fundamental assumptions H (such as positivity and high-energy asymptotics) is

$$P(\sigma|D,H) \propto P(D|\sigma,H)\, P(\sigma|H)$$

up to the σ-independent normalisation P(D|H). Conventionally, P(D|σ,H) is known as the likelihood function (it tells you how likely your data are given σ), and P(σ|H) is known as the prior probability (it tells you how probable a given σ is prior to any observation D). The likelihood function may be taken to be

$$P(D|\sigma,H) = Z \exp\left(-\frac{1}{2}\chi^2\right)$$

where χ² is the standard χ² statistic measuring how well the D(t) predicted by σ fits the measured data D(iβ/N_t), and Z is a normalisation factor. For the prior probability, one takes the exponential

$$P(\sigma|H) = Z' \exp(\alpha S)$$

of the Shannon-Jaynes entropy

$$S = \int_0^\infty d\omega \left[\sigma(\omega)-m(\omega)-\sigma(\omega)\log\left(\frac{\sigma(\omega)}{m(\omega)}\right)\right]$$

where m is a function called the default model, and α is a positive real parameter.
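In code, the two ingredients are short to write down. Continuing the toy sketch from above (correlator and d_omega as defined there; the data covariance is taken to be diagonal purely for brevity, which a real analysis would not do):

```python
def chi2(sigma, D_data, D_err):
    """chi^2 between the correlator predicted by sigma and the measured D."""
    r = (correlator(sigma) - D_data) / D_err
    return np.sum(r**2)

def entropy(sigma, m):
    """Shannon-Jaynes entropy of sigma relative to the default model m."""
    return np.sum(sigma - m - sigma * np.log(sigma / m)) * d_omega
```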

The most probable "image" σα for given α (and m) is then the solution to the functional differential equation

$$\left.\frac{\delta Q_\alpha}{\delta \sigma}\right|_{\sigma=\sigma_\alpha} = 0$$

where

$$Q_\alpha = \alpha S - \frac{1}{2}\chi^2$$

The parameter α hence parametrises a tradeoff between minimising χ² and maximising S, the latter corresponding to keeping σ close to the default model m. Some MEM methods take α to be an arbitrary tunable parameter, whereas in others, to get the final output σMEM, one still has to average over α with the weight P(α|D,H,m), which can be computed using another round of Bayes' theorem. In practice, people appear to use various kinds of approximations. It should be noted that the final result

$$\sigma_{MEM}(\omega) = \int d\alpha\, \sigma_\alpha(\omega)\, P(\alpha|D,H,m)$$

still depends on m, although this dependence should be small if m was a good default model.
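Putting the pieces together, a schematic toy version of the reconstruction might look as follows. This builds on the functions sketched above and is my own illustration rather than the code behind the paper; a realistic implementation would use the full covariance matrix and a proper computation of P(α|D,H,m) rather than a bare scan over a few α values.

```python
from scipy.optimize import minimize

def mem_solve(D_data, D_err, m, alpha):
    """Maximise Q_alpha = alpha*S - chi^2/2 over positive sigma.

    Optimising x = log(sigma) keeps sigma positive; the default model m
    doubles as the starting guess."""
    def neg_Q(x):
        sig = np.exp(x)
        return 0.5 * chi2(sig, D_data, D_err) - alpha * entropy(sig, m)
    res = minimize(neg_Q, x0=np.log(m), method="L-BFGS-B")
    return np.exp(res.x)

# The final sigma_MEM would be the average of these solutions weighted by
# P(alpha|D,H,m), which requires the further Bayesian computation above.
sigmas = [mem_solve(D, 0.01 * np.abs(D), m=np.full_like(omega, 0.1), alpha=a)
          for a in (0.1, 1.0, 10.0)]
```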

This is pretty cool stuff.

1 comment:

Tommaso said...

Hi Georg,
About J/psi suppression.
I have always wondered whether, when one observes fewer J/psi decays to muon pairs at a nucleon-nucleon collider experiment, one should really attribute it to a suppression of _production_.
What one sees is a suppression of the product production × Branching Ratio (mu mu). Why could it not be a suppression of the muon BR? The high temperature would make it easier for gluons to recombine without violating the OZI rule, thus increasing the hadronic BR and decreasing the observed mu-mu final state...

cheers
T.