On the arXiv today is a paper by Peter Petreczky about the spectral functions of heavy quarkonia at finite temperature.
People generally expect that at high temperatures, heavy quarkonia will be suppressed, because the gluons will be screened by thermal effects (Debye screening, and possibly chromomagnetic screening as well), leading to an exponential fall-off of the interquark potential at large distances and hence allowing the heavy quarks to drift apart. This suppression of quarkonia is supposed to be an important signature of the formation of a quark-gluon plasma, and hence confirming it in a model-independent way is important. One way to do this is to look at the spectral functions for the corresponding correlators and to see whether the peaks in the spectral function that correspond to the bound states in that channel will broaden and eventually vanish as the temperature is increased.
The results in this case are that the 1P charmonia (the χc and its kin) do dissolve just above the deconfinement transition, whereas other quarkonia appear to persist up to considerably higher temperatures.
Now how do people obtain these kinds of results? The spectral function is the function σ(ω) appearing in the Euclidean periodic-time equivalent of the Källén-Lehmann spectral representation

$$D(t) = \int_0^\infty d\omega\,\sigma(\omega)\,\frac{\cosh\left(\omega\left(t-\frac{\beta}{2}\right)\right)}{\sinh\left(\frac{\omega\beta}{2}\right)}$$

where the latter expression under the integral is the correlator for a free particle of mass ω, with β being the extent in the Euclidean time direction. So if you have measured the correlator D(t), you just invert this relation to get the spectral function, which contains all the information about the spectrum of the theory.
There is one lie in this last sentence, and that lie is the little word "just". The reason is that you are trying to reconstruct a continuous function σ(ω) from a small number of measured data points D(β i/Nt), making this an ill-posed problem.
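One can see the ill-posedness concretely by discretizing the integral and looking at the conditioning of the resulting kernel matrix. A minimal sketch in Python (all numbers here — Nt, β, the ω grid — are illustrative choices, not taken from the paper):

```python
import numpy as np

beta = 1.0                                 # Euclidean temporal extent
Nt = 16                                    # number of timeslices
t = beta * np.arange(1, Nt) / Nt           # timeslices (t = 0 omitted)
omega = np.linspace(0.1, 20.0, 200)        # discretized energy grid
domega = omega[1] - omega[0]

# Finite-temperature kernel K(t, omega) = cosh(omega(t - beta/2)) / sinh(omega*beta/2)
K = np.cosh(np.outer(t - beta / 2, omega)) / np.sinh(omega * beta / 2)

# The discretized inversion D = K @ sigma * domega is ill-posed: the kernel
# matrix is wildly ill-conditioned, so tiny noise in D(t) gets enormously
# amplified in any naive inversion for sigma(omega).
print(f"condition number: {np.linalg.cond(K):.2e}")
```

The point is that we are asking for 200 (in principle, infinitely many) numbers from 15 noisy ones, and the exponentially decaying kernel washes out almost all the structure of σ(ω) before it reaches D(t).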
The way around that people use lies in a method called Maximum Entropy Method (MEM) image restoration, which is also used to restore noisy images in astronomy. (Unfortunately it is bound by the rules of logic and hence cannot do all the wonderful and impossible things, such as looking through opaque foreground objects or enlarging a section to reveal details much smaller than an original pixel, that the writers of CSI or Numb3rs are so fond of showing to an impressionable public in the interest of deterrence, but it is still pretty amazing -- just google and look at some of the "before and after" pictures.)
The basis for MEM is Bayes' theorem,

$$P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}\,,$$
which relates the conditional probability for A given B to that for B given A. Using Bayes' theorem, the probability to have the spectral function σ given the data D and fundamental assumptions H (such as positivity and high-energy asymptotics) is

$$P(\sigma|D,H) \propto P(D|\sigma,H)\,P(\sigma|H)\,,$$
where conventionally P(D|σ,H) is known as the likelihood function (it tells you how likely your data are under the assumptions), and P(σ|H) is known as the prior probability (it tells you how probable a given σ is prior to any observation D). The likelihood function may be taken to be

$$P(D|\sigma,H) = \frac{1}{Z}\,e^{-\chi^2/2}\,,$$

where χ2 is the standard χ2 statistic measuring how well the D(t) predicted by σ fits your measured data D(βi/Nt), and Z is a normalisation factor. For the prior probability, one takes the exponential
of the Shannon-Jaynes entropy,

$$P(\sigma|H,\alpha,m) \propto e^{\alpha S}\,,\qquad S = \int d\omega\left[\sigma(\omega) - m(\omega) - \sigma(\omega)\log\frac{\sigma(\omega)}{m(\omega)}\right]\,,$$
where m is a function called the default model, and α is a positive real parameter.
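In a discretized setting, both ingredients are only a few lines of Python each (the uncorrelated-error simplification of χ2 below is an illustrative assumption; real analyses use the full covariance matrix of the data):

```python
import numpy as np

def chi2(sigma, K, D, err, dw):
    """chi^2 between the measured correlator D(t_i) (with uncorrelated
    errors err -- an illustrative simplification) and the correlator
    predicted by the trial spectral function sigma on the omega grid."""
    return np.sum(((K @ sigma * dw - D) / err) ** 2)

def shannon_jaynes(sigma, m, dw):
    """Discretized Shannon-Jaynes entropy
    S = int domega [sigma - m - sigma*log(sigma/m)].
    S <= 0 always, with S = 0 exactly when sigma == m."""
    return dw * np.sum(sigma - m - sigma * np.log(sigma / m))
```

Note that the entropy is maximal (zero) precisely at σ = m, which is what makes m the "default": in the absence of any pull from the data, the prior drives σ towards the default model.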
The most probable "image" σα for given α (and m) is then the solution to the functional differential equation

$$\left.\frac{\delta}{\delta\sigma(\omega)}\left(\alpha S - \frac{\chi^2}{2}\right)\right|_{\sigma=\sigma_\alpha} = 0\,.$$
The parameter α hence parametrises a tradeoff between minimising χ2 and maximising S, which corresponds to keeping σ close to m. Some MEM methods take α to be an arbitrary tunable parameter, whereas in others, to get the final output σMEM, one still has to average over α with the weight P(α|D,H,m), which can be computed using another round of Bayes' theorem. In practice, people appear to use various kinds of approximations. It should be noted that the final result

$$\sigma_{\mathrm{MEM}} = \int d\alpha\,\sigma_\alpha\,P(\alpha|D,H,m)$$

still depends on m, although this dependence should be small if m was a good default model.
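As a toy illustration of the fixed-α step, here is a minimal sketch that maximises Q = αS − χ2/2 numerically by adaptive-step gradient descent. The mock data, the flat default model, the parametrisation σ = exp(u) (which enforces positivity), and the choice α = 1 are all illustrative assumptions, not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, Nt = 1.0, 16
t = beta * np.arange(1, Nt) / Nt
omega = np.linspace(0.5, 10.0, 40)
dw = omega[1] - omega[0]
K = np.cosh(np.outer(t - beta / 2, omega)) / np.sinh(omega * beta / 2)

sigma_true = np.exp(-((omega - 3.0) ** 2) / 0.5)   # mock "true" spectral peak
err = 1e-2
D = K @ sigma_true * dw + err * rng.standard_normal(len(t))

m = np.full_like(omega, 0.1)                       # flat default model
alpha = 1.0

def neg_Q(u):
    """-Q = chi^2/2 - alpha*S, with sigma = exp(u) for positivity."""
    s = np.exp(u)
    chi2 = np.sum(((K @ s * dw - D) / err) ** 2)
    S = dw * np.sum(s - m - s * (u - np.log(m)))   # u - log(m) = log(s/m)
    return 0.5 * chi2 - alpha * S

def grad(u):
    """Gradient of -Q with respect to u (chain rule through s = exp(u))."""
    s = np.exp(u)
    gs = (K.T @ ((K @ s * dw - D) / err ** 2)) * dw + alpha * dw * (u - np.log(m))
    return s * gs

u0 = np.log(m)
f0 = neg_Q(u0)
u, f, step = u0.copy(), f0, 1e-3
for _ in range(2000):
    u_new = u - step * grad(u)
    f_new = neg_Q(u_new)
    if f_new < f:                                  # accept only improving steps
        u, f, step = u_new, f_new, step * 1.2
    else:
        step *= 0.5
sigma_mem = np.exp(u)
```

The resulting σMEM should develop a peak in the vicinity of the mock input peak; a real analysis would of course still need the average over α and careful checks of the default-model dependence described above.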
This is pretty cool stuff.