Saturday, June 30, 2012

Lattice 2012, Day Five

Hello for a final time from Cairns. The first plenary session of the morning had a somewhat reduced occupation number, as is usual the morning after the banquet. The first speaker was Maria Paola Lombardo, who spoke about high-temperature QCD on the lattice. Finite-temperature results are still dominated by staggered fermions, although there is a noticeable discrepancy in the equation of state between HISQ and stout-smeared quarks, and Wilson simulations are beginning to catch up. There are still many open issues in this field, including the fate of the U(1)A symmetry at high temperature and the effects of a θ term and of magnetic fields. On the other hand, quarkonium suppression is predicted well by the lattice, and for fluctuations the lattice measurements and hard-thermal-loop calculations meet up at around 200 MeV.

The second talk, by Gert Aarts, was on strategies for finite chemical potential. At finite chemical potential the fermionic determinant is complex, which precludes a simple probability interpretation and renders ordinary Markov-chain Monte Carlo simulations impossible (the "sign problem"). Replacing the complex determinant by its absolute value, a technique known as phase quenching, leads to poor overlap and the so-called "Silver Blaze" problem, i.e. extreme cancellations of highly oscillatory integrands are required to obtain the correct behaviour. It is therefore of interest to study models that have no sign problem; these include two-colour QCD and QCD with the gauge group G2 (one of the exceptional simple Lie groups). For real-world QCD, which does have a sign problem, there are a number of approaches to evading it: some groups simulate at zero chemical potential and measure susceptibilities to perform a Taylor expansion in μ, while others use an imaginary chemical potential (where the fermion determinant is real) and attempt an analytic continuation to real μ. A completely different approach is complex Langevin dynamics, in which all field variables are complexified and subjected to Langevin evolution. This method seems to work well in resolving the Silver Blaze problem for many models; however, it is known to sometimes converge to the wrong limit, so further theoretical work is certainly needed.
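To illustrate the complex Langevin idea in the simplest possible setting, here is a small toy sketch of my own (not code from the talk): a single variable with a complex Gaussian action, for which the complexified Langevin evolution reproduces the exact expectation value ⟨z²⟩ = 1/σ even though the weight exp(-S) is complex and cannot serve as a probability.

```python
import numpy as np

# Toy sketch of complex Langevin dynamics (my own illustration, not from the talk):
# one variable with action S(z) = 0.5 * sigma * z**2, sigma complex, so exp(-S) is
# not a probability weight. Complexify z and evolve dz = -dS/dz dt + dW with real
# Gaussian noise; for this model the evolution converges to <z^2> = 1/sigma.

sigma = 1.0 + 1.0j                  # complex "mass" parameter (arbitrary toy value)
dt, ntherm, nmeas = 1e-3, 10_000, 500_000

rng = np.random.default_rng(1)
z = 0.0 + 0.0j
acc = 0.0 + 0.0j
for step in range(ntherm + nmeas):
    z += -sigma * z * dt + np.sqrt(2.0 * dt) * rng.standard_normal()  # drift + noise
    if step >= ntherm:
        acc += z * z

print("complex Langevin <z^2> ~", acc / nmeas)
print("exact 1/sigma          =", 1.0 / sigma)
```

The catch, of course, is exactly the point made in the talk: for such simple models the method provably converges to the right answer, while for more QCD-like theories it can converge to a wrong limit, which is why further theoretical control is needed.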

The second plenary began with a talk by Kim Splittorff about chiral dynamics with Wilson fermions. Here there are two competing scenarios for the approach to vanishing quark mass, the Aoki phase and the Sharpe-Singleton scenario; in the latter the pion mass never vanishes. In the quenched case only the Aoki phase exists, but in unquenched simulations both scenarios have been observed. In Wilson chiral perturbation theory, it turns out that the sign of a particular combination of low-energy constants parametrising the breaking of chiral symmetry by the Wilson term decides which scenario occurs. The eigenvalue density of the Dirac operator can also be determined analytically using Wilson χPT in the ε-regime, and the analytical results agree with simulations, finding an a/√V scaling for the lowest eigenvalue.
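Schematically (my own paraphrase of the standard Wilson χPT picture, with convention-dependent factors suppressed), the pion mass at leading order picks up an O(a²) term governed by a low-energy constant c₂, with m' the O(a)-shifted quark mass:

\[
  m_\pi^2 \;\sim\; 2B\,m' \;+\; c_2\,a^2 .
\]

In the usual convention, one sign of c₂ drives the system into the Aoki phase, where some pions become massless and flavour/parity are spontaneously broken, while the other sign gives a first-order transition at which m_π² stays bounded below by a term of order |c₂|a², i.e. the Sharpe-Singleton scenario.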

Next was Masanori Hanada, speaking about Monte Carlo approaches to string/M theory. Via the AdS/CFT correspondence, supergravity/string theories can be related to Super-Yang-Mills theories. In some regimes the string theory is easier to calculate with, and hence string calculations can be used to make statements about certain aspects of gauge theories. In other regimes, which apparently are of particular interest to string theorists, the SYM theory is easier to work with, and hence lattice simulations can be used to make predictions about aspects of string theory. In particular, a specific kind of Chern-Simons theory with matter (the ABJM theory) may provide a definition of M theory, the elusive unifying description of string theory. There also seems to be a possibility that simulations of certain zero-dimensional models hold the key to why there are three spatial dimensions and why the Universe is expanding.

After this, the Ken Wilson Lattice Award 2012 was announced: it goes to Blum et al. for their paper on K->ππ decays.

Then an invitation was given to a summer school in Brazil, and finally your correspondent could invite the conference participants to Mainz for next year.

After the lunch break there were parallel sessions, and after the coffee break there was a final plenary session. Its first speaker was Peter Boyle, presenting the BlueGene/Q system. Lattice QCD poses a special challenge to designers of HPC systems, since achieving scalability requires that the network bandwidth and the memory bandwidth be roughly equal and closely matched to the floating-point performance. With input from lattice physicists, this balance was realised in the BG/Q system. As a result, the BG/Q has been able to scale to unprecedented performance, smashing through the Petaflop barrier with 3.07 PFlop/s sustained, while also being the most energy-efficient computer in the world.

After this, Gilberto Colangelo presented the FLAG-2 group and its work. FLAG-2 goes beyond FLAG by also including physicists from the US and Japan, and by broadening its mandate to cover heavy-quark observables and αs as well. FLAG-2 expects to publish a review of results published up to the end of 2012 in early 2013, and to update it every two years thereafter. End users will always be reminded to cite not just the FLAG review, but also the original paper(s).

The last plenary talk was given by Tom Blum, who spoke about the anomalous magnetic moment of the muon. The 3.5σ tension between current theory and experiment (which is about twice the size of the electroweak corrections) is one of the strongest hints of BSM physics so far. However, progress is hindered by the theoretical uncertainties, the largest of which come from the hadronic contributions. The leading hadronic contribution is the hadronic vacuum polarisation, on which much work is being done, including by the Mainz group and ETMC, with updated and improved results presented at this conference. Tom Blum presented another avenue towards improving the precision of the lattice predictions, namely all-mode-averaging. The next-largest contribution is hadronic light-by-light scattering, which naively would be an infeasible O(V²) calculation, but which can be attacked using simulations of QCD+QED with muons. This is particularly important, since reducing the error on this contribution to 10% would increase the tension (assuming the central values remained the same) to the 5σ ("discovery") level.
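As a rough sketch of the all-mode-averaging idea (my own summary of the general structure, not of the specific implementation shown in the talk): an observable is estimated from many cheap, approximate evaluations plus a bias correction from a few exact ones,

\[
  O^{\mathrm{AMA}} \;=\; \frac{1}{N_{\mathrm{appx}}}\sum_{g=1}^{N_{\mathrm{appx}}} O^{\mathrm{appx}}_g
  \;+\; \frac{1}{N_{\mathrm{exact}}}\sum_{e=1}^{N_{\mathrm{exact}}}\Bigl(O^{\mathrm{exact}}_e - O^{\mathrm{appx}}_e\Bigr),
\]

which remains unbiased as long as the approximation transforms covariantly under the symmetry (e.g. translations) used to generate the many source positions, while the cheap first term does most of the work in reducing the statistical error.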

After the last plenary, Derek Leinweber spoke a few closing words and the lattice community scattered again, to reconvene next year in Mainz.

This ends our coverage of Lattice 2012. Later I will be putting up a summary of what I learned from Cairns for organising Lattice 2013 in Mainz, and I will keep you updated on the preparations for Lattice 2013 as it approaches.