Saturday, June 30, 2012

Lattice 2012, Day Five

Hello for a final time from Cairns. The first plenary session of the morning had a somewhat reduced occupation number, as is usual the morning after the banquet. The first speaker was Maria Paola Lombardo, who spoke about high-temperature QCD on the lattice. Finite-temperature results are still dominated by staggered fermions, although there is a noticeable discrepancy in the equation of state between HISQ and stout-smeared quarks, and Wilson simulations are beginning to catch up. There are still many open issues in this field, including the fate of the U(1)A symmetry at high temperature and the effects of a θ term and of magnetic fields. On the other hand, quarkonium suppression is predicted well by the lattice, and for fluctuations the lattice measurements and hard thermal loop calculations meet up at around 200 MeV.

The second talk was on strategies for finite chemical potential by Gert Aarts. At finite chemical potential, the fermionic determinant is complex, which precludes a simple probability interpretation, rendering ordinary Markov Chain-based Monte Carlo simulations impossible (the "sign problem"). Replacing the complex determinant by its absolute value, a technique known as phase quenching, leads to poor overlap and the so-called "Silver Blaze" problem, i.e. that extreme cancellations of highly oscillatory integrands are required to get the correct behaviour. It is therefore of interest to study models that have no sign problem, and these include two-colour QCD, and QCD with the gauge group G2 (one of the exceptional simple Lie groups). For real-world QCD, which does have a sign problem, there are a number of approaches to avoiding it: some groups simulate at zero chemical potential and measure susceptibilities to perform a Taylor expansion in μ, others use an imaginary chemical potential (where the fermion determinant is real) and try to analytically continue to real μ. A completely different approach is given by complex Langevin dynamics, where all field variables are complexified and subjected to Langevin evolution. This method seems to work well in resolving the Silver Blaze problem for many models; however, it is known to sometimes converge to the wrong limit, so further theoretical work is certainly needed.
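To make the "Silver Blaze" cancellations a bit more tangible, here is a toy illustration (my own one-dimensional example, nothing to do with an actual lattice simulation): for a complex weight w(x) = exp(-x²/2 + iμx), phase quenching means sampling from |w(x)| and reweighting by the residual phase. The average phase then decays like exp(-μ²/2), the toy analogue of the exponentially hard cancellations one faces at large μ or volume.

```python
# Toy sign problem: complex weight w(x) = exp(-x^2/2) * exp(i*mu*x).
# Phase quenching: sample from |w(x)| (a unit Gaussian) and reweight
# by the phase exp(i*mu*x).  The average phase <exp(i*mu*x)> equals
# exp(-mu^2/2) analytically, so the signal dies rapidly as mu grows.
import numpy as np

def average_phase(mu, n_samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)   # sample from |w|, a unit Gaussian
    return np.mean(np.exp(1j * mu * x))  # reweight by the residual phase

for mu in (0.5, 1.0, 2.0):
    est = average_phase(mu).real
    exact = np.exp(-mu**2 / 2)           # characteristic function of N(0,1)
    print(f"mu={mu}: <phase> = {est:.3f}  (exact {exact:.3f})")
```

The number of samples needed to resolve the average phase against its statistical noise grows like exp(+μ²), which is the toy version of why phase quenching alone cannot rescue large-μ simulations.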

The second plenary began with a talk by Kim Splittorff about chiral dynamics with Wilson fermions. Here there are two competing scenarios for approaching vanishing quark mass, the Aoki phase and the Sharpe-Singleton scenario, where in the latter case the pion mass never vanishes. In the quenched case, only the Aoki phase exists, but in unquenched simulations both scenarios have been observed. In Wilson chiral perturbation theory, it turns out that the sign of a given combination of low-energy constants parametrising the breaking of chiral symmetry by the Wilson term decides which scenario occurs. The eigenvalue density of the Dirac operator can also be determined analytically using Wilson χPT in the ε-regime, and the analytical results agree with simulations, finding an a/√V scaling for the lowest eigenvalue.

Next was Masanori Hanada speaking about Monte Carlo approaches to string/M theory. Via the AdS/CFT correspondence, supergravity/string theories can be related to Super-Yang-Mills theories. In some regimes, the string theory is easier to calculate with, and hence string calculations can be used to make statements about some aspects of gauge theories. In other regimes, which apparently are of particular interest to string theorists, the SYM theory is easier to work with, and hence lattice simulations can be used to make predictions about aspects of string theory. In particular, a specific kind of Chern-Simons theory with matter (the ABJM theory) may apparently be the definition of M theory, the elusive unifying description of string theory. There also seems to be the possibility that simulations of certain zero-dimensional models may contain the key to why there are three spatial dimensions and the Universe is expanding.

After this, the Ken Wilson Lattice Award 2012 was announced: it goes to Blum et al. for their paper on K->ππ decays.

Then an invitation was given to a summer school in Brazil, and finally your correspondent could invite the conference participants to Mainz for next year.

After the lunch break, there were parallel sessions, and after the coffee break, there was a final plenary session. The first speaker of the latter was Peter Boyle presenting the BlueGene/Q system. Lattice QCD presents a special design challenge to designers of HPC systems, since achieving scalability requires that the network bandwidth and the memory bandwidth be about equal and closely matched to the FPU speed. With input from lattice physicists, this was realised in the BG/Q system. As a result, the BG/Q has been able to scale to unprecedented performances, smashing the Petaflop barrier by achieving 3.07 PFlop/s sustained performance, while being the most energy-efficient computer in the world.

After this, Gilberto Colangelo presented the FLAG-2 group and its work. FLAG-2 has moved beyond FLAG by also including physicists from the US and Japan, and by broadening its mandate to include also heavy-quark observables and αs. FLAG-2 expects to publish a review of results published up to the end of 2012 in early 2013, and every two years thereafter. End users will always be reminded to cite not just the FLAG review, but also the original paper(s).

The last plenary talk was given by Tom Blum, who spoke about the anomalous magnetic moment of the muon. The 3.5σ tension (which is about two times the size of the electroweak corrections) between current theory and experiment is one of the biggest hints of BSM physics that exists so far. However, progress is hindered by the theoretical uncertainties, the leading contribution to which is the uncertainty on the hadronic effects. The leading hadronic effect is the hadronic vacuum polarisation, on which much work is being done, including by the Mainz group and ETMC, with updated and improved results presented at this conference. Tom Blum presented another avenue towards improving the precision of the lattice predictions by using all-mode-averaging. The next-largest contribution is hadronic light-by-light scattering, which naively would be an infeasible O(V²) calculation, but which can be attacked using simulations of QCD+QED with muons. This is particularly important, since reducing the error on this contribution to 10% would increase the tension (assuming the means remained the same) to the 5σ (="discovery") level.

After the last plenary, Derek Leinweber spoke a few closing words and the lattice community scattered again, to reconvene next year in Mainz.

This ends our coverage of Lattice 2012. I will be putting up a summary of what I learned from Cairns for organising Lattice 2013 in Mainz later, and I will keep you updated on the preparations for Lattice 2013 as it approaches.

Friday, June 29, 2012

Lattice 2012, Days Three and Four

Apologies for the late update. Last night I was too tired (or tipsy, your guess) to blog.

Wednesday was the customary short day; there were plenary talks in the morning and excursions in the afternoon. Having already had a look at the wonders of the Great Barrier Reef in better weather before the conference, I decided to go to the zoo. In case that sounds kind of boring, let me tell you that the Cairns Tropical Zoo hosts some rather impressive animals; the saltwater crocodiles in particular are scarily big (one of them was known to eat cattle before he got captured), and the many birds and lizards are just very different from anything on the Northern hemisphere (and there were koalas and kangaroos, too).

Thursday started with another experimental talk, presented by Justine Serrano of LHCb, who spoke about the many flavour physics observations made by that collaboration. Highlights included pushing the bounds for the branching ratio Bs->μμ very close to the Standard Model prediction (this is an observable for which most of the uncertainty actually comes from lattice QCD predictions of fBs) as well as observing the decay B->πμμ for the first time (this is the rarest B decay ever observed). New measurements of φs from Bs->J/ψφ and Bs->J/ψππ are compatible with zero, and the parameter space for many new physics models has already been tightly constrained by LHCb. There is some tension in the (poorly known) UT angle γ and in the isospin asymmetry in B->Kμμ and B->K*μμ, but the latter discrepancy seems most likely to be a fluctuation that will go away with more data. LHCb has also made the most precise measurements of B spectroscopy so far. With an upgrade ahead that is intended to increase the data acquisition rate by a factor of 10-20, LHCb will certainly continue to impress in the future.

The next speaker was Cecilia Tarantino talking about the theoretical side of flavour physics. Here one of the most pressing issues is the inclusive-exclusive discrepancy in Vub and Vcb, where in each case the inclusive and exclusive measurements differ by more than 2σ. A unitarity triangle analysis favours the exclusive value for Vub and the inclusive value for Vcb; in each case more precise lattice input for the exclusive determination is needed, along with more experimental data for the inclusive one. Another tension arising in the UT fit comes from the branching ratio BR(B->τν); this cannot be explained in the type-II two-Higgs-doublet model, but more elaborate two-Higgs-doublet models might still explain it. With D mixing now entering the stage, we might become sensitive to different potential new physics, since the charm is an up-type quark; the fDs puzzle, on the other hand, has now been resolved: the lattice values went up and the experimental ones came down.

The second plenary opened with a talk by Huey-Wen Lin on hadron structure from the lattice, where there are a number of open puzzles, among the most pressing of which are the nucleon charge radii and the axial charge of the nucleon. It is likely that many systematic effects contribute here, including excited-state effects, which can be overcome by using the summation method or by explicitly including excited states in fits.

This was followed by a talk by Ross Young about nucleon strangeness measurements and their impact on dark matter searches. The theoretical uncertainties of dark matter searches are dominated by the uncertainties of the nucleon sigma terms, in particular the strange sigma term. These can be determined either directly from an analysis of nucleon three-point functions, or indirectly via the Feynman-Hellmann theorem. Modern estimates of the nucleon strangeness (and their errors) are much lower than those of ten years ago, and lattice QCD can contribute significantly to reducing the uncertainties of searches for the stuff that makes up one quarter of the Universe, but of which we so far somewhat embarrassingly have no idea what it actually is.
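For the indirect route, the key identity is the Feynman-Hellmann theorem, which (in its standard schematic form, notation mine) relates the strange sigma term to the dependence of the nucleon mass on the strange quark mass:

```latex
% Feynman-Hellmann relation for the strange sigma term of the nucleon:
% instead of computing the three-point function <N| \bar{s}s |N> directly,
% one differentiates the nucleon mass M_N with respect to m_s.
\sigma_s \;=\; m_s \,\langle N |\, \bar{s} s \,| N \rangle
         \;=\; m_s \,\frac{\partial M_N}{\partial m_s}
```

In practice this means the sigma term can be extracted from the slope of nucleon mass fits across ensembles with varying sea strange quark mass, trading a noisy disconnected three-point calculation for a (partial) derivative of spectrum data.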

The last plenary talk of the morning was given by Walter Freeman, who spoke about determining electromagnetic sea effects on hadron polarisabilities by reweighting. He compared various approaches to reducing the noise of stochastic estimators for reweighting factors, finding that neither projecting out the low modes nor introducing intermediate reweighting steps helped for this case, but that looking at derivatives of the reweighting factors instead and performing a hopping parameter expansion did help.

In the afternoon there were parallel sessions. Mainz graduate student Vera Gülpers gave a very nice talk on measuring the scalar form factor of the pion. My own talk was just an update on the ongoing radiative improvement of NRQCD, so actually not terribly exciting.

In the evening there was the conference banquet, which was very good; however, the waiting staff took the slightly strange decision to serve the chicken or vegetarian entree and the meat or fish main course to people based on whether they were seated on even or odd seats (I have no idea whether this might be an Australian custom, though).

Tuesday, June 26, 2012

Lattice 2012, Day Two

Hello again from Cairns. The first plenary of the second day began with a talk by Joel Giedt on technicolor-related theories on the lattice. Since two of the main theoretical problems facing the Standard Model, namely the hierarchy problem and the triviality problem, are related to the existence of a fundamental scalar, a clean solution to those problems might be to assume that no fundamental Higgs field exists and chiral symmetry is instead broken by a vacuum condensate of some new fermion fields interacting under some new "technicolor" gauge interaction. In order for such a fermion condensate to be able to give masses not just to the W and Z bosons, but also to the Standard Model fermions, there must be some interaction ("extended technicolor") mediating four-fermion interactions between the new and SM fermions, and in order for the resulting fermion masses to not be unreasonably suppressed, the technicolor theory must be slow-running ("walking") or conformal with an IR fixed point. Possible candidates for such models include QCD with Nf=12 flavours, or with adjoint fermions. It appears that different groups studying these models are so far obtaining results that are impossible to reconcile with each other, so the picture still seems to be fairly confused.

Next was the traditional experimental talk, delivered by Geoffrey Taylor of ATLAS. As we all know, the LHC is running admirably and has delivered an unprecedented luminosity, which has allowed the "rediscovery" of the Standard Model to be performed very rapidly. No signs of BSM physics have been found so far, but exclusion limits on many SUSY particles, Kaluza-Klein modes and assorted exotics have reached the TeV scale, and large regions of the parameter space of many SUSY models have been ruled out. Also, the Standard Model Higgs has been ruled out above a mass of 130 GeV, but there is a tantalizing excess of events across multiple channels in the 120-130 GeV range. If this excess is the Higgs, an excess above SM expectations in the γγ channel might suggest that this is either not the SM Higgs, or that there are new particles mediating the Higgs decays. Of course there wasn't going to be any big reveal from experiments at the lattice conference -- that will be reserved (assuming there is anything to reveal already) for ICHEP: the presentation of the results from CERN will be live-streamed on 4th July 2012. Until then the bets as to the next Nobel Prize are still open ...

The second plenary started after the coffee break with Norman Christ speaking about kaon mixing and K->2π decays on the lattice. These are very hard observables to treat, but working at (almost) physical quark masses and with a chiral fermion formulation helps significantly; the use of non-perturbative renormalisation and extensions to the Lüscher formula also contributed to make the recent results that were shown possible.

This was followed by a talk by Takumi Doi presenting the work of the HALQCD collaboration on nuclear physics from lattice QCD. HALQCD measure Bethe-Salpeter amplitudes on the lattice and infer a non-local potential from them, which can then be expanded into local interactions. Besides nucleon-nucleon interactions, they have also studied hyperon-nucleon potentials and three-nucleon forces. A new contraction algorithm has helped them to significantly reduce the computational effort for these multi-quark correlators.

The last plenary talk was given by Marco Panero who spoke about Large-N gauge theories on the lattice. In the limit of an infinite number of colours and vanishing coupling (such that the 't Hooft coupling λ = g²N remains finite), gauge theories are known to simplify significantly -- perturbatively, only the planar diagrams without dynamical fermion loops survive, with all other classes of diagrams suppressed by some power of 1/N. Non-perturbatively, numerical studies at N>3 suggest that the large-N limit is approached smoothly, with many thermodynamic observables showing only a trivial N-dependence.
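The diagram counting behind this statement can be written down schematically (this is the textbook 't Hooft argument, in my notation, not anything specific to Marco's talk):

```latex
% 't Hooft limit: N -> infinity with the 't Hooft coupling held fixed,
%   lambda = g^2 N = const.
% A connected vacuum diagram whose double-line graph has genus h scales as
%   A_h ~ N^{2-2h} f_h(lambda),
% so planar diagrams (h = 0) dominate, each handle costs a factor 1/N^2,
% and each dynamical quark loop is suppressed by a further factor of 1/N.
\lambda = g^2 N\,, \qquad
A \;\sim\; \sum_{h=0}^{\infty} N^{\,2-2h}\, f_h(\lambda)
```

This is why the "trivial N-dependence" of thermodynamic observables seen on the lattice is exactly what one expects: the leading large-N behaviour is fixed, and corrections start only at relative order 1/N² for the pure gauge theory.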

In the afternoon there were parallel talks, and after that the poster session (Australian snacks are tasty, and Australian wines drink nicely). Certainly one of the prettiest posters was the one of Benjamin Jäger and Thomas Rae (both from Mainz) who presented the proposal and first tests of an anisotropic smearing method designed to improve the signal-to-noise ratio for hadrons with non-vanishing momentum.

Monday, June 25, 2012

Lattice 2012, Day One

Hello from Lattice 2012 in Cairns, Queensland, Australia (the tropical "down under"). I suppose this year we will have particularly many readers on this blog, since so many people couldn't make the long trip; I will try not to disappoint them too much.

Having had a couple of days to get over the jetlag and the acclimatization to the tropical climate here in Cairns, as well as to recover from the 32+ hour trip, I was quite ready for the conference to start. The reception last night was pleasant, and the staff are doing a great job keeping everything well-organised.

Today, the first session (after the Welcome by Derek Leinweber) was started by Stefan Schaefer, who spoke about prospects and challenges of dynamical fermion simulations. Over the last few years, the parameters of what would be considered a typical dynamical simulation have been steadily approaching the physical point in the pion mass while ever larger and finer lattices are being studied. This progress has been made possible not just by Moore's law and increases in parallelism, but also and even more significantly by algorithmic improvements in the MD integrators used in HMC simulations, the solvers and preconditioners used in solving the Dirac equation (such as local deflation), and the treatment of the fermion determinant (e.g. the Hasenbusch trick or the DD-HMC), all of which are to some extent interrelated (in particular Stefan pointed out that a good frequency splitting in the determinant reduces force fluctuations, thereby aiding Omelyan-type integrators by making the difference between the shadow Hamiltonian and the real one more constant). One major issue confronting dynamical simulations at fine lattice spacings is the slowing down of the evolution of the topological charge as the continuum limit is approached and the topological sectors emerge, leading to potentially very long autocorrelation times. One possible solution to this problem is to simulate using open boundary conditions in time, as proposed by Martin Lüscher and now implemented in the openQCD program, and first results demonstrating the absence of the problem in this setup were shown. I suppose it remains to be seen how the effects of the open boundary conditions on hadronic correlators can be handled (they are probably quite suppressed in the central region for large enough time extent).
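For readers who have not met the shadow-Hamiltonian idea before, here is a deliberately tiny HMC sketch (my own toy for a one-dimensional Gaussian action, not openQCD or any production code) using the second-order Omelyan minimum-norm integrator. Because the integrator exactly conserves a Hamiltonian close to the true one, the energy violation ΔH stays small and the Metropolis acceptance stays high:

```python
# Toy HMC for the action S(x) = x^2/2, i.e. sampling a unit Gaussian,
# using the second-order Omelyan ("2MN") integrator.  The integrator's
# near-conservation of a shadow Hamiltonian keeps exp(-dH) close to 1.
import numpy as np

THETA = 0.1931833275037836  # Omelyan minimum-norm coefficient

def grad(x):                # dS/dx for S(x) = x^2/2
    return x

def omelyan_trajectory(x, p, eps, n_steps):
    # One trajectory of n_steps Omelyan steps: T(th e) V(e/2) T((1-2th)e) V(e/2) T(th e)
    for _ in range(n_steps):
        x += THETA * eps * p
        p -= 0.5 * eps * grad(x)
        x += (1.0 - 2.0 * THETA) * eps * p
        p -= 0.5 * eps * grad(x)
        x += THETA * eps * p
    return x, p

def hmc(n_traj=5000, eps=0.2, n_steps=5, seed=1):
    rng = np.random.default_rng(seed)
    x, samples, accepted = 0.0, [], 0
    for _ in range(n_traj):
        p = rng.standard_normal()                # refresh momentum
        h_old = 0.5 * p * p + 0.5 * x * x
        x_new, p_new = omelyan_trajectory(x, p, eps, n_steps)
        h_new = 0.5 * p_new * p_new + 0.5 * x_new * x_new
        if rng.random() < np.exp(h_old - h_new): # Metropolis accept/reject
            x, accepted = x_new, accepted + 1
        samples.append(x)
    return np.array(samples), accepted / n_traj

samples, acc = hmc()
print(f"acceptance = {acc:.2f}, <x^2> = {np.var(samples):.2f}")
```

The sampled variance comes out close to the exact value 1, with near-unit acceptance. None of the real difficulties (fermion determinants, force splitting, topology freezing) appear in one dimension, of course; the sketch only illustrates the molecular-dynamics-plus-Metropolis structure that those improvements plug into.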

Next was Jo Dudek talking about spectroscopy, with a focus on resonances and more qualitative statements rather than on precision physics with stable states. This is an area in which a number of experiments (including glueX, COMPASS and BES-III) are interested, but in which theory is still ahead of experiment, in particular as far as the search for hybrids is concerned; exotic hybrids in particular would present "smoking gun" evidence of gluonic excitations in an experiment, but have not yet been seen. The work of the HadSpec collaboration, which Jo mainly presented, relies on the "distillation" approach for building correlation functions, and on the variational method with an operator basis constructed from quark bilinears with some covariant derivatives added in and the resulting operators put into definite continuum irreps and subduced to the corresponding lattice irreps. The results then allow one to identify the continuum spin from which a given lattice state (at least predominantly) came on the basis of the generalised eigenvectors going with it. Moreover, it is possible to identify likely hybrids as presumably mainly containing a chromomagnetic excitation in addition to their quark model content, and to make some phenomenological statements about excitation energies and quark model identifications. The advantages of the distillation approach were demonstrated in the example of the η/η' system, where the disconnected parts are much less noisy in this way than with other approaches.

After the coffee break, Daniel Mohler continued the topic of resonances with his talk reviewing methods and results for determining resonance parameters. Besides the now widely-used Lüscher method, he explained the histogram method (which at least I had not yet heard of) and reviewed a study comparing the two. In addition, recent results for a number of resonances including the ρ, the Kπ, Dπ and D*π channels, were reviewed, and some even compared to experiment (which seemed to agree unexpectedly well given the limitations of the lattice results). As Daniel summarised, this is an area that is still in its infancy, but making good progress, even though a firm theoretical basis for treating the inelastic case appears to be lacking.
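Since the Lüscher method underpins most of the results Daniel showed, it may be worth writing down its basic content (schematically, for s-wave scattering in the centre-of-mass frame; my notation, not taken from the talk):

```latex
% Luscher's finite-volume quantisation condition (schematic, s-wave,
% centre-of-mass frame): a two-particle energy level
%   E = 2\sqrt{m^2 + k^2}
% measured in a box of spatial size L determines the infinite-volume
% elastic phase shift delta(k) through
\delta(k) + \phi(q) = n\pi\,, \qquad q = \frac{kL}{2\pi}\,,
% where phi(q) is a known kinematic function built from the generalised
% zeta function Z_{00}(1; q^2).
```

The resonance mass and width are then extracted by mapping out δ(k) from several volumes or moving frames and fitting, e.g., a Breit-Wigner form; the "inelastic case" Daniel mentioned as lacking a firm theoretical basis is the regime where more than one two-particle channel is open.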

The next speaker was Taku Izubuchi, who spoke about QCD+QED on the lattice. Isospin symmetry is broken not just by the different up and down quark masses, but also by electromagnetic effects, which need to be treated in order to go beyond the isospin limit. Another reason for being interested in QED effects is that the hadronic contributions to the anomalous magnetic moment of the muon are the source of the dominant theoretical uncertainty for this precision observable, in which there is some persistent tension between SM predictions and experiment; moreover, the next-to-leading hadronic contribution involves the hadronic light-by-light scattering amplitude, which can probably only be computed in a QCD+QED simulation of some sort. By adding quenched non-compact QED fields onto an existing lattice ensemble and reweighting the individual configurations accordingly, it is now possible to simulate QCD+QED, and this has been used to determine the electromagnetic effects on masses and decay constants; the difference of the up and down quark masses has also been determined, along with its effects on the nucleon mass difference.

The last plenary speaker was Tatsu Misumi with a talk about new fermion discretisations. He summarised the recent developments in this field by demonstrating some of the connections between the different recent proposals of new fermion actions, including what he called "flavored mass" (which includes the staggered overlap fermions of Adams), the "central branch" (Wilson fermions without the on-site term) and the "flavored chemical potential" (minimally doubled fermions) formalisms. In particular the Adams case of the "flavored mass" formalism was shown to possess attractive features, such as reducing the numerical cost for overlap fermions and the taste breaking effects for staggered fermions, while exactly preserving hypercubic symmetry (which is broken e.g. for the minimally doubled fermions).

After the lunch break (let it be noted that eating out in Cairns [perhaps generally in Australia? -- I wouldn't know] is rather expensive) there were parallel sessions. After the last of those, I had a slightly heated discussion about the one and only truly correct way to automate lattice perturbation theory (my sincere apologies to anyone offended by the raised voices -- it was all settled peacefully in the end, possibly just in time before the Convention Centre staff would have thrown us out of the building to lock up).