Friday, November 21, 2014

Scientific Program "Fundamental Parameters of the Standard Model from Lattice QCD"

Recent years have seen a significant increase in the overall accuracy of lattice QCD calculations of various hadronic observables. Results for quark and hadron masses, decay constants, form factors, the strong coupling constant and many other quantities are becoming increasingly important for testing the validity of the Standard Model. Prominent examples include calculations of Standard Model parameters, such as quark masses and the strong coupling constant, as well as the determination of CKM matrix elements, which is based on a variety of input quantities from experiment and theory. In order to make lattice QCD calculations more accessible to the entire particle physics community, several initiatives and working groups have sprung up, which collect the available lattice results and produce global averages.

We are therefore happy to announce the scientific program "Fundamental Parameters of the Standard Model from Lattice QCD" to be held from August 31 to September 11, 2015 at the Mainz Institute for Theoretical Physics (MITP) at Johannes Gutenberg University Mainz, Germany.

This scientific programme is designed to bring together lattice practitioners with members of the phenomenological and experimental communities who are using lattice estimates as input for phenomenological studies. In addition to sharing expertise among these communities, the aim of the programme is to identify key quantities which allow for tests of the CKM paradigm with greater accuracy, and to discuss the procedures used to arrive at more reliable global estimates.

We would like to invite you to consider attending this programme and to apply through our website. After the deadline (March 31, 2015), an admissions committee will evaluate all the applications.

Among other benefits, MITP offers all its participants office space and access to computing facilities during their stay. In addition, MITP will cover local housing expenses for accepted participants; the MITP team will individually arrange and book the accommodation for them.

Please do not hesitate to contact us if you have any questions.

We hope you will be able to join us in Mainz in 2015!

With best regards,

the organizers:
Gilberto Colangelo, Georg von Hippel, Heiko Lacker, Hartmut Wittig

Monday, June 30, 2014

LATTICE 2014, Day Six

The last day of the conference started out with a sequence of topical talks. First was Massimo D'Elia speaking about Lattice QCD with purely imaginary sources at zero and non-zero temperature. Contrary to what the name might suggest, an imaginary source is a source term that can be coupled to the action so as to keep e^{-S} real and positive. Examples include an imaginary chemical potential, an imaginary θ term, or an external electromagnetic field with a real magnetic or imaginary electric field strength. Applications include the study of the curvature of the critical line near μ=0 and the nature of the Roberge-Weiss phase transition, as well as the determination of electric dipole moments and the magnetic properties of nuclear matter.

Next was Tilo Wettig introducing the QPACE 2 machine. QPACE now stands for "QCD Parallel Computing Engine" (as there is no more Cell processor involved). Each compute card consists of four Xeon Phi Knights Corner processors linked by a PCI Express bus, plus a weak CPU that is used only for booting. The compute cards use a novel patented "brick" concept and employ an innovative kind of water cooling. Each rack has a peak performance of 310 TFlops. To run optimally on this architecture, codes will need some adjustments employing ideas such as site fusing, half-precision gauge fields, and the use of lattice sizes with prime factors of 3 and 5, but with optimal use of the SIMD units, scaling is almost perfect. A future successor, QPACE 3, will use Knights Landing units instead of the Knights Corner ones, and should achieve a peak performance of 1 PFlops per rack.

This was followed by Masakiyo Kitazawa speaking about measurements of thermodynamics using the gradient flow. The small-flowtime expansion for the gradient flow allows one to define a renormalized energy-momentum tensor in terms of the zero-flowtime limit of two flowed dimension-four operators. This has been applied to obtain results for the trace anomaly and the entropy density, but the difficulty lies in finding a plateau region in flow time where both lattice artifacts and finite-volume effects can be neglected, so as to allow a reliable extrapolation to zero flow time.
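Schematically, the construction referred to here (Suzuki's small-flowtime expansion, in notation I am supplying myself; the precise coefficients and signs should be checked against the original papers) expresses the renormalized energy-momentum tensor as

```latex
T_{\mu\nu}(x) \;=\; \lim_{t\to 0}\Big\{\, c_1(t)\, U_{\mu\nu}(t,x)
      \;+\; 4\, c_2(t)\, \delta_{\mu\nu}\,\big[\, E(t,x) - \langle E(t,x)\rangle \,\big] \Big\},
```

where E(t,x) = ¼ G_{μν}G_{μν} is the action density of the flowed field, U_{μν} = G_{μρ}G_{νρ} − ¼ δ_{μν} G_{ρσ}G_{ρσ} its traceless counterpart, and c₁(t), c₂(t) are perturbatively computed matching coefficients; the plateau problem described above arises when one tries to take the t→0 limit at fixed lattice spacing.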

After the coffee break, Chris Sachrajda reviewed the state of the lattice determination of long-distance contributions to flavour-changing processes. As no new physics has been discovered by the LHC so far, precision flavour physics is still the most promising avenue in the search for BSM effects. For some quantities in this area, particularly in the field of Kaon physics, long-distance effects are of crucial importance. An example is the neutral Kaon mass difference ΔmK=mKL-mKS; this involves four-volume integrals over the expectation value of matrix elements of electroweak operators between hadronic states, raising the problem of how to prepare such hadronic states in this context. The problem can be solved by taking the time integral over a largish interval, but placing the creation and annihilation operators well outside of the corresponding four-volume. The relevant correlation functions also contain terms growing exponentially with the time extent T,
which can be removed by adding suitably tuned terms to the electroweak Hamiltonian. UV divergences are eliminated by the GIM mechanism together with the V-A structure of the electroweak currents. With all these theoretical developments in place, a calculation done at unphysical pion and Kaon masses gives a result for ΔmK close to the physical value (which may of course still be a fortuitous coincidence), and exhibits an apparent violation of the OZI rule in that the contribution from the disconnected diagram to the final result is very significant. Another example given was the decay KL->π0l+l-, for which the long-distance effects are known in χPT, and the question addressed by an exploratory study is whether the lattice can do better. Yet another example is the QED corrections to the pion decay constant, which contain IR divergences requiring a proper Bloch-Nordsieck treatment.

After some well-deserved applause for the organizers, the conference closed with the invitation to next year's lattice conference in Kobe, Japan, from 14th to 18th July 2015. The IAC also announced that the 2016 lattice conference will be hosted in Southampton, U.K., in the last week of July 2016.

As I had to fly back to Germany in the evening (a lecture having to be given on Monday), the posting of this and the previous day's summaries was delayed a little by travel and subsequent jetlag, but I am sure my readers will be delighted to know that I got home safe and sound, and with all my luggage intact.

LATTICE 2014, Day Five

The first plenary talk of the morning was by Sasa Prelovsek, who gave the review talk on hadron spectroscopy. In this area, the really hot topic is the nature of the XYZ states, such as the Zc+(3900), which decays into J/ψ π+, and thus cannot be a simple quark-antiquark bound state. In order to elucidate this question, the variational method has to be used with a basis of operators containing both one- and two-meson operators as well as possible tetraquark operators, and this then requires the use of all-to-all propagators (with distillation now being the most commonly used approach) as well as a Lüscher-type method to treat the multiparticle states. These added difficulties mean that studies in this area are still a bit rough at the moment, with the physical-pion, large-volume and continuum limits generally not yet taken. For the Zc+, Sasa et al. find a candidate state only when including both two-meson and tetraquark operators in their basis. The more charmonium-like states, such as the X(3872), are better studied, and the X(3872) in particular appears likely to be mostly a DD* molecule. The greatest challenges in spectroscopy are the mixing between quarkonia and light hadron states, which is still mostly ignored, and the inclusion of more-than-two particle states, for which the theoretical tools aren't quite there yet.

A topical talk on new algorithms for finite-density QCD given by Denes Sexty followed. QCD at finite chemical potential μ suffers from the well-known sign problem; while there are a number of methods to evade it (in particular analytically continuing from imaginary μ and Taylor expansion methods), the newer methods attempt to address it directly. One of these is the complex Langevin method, which responds to the complex action by complexifying the fields and noise term in the Langevin equation (which for gauge links means continuing from SU(N) to SL(N,C) and requires some means of restraining the links from wandering off too far into the unphysical part of the group manifold, e.g. by gauge cooling steps interspersed with the dynamical updates). In the past, this method was hampered by a lack of theoretical understanding and the presence of possibly unphysical runaway trajectories; now, it has been established that for holomorphic actions, the complex Langevin time average does converge to the ensemble average. Unfortunately, the action for QCD with a chemical potential is not holomorphic, but some studies indicate that this case may nevertheless be okay. The other new method to directly address the sign problem is the Lefschetz thimble, which relies on shifting the integration contour for the path integral into the complex plane, and for which simulation algorithms exist in the case of various toy models. For the complex Langevin method, there are now a number of results which look promising.
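For readers unfamiliar with the method, a minimal sketch of complex Langevin dynamics for a toy model may help. The Gaussian action S(z) = σz²/2 with complex σ (my choice of illustration, not an example from the talk) has a complex weight e^{-S}, yet the Langevin average of z² converges to the analytically continued result 1/σ when the field is complexified while the noise stays real:

```python
import numpy as np

def complex_langevin(sigma, dt=1e-3, n_steps=2_000_000, n_therm=10_000, seed=1):
    """Complex Langevin for the Gaussian toy action S(z) = sigma * z**2 / 2.

    The field is complexified (z explores the complex plane) while the
    Gaussian noise stays real; the drift is the holomorphic derivative
    -dS/dz = -sigma * z."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(2 * dt) * rng.normal(size=n_steps)  # real noise
    z = 0.0 + 0.0j
    acc = 0.0 + 0.0j
    for k in range(n_steps):
        z += -sigma * z * dt + noise[k]
        if k >= n_therm:
            acc += z * z
    return acc / (n_steps - n_therm)

sigma = 1.0 + 1.0j
est = complex_langevin(sigma)
print(est, 1.0 / sigma)  # Langevin estimate vs analytic continuation <z^2> = 1/sigma
```

Since this action is holomorphic, convergence to the right answer is guaranteed by the arguments mentioned above; for the non-holomorphic QCD action at finite μ, no such guarantee exists, which is exactly the caveat in the talk.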

This was followed by another topical talk, Alberto Ramos speaking about the applications of the Wilson flow to scale setting and renormalization. It has long been known that the Wilson flow yields renormalized operators, and besides its use in setting the lattice scale, it is now widely used to define a renormalized coupling, where the renormalization scale is set by μ2=1/(8t). To avoid the need for a window where both cut-off and finite-volume effects are small, one can tie the renormalization scale to the volume as μ=1/(cL); however, this means that the boundary conditions become relevant. The errors on the Wilson flow coupling are orders of magnitude smaller than those on the Schrödinger functional coupling, but the SF coupling becomes less noisy at small coupling and thus provides information complementary to that from the WF coupling. Cut-off effects are important for Wilson flow observables, and tree-level improvement has a big effect there. There is a small-flowtime expansion analogous to the OPE, and a fermionic version of the flow can be used to determine the chiral condensate. All in all, this is a very active field of current research.
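As an illustration of how such flow observables are used in practice, here is a hedged sketch of flow-based scale setting and the flow coupling: the t0 scale is conventionally defined by t²⟨E(t)⟩ = 0.3, and the leading-order matching g² = 128π²t²⟨E⟩/(3(N²−1)) at μ² = 1/(8t) is Lüscher's normalization as I recall it. The flow data below are a synthetic stand-in, not a real measurement:

```python
import numpy as np

def t0_from_flow(ts, t2E, target=0.3):
    """Scale setting: t0 is the flow time where t^2 <E(t)> reaches 0.3
    (linear interpolation between the bracketing data points)."""
    i = np.searchsorted(t2E, target)  # assumes t^2 <E> rises monotonically in t
    return ts[i-1] + (target - t2E[i-1]) * (ts[i] - ts[i-1]) / (t2E[i] - t2E[i-1])

def gf_coupling(t2E, N=3):
    """Gradient-flow coupling at mu^2 = 1/(8t), leading-order matching:
    g^2 = 128 pi^2 t^2 <E> / (3 (N^2 - 1))."""
    return 128 * np.pi**2 * t2E / (3 * (N**2 - 1))

# synthetic stand-in for measured flow data: t^2 <E(t)> rising linearly in t
ts = np.linspace(0.5, 5.0, 100)
t2E = 0.1 * ts
t0 = t0_from_flow(ts, t2E)          # -> 3.0 for this toy data
mu = 1.0 / np.sqrt(8 * t0)          # renormalization scale tied to the flow time
print(t0, mu, gf_coupling(0.3))
```

On real data one would of course also need the continuum limit and, for the finite-volume scheme, the μ = 1/(cL) identification discussed above.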

After the coffee break, the Ken Wilson Award was announced. The award goes to Gergely Endrődi for significant contributions to our understanding of QCD matter in strong magnetic fields and to QCD thermodynamics. Gergely gave his prize talk on the topic of QCD in magnetic fields, starting from Hofstadter's butterfly, which is a self-similar fractal describing the energy levels accessible to an electron in a crystal (which tries to enforce Bloch waves) in a magnetic field (which tries to enforce Landau levels). The Dirac operator for a free lattice fermion in a magnetic field has a similar structure, which however disappears in the continuum limit, since the magnetic flux through a plaquette scales as a2. The quark condensate is related to the Dirac eigenvalues, and hence contains the same self-similar structure, which is however washed out by the quark mass. When QCD interactions are turned on, these similarly wash out the fractal structure. What is left over is a growth of the quark condensate with the magnetic field at zero temperature ("magnetic catalysis"). At finite temperature, a similar effect was expected from models, but Gergely et al. have shown that in fact the opposite effect happens ("inverse magnetic catalysis").

This was followed by Tetsuya Onogi speaking about a hidden exact symmetry of graphene. Graphene, which is the most conductive material known under terrestrial conditions, has a band structure with a Dirac point resembling the dispersion relation for a massless relativistic fermion, with no gap. The symmetry preserving the vanishing of the gap against perturbations can be derived by treating the actual graphene lattice as a staggered version of a coarser hexagonal lattice, where six sites correspond to six internal degrees of freedom (three flavours, two spins), which then reveals a hidden flavour-chiral symmetry.

The afternoon saw the last set of parallel sessions. There were two more talks from members of the Mainz group (PhD student Hanno Horch and former postdoc Gregorio Herdoiza, now a Ramón y Cajal Fellow at the Universidad Autónoma de Madrid) on work related to (g-2) and the Adler function.

Friday, June 27, 2014

LATTICE 2014, Days Three and Four

Wednesday was the "short" day as has been customary for many years now. I gave my own talk in the hadron structure session and got a lot less criticism than I expected; apparently it has been widely accepted by now that excited-state effects can be large in nucleon matrix elements even if naively it looks like there aren't any.

In the afternoon, there were no organized excursions, so I spent the afternoon in the Metropolitan Museum and took a walk around Central Park and down Fifth Avenue after it closed.

Today was started by the first non-lattice talk, given by Anthony Mezzacappa of the CHIMERA collaboration, who spoke about simulating core collapse supernovae to ascertain the mechanism behind these massive stellar explosions. Core collapse supernovae happen when a very massive star has reached the final stage of its life, in which it has an onion-like structure, with a hydrogen envelope around a helium envelope around further layers of increasingly heavy elements around a central iron core which is about the size of the Earth, but so dense as to be about the mass of the Sun. When this central core becomes so compressed that it can no longer keep from collapsing until it reaches nuclear densities (turning into a neutron star or a stellar black hole as a result), the infall of matter is supersonic, but the bounce back is subsonic (because the speed of sound is higher in the denser matter inside), which causes a shockwave to spread that eventually blows the star apart. However, the real story is more complicated than that, because a lot of energy is radiated away in the form of neutrinos, which may weaken the shockwave so much that no explosion occurs. The most important question is therefore how the processes occurring in the star cause the shockwave to revive. The simulations to investigate this are becoming quite large, requiring on the order of 100 Megacore-hours per second of supernova simulated. To fully include all variables would likely require sustained Exaflops, so the problems are usually simplified. Spherical symmetry is a bad assumption apparently, because it leads to no explosion. Azimuthal symmetry gives an explosion, and the generic three-dimensional case is not quite resolved yet.

This was followed by a review of BSM physics from the lattice by Yasumichi Aoki. The main idea investigated in this area is walking technicolor, i.e. the search for a technicolor-type gauge theory that has a very slowly running coupling and large mass anomalous dimension in order to permit both the generation of a realistic mass spectrum for the Standard Model fermions and the suppression of flavour-changing neutral currents to a level compatible with experiment. Another problem is to have a light Higgs and no other light unobserved particles. A number of theories under investigation show spectra compatible with this, with the scalar much lighter than the pseudoscalar (as opposed to QCD, where the pion is much lighter than the σ resonance).

After the coffee break, we had the experimental talk, by Brendan Casey on the FNAL E989 experiment and the anomalous magnetic moment of the muon. To understand the hadronic contributions, much more work is needed, both on the theory side (where the work of my collaborators Anthony Francis and Vera Gülpers received well-deserved praise) and in experiment (where the R-ratio needs to be determined to sub-percent level, and where KLOE will investigate the leading contributions to hadronic light-by-light scattering). The new Fermilab (g-2) experiment is designed specifically to address many of the remaining sources of experimental error on the value of (g-2) itself; the effort to get there has been quite impressive, with the pictures showing very nicely what kinds of huge projects even such relatively "small" experiments are.

The next talk was Antonin Portelli speaking about electromagnetic and isospin-breaking effects in lattice QCD. While isospin is a reasonably good symmetry of the strong interactions, it is broken at the sub-percent level, and the proton-neutron mass difference is an essential ingredient of the stability of matter. Understanding isospin-breaking effects (both from electromagnetism and from the difference between the up and down quark masses) is therefore a crucial endeavour for lattice theorists in the longer term. A number of collaborations are now simulating QCD+QED dynamically. Since QED does not have a mass gap, it tends to show long autocorrelations in Monte Carlo time; a new HMC Hamiltonian introduced by the BMW collaboration appears to get rid of this effect. The electromagnetic mass differences within the baryon octet are nicely reproduced by now, and the origin of the nucleon mass difference seems to be increasingly well understood. For some reason, the Ξcc mass difference is also of great interest to phenomenologists, and has also been computed on the lattice.

The last plenary of the morning was a review of quark masses by Francesco Sanfilippo. He stressed the importance of ratios of quark masses (where in a mass-independent scheme, the ratio of renormalized masses equals that of the bare ones, avoiding the need for accurate knowledge of renormalization constants), and reviewed a number of methods that have been used to determine heavy quark masses, including the HPQCD method of using moments of current-current correlators, the use of NRQCD with perturbative subtractions and of non-perturbative HQET, as well as the ETMC ratios method. In the light sector, simulations are now done close to the physical point, and the isospin-breaking u-d mass difference is being investigated in a realistic manner.

In the afternoon, there were parallel sessions again. Besides some NRQCD talks, including a very nice talk on bottomonium spectroscopy using free-form smearing, I attended a number of talks on the gradient flow.

In the evening, there was the dinner cruise for those who had bought tickets. I hadn't and, having waived any claim to a left-over free ticket so interested others could attend instead, arranged otherwise for dinner.

Wednesday, June 25, 2014

LATTICE 2014, Day Two

Hello again from New York. The first plenary of the morning was given by Nicolas Garron speaking about K/π physics. After a summary of the most recent updates on the decay constants of the pion and Kaon and their ratio fK/fπ, as well as the zero momentum transfer form factor f+(0) (which are by now so precise that the question of how much precision is enough was raised from the audience after the talk), he proceeded to discuss the general theory of CP violation in neutral Kaon mixing and the ΔI=1/2 rule in K->ππ decays, and the ways in which lattice calculations are needed to understand these topics. A number of recent updates on the Kaon bag parameter BK were summarized, and the renormalization and mixing of the BSM operators entering neutral Kaon mixing (for which Mauro Papinutto showed some impressive results in one of the parallel sessions) were discussed. Finally, RBC/UKQCD now have results on the ΔI=1/2 and ΔI=3/2 amplitudes in K->ππ decays at the physical pion mass, which strongly support the ΔI=1/2 rule at a level compatible with phenomenology.

This was followed by a talk on a somewhat related topic, namely Stephan Dürr speaking about the question of whether the validity of χPT extends even to the physical pion mass. Contrary to the often-quoted theorem that the answer to any title with a question mark in it is "no", the answer was "yes" in this case. While the chiral expansion breaks down completely at pion masses of around 500 MeV (where NNLO corrections grow to be larger than the NLO ones), two different analyses (one using staggered, and one using Wilson fermions) that Stephan showed indicate that the NLO low-energy constants can be extracted in a reasonably consistent manner from fits in the range Mπ=135-400 MeV. However, the low-energy constant l4 showed a significant sensitivity to the range of pion masses used in the fit.

The last talk before the coffee break was on Multigrid methods for lattice QCD and was given by Andreas Frommer. Multigrid methods have a long history in applied mathematics, where they are used more commonly in the context of finite-element methods (rather than the finite-difference approach used in lattice field theory). The basic ingredients from the applied mathematics point of view are a smoothing operation together with restriction and prolongation operations that allow one to reduce the size of the problem to a level where it can be solved directly, and then to retrieve the solution of the original problem from this. Interestingly, this was somewhat reinvented in a way tuned to lattice QCD from the physics side, where Lüscher's inexactly deflated SAP-preconditioned GCR that is part of the DD-HMC and openQCD packages forms a two-level multigrid scheme that leads to a great improvement in runtime behaviour as the quark mass is decreased. The Wuppertal applied mathematics group has extended this to a generic multilevel scheme for QCD (where it is found that three levels are even better than two at small quark masses, but four seem not to help appreciably more). From the mathematical side, most of the existing multigrid theory does not apply to QCD, however, so further mathematical research seems required to fully understand why and when these approaches work for QCD.
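The smoothing/restriction/prolongation structure described above can be made concrete with the textbook two-grid cycle for the 1D Poisson equation, which is only a stand-in for the Dirac operator (all parameter choices here are illustrative, not from the talk):

```python
import numpy as np

def two_grid_solve(n=63, n_cycles=20, nu=3, omega=2/3, seed=0):
    """Two-grid cycle for the 1D Poisson equation -u'' = f with Dirichlet
    boundaries: weighted-Jacobi smoothing, full-weighting restriction,
    a direct coarse solve, and prolongation of the correction.
    n must be odd so that the coarse grid nests in the fine one."""
    h = 1.0 / (n + 1)
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rng = np.random.default_rng(seed)
    f = rng.normal(size=n)
    u = np.zeros(n)
    nc = (n - 1) // 2
    # full-weighting restriction; prolongation is (2x) its transpose
    R = np.zeros((nc, n))
    for i in range(nc):
        R[i, 2*i:2*i+3] = [0.25, 0.5, 0.25]
    P = 2 * R.T
    Ac = R @ A @ P                        # Galerkin coarse-grid operator
    D = np.diag(A)
    for _ in range(n_cycles):
        for _ in range(nu):               # pre-smoothing: weighted Jacobi
            u += omega * (f - A @ u) / D
        rc = R @ (f - A @ u)              # restrict the residual
        u += P @ np.linalg.solve(Ac, rc)  # coarse-grid correction
        for _ in range(nu):               # post-smoothing
            u += omega * (f - A @ u) / D
    return np.linalg.norm(f - A @ u) / np.linalg.norm(f)

rel_res = two_grid_solve()
print(rel_res)  # relative residual after 20 two-grid cycles
```

The smoother kills the high-frequency error modes, while the coarse solve handles the smooth ones; for the Dirac operator the hard part, as the talk made clear, is that "smooth" error modes must be defined adaptively rather than geometrically.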

After the coffee break, Raul Briceno spoke about few-body physics. In this area, significant theoretical progress seems to have been made recently and still to be under way, extending Lüscher's finite-volume formalism for scattering phase shifts in various directions.

This was followed by a talk on the closely related and somewhat overlapping topic of hadronic interactions by Takeshi Yamazaki, who presented recent results for various scattering lengths and phase shifts, as well as reviewing the alternative HALQCD method, which relies on reconstructing an interaction potential from multi-particle correlators.

In the afternoon there were parallel sessions again. I got to chair the session on renormalization from the Schrödinger functional approach, where there has been significant progress on the chirally rotated SF and on studying the mixing of four-quark operators. Another very interesting session later in the afternoon was concerned with the various methods to get at quark-disconnected contributions to hadron structure observables, and some of the results obtained using them.

In the evening, the poster session took place.

Tuesday, June 24, 2014

LATTICE 2014, Day One

Hello, faithful readers, and a cordial welcome to the annual lattice conference blog, this time from New York, where I arrived two days early in order to beat the jet lag. The jet-lag adjustment days were well-spent in the Metropolitan Museum.

The conference started with a reception (a very exclusive event, admission to which was controlled by rather fierce security guards, who at first wouldn't even let us into the building) on Sunday night.

Since the plenary talks will be livestreamed (search for "Lattice2014"), you don't have to rely on my summaries of the talks this time, and in fact I would like to encourage you to cross-check them and post about anything you feel I missed or misrepresented in the comment section (please note that comments are moderated, so it may take a while for yours to turn up).

After a brief opening address by the Vice-President of Columbia University, the first plenary talk of the conference was given by Martha Constantinou, who gave a review talk on hadron structure. The most active subfield in this area is nucleon structure, to which accordingly the greater part of her talk was devoted. A crucial quantity there is the axial charge gA of the nucleon, which a number of groups have been investigating using a variety of methods. (Since I have been involved in the Mainz effort on this front, I am certainly somewhat biased, so take what follows with a grain of salt.) Martha very nicely explained the existing results and discussed the sources of error in detail, but I'm afraid I have to slightly disagree with some of her assessments, in particular regarding excited-state effects (which I believe to be more important) and finite-volume effects (where I think that MπL>4 is required to be on the safe side). An interesting development is the Feynman-Hellmann approach, where a term coupling to the current of interest (the axial current in this case) is added to the action, and derivatives of the nucleon mass are taken with respect to the coefficient of that term in order to get at the matrix element of the current; this appears to allow for high statistical precision. Another area of high activity is the nucleon electromagnetic form factors (for which I also believe excited-state effects to be far more important than thought so far). Here, the disconnected contributions relevant for the proton (rather than isovector) form factors are now being computed by some groups, which requires very high statistics (O(100,000) was mentioned) and/or some clever new ideas (like hierarchical probing). For the quark momentum fraction <x>, the importance of excited-state effects is uncontroversial, but the dominant error remains the renormalization.
There are also increasingly results for the nucleon spin decomposition, although there are some open problems here, in particular with regards to the gluon angular momentum contributions and the resulting mixing. Beyond the nucleon, first results for hyperon form factors are now available. Further quantities discussed were the pion <x> and the electromagnetic form factors of the ρ meson (there are three of them). Overall, simulations at or near the physical pion mass are now removing the uncertainties from chiral extrapolations (and discretisation effects appear to be small in many nucleonic quantities), so that the confrontation with experiment becomes more acute, requiring full control of all other sources of error.
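The Feynman-Hellmann idea mentioned above is easy to demonstrate in miniature: for H(λ) = H₀ + λO, the derivative of the ground-state energy at λ=0 equals the ground-state matrix element ⟨0|O|0⟩. A toy sketch with random Hermitian matrices standing in for the Hamiltonian and the current (my illustration, not the lattice implementation):

```python
import numpy as np

def ground_state(H):
    """Lowest eigenvalue and eigenvector of a real symmetric matrix."""
    w, v = np.linalg.eigh(H)
    return w[0], v[:, 0]

rng = np.random.default_rng(7)
n = 8
H0 = rng.normal(size=(n, n)); H0 = (H0 + H0.T) / 2   # stand-in "Hamiltonian"
O  = rng.normal(size=(n, n)); O  = (O + O.T) / 2     # stand-in "current"

# Feynman-Hellmann: dE/d(lambda) at lambda = 0 equals <0|O|0>,
# estimated here by a central finite difference in the coupling
eps = 1e-4
Ep, _ = ground_state(H0 + eps * O)
Em, _ = ground_state(H0 - eps * O)
fh_estimate = (Ep - Em) / (2 * eps)

E0, psi0 = ground_state(H0)
direct = psi0 @ O @ psi0
print(fh_estimate, direct)
```

On the lattice the "energy" is extracted from the exponential decay of a two-point function computed with the modified action, so no three-point function is ever needed, which is where the statistical advantage comes from.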

This was followed by another review talk, on heavy flavours, given by Chris Bouchard. The decay constant of the Ds meson has been the subject of much interest in the past, when a theory-experiment tension seemed to indicate a potential for new physics; that tension has mostly passed, but as a consequence there are now many recent results for fDs, which tend to meet the accuracy target of 1% required to have an impact at the level of experimental precision expected for 2020. For the decay constants of the B and Bs mesons, there are now results from many different formulations (NRQCD, HQET, Fermilab, heavy HISQ, ratios with heavy twisted mass quarks), which all agree quite well. The extraction of Vcs from semileptonic decays suggests a small tension with the one using fDs, much as there is still some tension between the exclusive and inclusive determinations of Vub and Vcb. In testing for possible new physics, both rare decays (i.e. those that can occur only at the loop level in the Standard Model) and the mixing of neutral heavy-flavour mesons with their antiparticles are of particular relevance. Apparently, a recent calculation of D0 mixing by ETMC is enough to exclude new physics contributions up to scales as high as thousands of TeV.

After the coffee break, Michael Müller-Preussker gave a talk in memory of Pierre van Baal (1955-2013), reviewing recent results on topology on the lattice. Since the topological properties of field configurations are defined in terms of winding numbers of maps between continuous spaces, the definition of topological quantities on the lattice (which is after all discrete) can be ambiguous. Techniques that are used include the direct approach (using a discretisation of the continuum topological charge density and relying on some smoothing operation, such as link smearing, cooling or more recently the gradient flow, to bring the fields close enough to the continuum to make the topology unambiguous), the approach via the Atiyah-Singer index theorem (using the index of a Ginsparg-Wilson Dirac operator to define the topological charge), and the approach via spectral projectors (about which I unfortunately know more or less nothing).
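The ambiguity mentioned above is nicely illustrated by the one case where it is absent: for two-dimensional compact U(1), the "geometric" charge built from plaquette angles wrapped to (-π, π] is an exact integer on any periodic configuration, with no smoothing required. A toy sketch (2D U(1) only; in four-dimensional QCD the analogous field-theoretic definition does need the smoothing described above):

```python
import numpy as np

def geometric_charge(theta):
    """Geometric topological charge of a 2D compact U(1) gauge field.
    theta[mu, x, y] are the link angles; the charge is the sum over all
    plaquette angles wrapped to (-pi, pi], divided by 2*pi. On a periodic
    lattice the unwrapped plaquette angles sum to zero (every link appears
    twice with opposite sign), so the result is an exact integer."""
    t0, t1 = theta
    plaq = t0 + np.roll(t1, -1, axis=0) - np.roll(t0, -1, axis=1) - t1
    wrapped = (plaq + np.pi) % (2 * np.pi) - np.pi
    return wrapped.sum() / (2 * np.pi)

rng = np.random.default_rng(3)
theta = rng.uniform(-np.pi, np.pi, size=(2, 16, 16))  # random ("hot") config
Q = geometric_charge(theta)
print(Q)  # integer up to rounding, even on a completely rough configuration
```

The charge of a hot configuration like this carries no physics, of course; the point is only that integrality here comes for free, whereas on a 4D lattice the flux-per-plaquette argument quoted above (flux scaling as a2) is what makes smoothing necessary.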

The following talk was the review talk on finite-temperature (at vanishing chemical potential) results, which was given by Alexei Bazavov. In keeping with the location of the conference, he showed the Columbia plot before turning to results at the physical point, where the transition is a crossover and the transition temperature hence not so clearly defined. However, when looking for the peak of the chiral susceptibility, the results from different staggered formulations and more recently from domain-wall fermions at the physical pion mass agree quite well. An interesting observation appeared to be that in order for lattice results to match up with hadron resonance gas model predictions, the hadron resonance gas apparently also has to include the "missing states" predicted by quark models, but not observed experimentally. Other results presented included a new method to determine the equation of state using shifted boundary conditions, and numerous new results for the heavy-quark potential and quarkonium spectral functions.

In the afternoon there were parallel sessions. I would like to highlight the (first of two) sessions dedicated to lattice results on the anomalous magnetic moment of the muon. There are now a number of different methods of getting at the leading hadronic contribution: by direct determination of the hadronic vacuum polarization, via a mixed-representation approach (where the subtracted vacuum polarization is expressed as an integral over the vector correlator), and from moments of current-current correlators. While in principle all of these process the same information (which is after all encoded in the vector-vector correlation functions), they seem to have different strengths and weaknesses. A first lattice estimate of the systematic error incurred by neglecting disconnected diagrams (whose contribution cannot yet be resolved with the currently available statistics) was presented by Mainz PhD student Vera Gülpers.
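The mixed-representation approach mentioned above can be sketched quite compactly: the subtracted vacuum polarization is obtained from the Euclidean vector correlator G(t) via the Bernecker-Meyer kernel (normalization conventions vary between papers; the correlator below is a single-exponential toy stand-in for lattice data, not a real measurement):

```python
import numpy as np

def pihat(Q2, G, ts, dt):
    """Subtracted HVP in the time-momentum representation:
    Pi(Q^2) - Pi(0) = int dt G(t) [ t^2 - (4/Q^2) sin^2(Q t / 2) ],
    approximated here by a Riemann sum over the tabulated correlator."""
    Q = np.sqrt(Q2)
    kernel = ts**2 - 4.0 * np.sin(Q * ts / 2.0)**2 / Q2
    return np.sum(G * kernel) * dt

# single-exponential toy correlator standing in for lattice data
m, dt = 0.5, 0.01
ts = np.arange(dt, 80.0, dt)
G = np.exp(-m * ts)

vals = [pihat(q2, G, ts, dt) for q2 in (0.01, 0.05, 0.1)]
print(vals)  # positive, rising with Q^2, and vanishing as Q^2 -> 0
```

The kernel is non-negative and vanishes as Q² → 0, so the subtraction Π(Q²) − Π(0) is built in from the start; the practical difficulty on real data is the long-time tail of G(t), where the signal-to-noise ratio degrades.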