Hello again from Lattice 2016 at Southampton. Today's first plenary talk was the review of nuclear physics from the lattice given by Martin Savage. Doing nuclear physics from first principles in QCD is obviously very hard, but it is also necessary in order to truly understand nuclei in theoretical terms. Examples of needed theory predictions include the equation of state of dense nuclear matter, which is important for understanding neutron stars, and the nuclear matrix elements required to interpret future searches for neutrinoless double β decays in terms of fundamental quantities. The problems include the huge number of required quark-line contractions and the exponentially decaying signal-to-noise ratio, but there are theoretical advances that increasingly allow these to be brought under control. The main competing procedures are more or less direct applications of the Lüscher method to multi-baryon systems, and the HALQCD method of computing a nuclear potential from Bethe-Salpeter amplitudes and solving the Schrödinger equation for that potential. There has been a lot of progress in this field, and there are now first results for nuclear reaction rates.
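To give an idea of the second step of the HALQCD approach, here is a minimal sketch (my own, not from the talk) that solves the radial Schrödinger equation for a central two-nucleon potential by finite differences; the potential used below is a purely hypothetical placeholder standing in for one actually extracted from lattice Bethe-Salpeter amplitudes.

```python
# Minimal sketch of the second step of the HALQCD method: given a central
# nucleon-nucleon potential V(r) (here a hypothetical placeholder), solve the
# radial Schroedinger equation for the s-wave spectrum by finite differences.
import numpy as np

hbar_c = 197.327   # MeV fm
m_N = 938.9        # nucleon mass in MeV
mu = m_N / 2.0     # reduced mass of the two-nucleon system

def V(r):
    # placeholder potential (MeV) standing in for a lattice-extracted one:
    # short-range repulsive core plus intermediate-range attraction
    return 2000.0 * np.exp(-(r / 0.3)**2) - 100.0 * np.exp(-(r / 1.2)**2)

N, r_max = 2000, 20.0                    # radial grid (fm)
r = np.linspace(r_max / N, r_max, N)
h = r[1] - r[0]

# Hamiltonian for u(r) = r * psi(r): -(hbar^2 / 2 mu) u'' + V(r) u = E u
kin = hbar_c**2 / (2.0 * mu * h**2)
H = (np.diag(2.0 * kin + V(r))
     - kin * np.eye(N, k=1)
     - kin * np.eye(N, k=-1))

E = np.linalg.eigvalsh(H)
print("lowest s-wave energy (MeV):", E[0])   # a negative value would signal a bound state
```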
Next, Mike Endres spoke about new simulation strategies for lattice QCD. One of the major problems in going to very fine lattice spacings is the well-known phenomenon of critical slowing-down, i.e. the divergence of the autocorrelation times with some negative power of the lattice spacing, which is particularly severe for the topological charge (a quantity that cannot change at all in the continuum limit), leading to the phenomenon of "topology freezing" in simulations at fine lattice spacings. To overcome this problem, changes in the boundary conditions have been proposed: open boundary conditions that allow topological charge to move into and out of the system, and non-orientable boundary conditions that destroy the notion of an integer topological charge. An alternative route lies in algorithmic modifications such as metadynamics, where a history-dependent bias potential is introduced to disfavour revisiting previously sampled configurations, so as to forcibly sample across the potential wells of different topological sectors over time, or multiscale thermalization, where a Markov chain is first run at a coarse lattice spacing to obtain well-decorrelated configurations, each of which is then subjected to a refining operation to obtain a (non-thermalized) gauge configuration at half the lattice spacing, which can hopefully be thermalized by a short sequence of Monte Carlo update operations.
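To illustrate the metadynamics idea in the simplest possible setting, here is a toy sketch (my own, not from the talk): a Metropolis simulation in a one-dimensional double-well "action", where the two wells stand in for different topological sectors and a history-dependent bias built from Gaussians deposited at previously visited values of the collective variable gradually fills the well the chain is stuck in, enabling barrier crossings that would otherwise be exponentially rare.

```python
# Toy illustration of metadynamics: Metropolis sampling in a double-well action,
# with a bias potential made of Gaussians deposited at visited values of x.
import numpy as np

rng = np.random.default_rng(0)

def action(x):
    return 5.0 * (x**2 - 1.0)**2           # two wells separated by a barrier of height 5

centers = []                                # locations of deposited Gaussians
w, sigma = 0.1, 0.1                         # height and width of each Gaussian

def bias(x):
    if not centers:
        return 0.0
    c = np.asarray(centers)
    return np.sum(w * np.exp(-(x - c)**2 / (2.0 * sigma**2)))

x = -1.0
visits = []
for step in range(20000):
    x_new = x + rng.normal(0.0, 0.2)
    dS = (action(x_new) + bias(x_new)) - (action(x) + bias(x))
    if dS < 0.0 or rng.random() < np.exp(-dS):
        x = x_new
    if step % 10 == 0:
        centers.append(x)                   # deposit a new Gaussian at the current point
    visits.append(x)

visits = np.asarray(visits)
print("fraction of time spent at x > 0:", np.mean(visits > 0.0))   # ~0 without the bias
```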
As another example of new algorithmic ideas, Shinji Takeda presented tensor networks, which are mathematical objects that assign a tensor to each site of a lattice, with lattice links denoting the contraction of tensor indices. An example is given by the rewriting of the partition function of the Ising model that is at the heart of the high-temperature expansion, where the sum over the spin variables is exchanged for a sum over link variables taking values 0 or 1. One of the applications of tensor networks in field theory is that they allow for an implementation of the renormalization group: a tensor decomposition along the lines of a singular value decomposition is performed and truncated, and the resulting approximate factors are contracted into new tensors living on a coarser grid. Iterating this procedure until only one lattice site remains allows the evaluation of partition functions without running into any sign problem and at only O(log V) effort.
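As a concrete (if tiny) illustration of the rewriting step, the following sketch (my own, not from the talk) factorizes the Ising bond weight, builds the rank-4 site tensor, and checks on a 2x2 periodic lattice that contracting the tensors over all links reproduces the brute-force sum over the spins.

```python
# Tensor-network rewriting of the 2D Ising partition function, checked on a
# tiny 2x2 periodic lattice against the brute-force sum over spin configurations.
import itertools
import numpy as np

beta = 0.4

# bond weight W_{s s'} = exp(beta * s * s') and its symmetric square root M
W = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])
vals, vecs = np.linalg.eigh(W)
M = vecs @ np.diag(np.sqrt(vals)) @ vecs.T          # M @ M == W

# site tensor: sum over the spin of one M factor per attached link
T = np.einsum('si,sj,sk,sl->ijkl', M, M, M, M)

# contract four copies of T over the eight links of a 2x2 periodic lattice
Z_tensor = np.einsum('beaf,agbh,dfce,chdg->', T, T, T, T)

# brute-force check: sum over all 2^4 spin configurations
Z_spins = 0.0
for s in itertools.product([1, -1], repeat=4):
    s = np.array(s).reshape(2, 2)
    E = sum(s[x, y] * (s[(x + 1) % 2, y] + s[x, (y + 1) % 2])
            for x in range(2) for y in range(2))
    Z_spins += np.exp(beta * E)

print(Z_tensor, Z_spins)   # the two numbers should agree
```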
After the coffee break, Sara Collins gave the review talk on hadron structure. This is also a field in which a lot of progress has been made recently, with most sources of systematic error either under control (e.g. by performing simulations at or near the physical pion mass) or at least well understood (e.g. excited-state and finite-volume effects). The isovector axial charge gA of the nucleon, which for a long time was a bit of an embarrassment to lattice practitioners because it stubbornly refused to approach its experimental value, is now understood to be particularly severely affected by excited-state effects, and once these are sufficiently suppressed or properly accounted for, the situation looks quite promising. This lends much greater credibility to lattice predictions for the scalar and tensor nucleon charges, for which little or no experimental data exist. The electromagnetic form factors are also in much better shape than one or two years ago, with the electric Sachs form factor coming out close to experiment (though still with insufficient precision to resolve the conflict between the experimental electron-proton scattering and muonic hydrogen results), while the magnetic Sachs form factor now shows a trend to undershoot experiment. Going beyond isovector quantities (in which disconnected diagrams cancel), the progress in simulation techniques for disconnected diagrams has enabled the first computation of the purely disconnected strangeness form factors. The sigma term σπN comes out smaller on the lattice than it does in experiment, which still needs investigation, and the average momentum fraction <x> has yet to receive the same kind of concerted effort that the nucleon charges have.
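As an aside, the kind of multi-state fit used to account for excited-state contamination can be illustrated with a simple sketch on synthetic data (my own, purely illustrative numbers, not from the talk): the correlator is modelled as a sum of two exponentials and the ground-state energy is extracted from the fit.

```python
# Illustrative two-state fit C(t) = A0*exp(-E0*t) + A1*exp(-E1*t) to synthetic
# "correlator" data, as a stand-in for how excited-state effects are accounted for.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t = np.arange(2, 16)

def two_state(t, A0, E0, A1, E1):
    return A0 * np.exp(-E0 * t) + A1 * np.exp(-E1 * t)

# fake lattice data: ground state plus one excited state, with 1% noise
truth = dict(A0=1.0, E0=0.5, A1=0.7, E1=1.1)
data = two_state(t, **truth) * (1.0 + 0.01 * rng.normal(size=t.size))

popt, pcov = curve_fit(two_state, t, data,
                       p0=[1.0, 0.4, 0.5, 1.0], sigma=0.01 * data)
print("fitted E0 =", popt[1], "+/-", np.sqrt(pcov[1, 1]))
```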
In keeping with the pattern of having large review talks immediately followed by a related topical talk, Huey-Wen Lin was next with a talk on the Bjorken-x dependence of the parton distribution functions (PDFs). While the PDFs are defined on the lightcone, which is not readily accessible on the lattice, a large-momentum effective theory (LaMET) formulation allows one to obtain them as the infinite-momentum limit of quasi-distributions evaluated at finite nucleon momentum. First studies show interesting results, but the renormalization of the quasi-distributions still remains to be performed.
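Schematically (in my own notation, which may differ from that used in the talk), the object computed on the lattice is a purely spatial correlator in a boosted nucleon,
\[
\tilde q(x, P_z) = \int \frac{\mathrm{d}z}{4\pi}\, e^{\,i x P_z z}\, \langle P |\, \bar\psi(z)\, \gamma^z\, W(z,0)\, \psi(0)\, | P \rangle ,
\]
where W(z,0) is a straight Wilson line along the z direction; for large nucleon momentum P_z this quasi-distribution approaches the lightcone PDF up to a perturbative matching and power corrections suppressed by 1/P_z².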
After lunch, there were parallel sessions, of which I attended the ones into which most of the (g-2) talks had been collected; these showed a good rate of progress, in particular in the treatment of the disconnected contributions.
In the evening, the poster session took place.