Saturday, July 30, 2016

Lattice 2016, Day Six

The final day of the conference started with a review talk by Claudio Pica on lattice simulations trying to chart the fundamental physics beyond the Standard Model. The problem with the SM is perhaps to some extent how well it works, given that we know it must be incomplete. One of the main contenders for replacing it is the notion of strong dynamics at a higher energy scale giving rise to the Higgs boson as a composite particle. The most basic "technicolor" theories of this kind fail because they cannot account for the relatively large masses of the second- and third-generation quarks. To avoid that problem, the coupling of the technicolor gauge theory must not be running, but "walking" slowly from high to low energy scales, which has given rise to a veritable industry of lattice simulations investigating the β function of various gauge theories coupled to various numbers of fermions in various representations. The Higgs can then be either a dilaton associated with the breaking of conformal symmetry, which would naturally couple like a Standard Model Higgs, or a pseudo-Goldstone boson associated with the breaking of some global flavour symmetry. So far, nothing very conclusive has resulted, but of course the input from experiment at the moment consists only of limits that rule some models out, without allowing for any discrimination between those models that aren't ruled out.

A specific example of BSM physics, viz. strongly interacting dark matter, was presented in a talk by Enrico Rinaldi. If there is a new strongly-coupled interaction, as suggested by the composite Higgs models, then besides the Higgs there will also be other bound states, some of which may be stable and provide a dark matter candidate. While the "dark" nature of dark matter requires such a bound state to be neutral, the constituents might interact with the SM sector, allowing for the production and detection of dark matter. Many different models of composite dark matter have been considered, and the main limits currently come from the non-detection of dark matter in searches, which put limits on the "hadron-structure" observables of the dark matter candidates, such as their σ-terms and charge radii.

David Kaplan gave a talk on a new perspective on chiral gauge theories, the lattice formulation of which has always been a persistent problem, largely due to the Nielsen-Ninomiya theorem. However, the fermion determinant of chiral gauge theories is already somewhat ill-defined even in the continuum. A way to make it well-defined has been proposed by Alvarez-Gaumé et al. through the addition of an ungauged right-handed fermion. On the lattice, the U(1)A anomaly is found to emerge as the remnant of the explicit breaking of chiral symmetry by e.g. the Wilson term in the limit of vanishing lattice spacing. Attempts at realizing ungauged mirror fermions using domain wall fermions with a gauge field constrained to near one domain wall have failed, and a realization using the gradient flow in the fifth dimension turns the mirror fermions into "fluff". A new realization along the lines of the overlap operator gives a lattice operator very similar to that of Alvarez-Gaumé by coupling the mirror fermion to a fixed point of the gradient flow, which is a pure gauge.

After the coffee break, Tony Hey gave a very entertaining, if somewhat meandering, talk about "Richard Feynman, Data-Intensive Science and the Future of Computing" going all the way from Feynman's experiences at Los Alamos to AI singularity scenarios and the security aspects of self-driving cars.

The final plenary talk was the review talk on machines and algorithms by Peter Boyle. The immediate roadmap for new computer architectures shows increases of around 400 times in the single-precision performance per node, but only a two-fold increase in the bandwidth of the interconnects, and this imbalance must be taken into account in algorithm design and implementation in order to achieve good scaling behaviour. Large increases in chip performance are to be expected from the three-dimensional arrangement of units, which will allow thicker and shorter copper wires, although there remain engineering problems to solve, such as how to efficiently get the heat out of such chips. In terms of algorithms, multigrid solvers are now becoming available for a larger variety of fermion formulations, leading to potentially great increases in performance near the chiral and continuum limits. Multilevel integration methods, which allow for an exponential reduction of the noise, also look interesting, although at the moment these work only in the quenched theory.

The IAC announced that Lattice 2018 will take place at Michigan State University. Elvira Gamiz as the chair of the Lattice 2017 LOC extended an invitation to the lattice community to come to Granada for Lattice 2017, which will take place in the week 18-24 June 2017. And with that, and a round of well-deserved applause for the organizers, the conference closed.

My further travel plans are of interest only to a small subset of my readers, and need not be further elaborated upon in this venue.

Friday, July 29, 2016

Lattice 2016, Day Five

Today was the day of finite temperature and density, on which the general review talk was delivered by Heng-Tong Ding. While agreement has in the meantime been reached on the transition temperature, the nature of the transition (a crossover) and the equation of state at the physical quark masses, on which different formulations differed a lot in the past, the Columbia plot of the nature of the transition as a function of the light and strange quark masses still remains to be explored, and there are discrepancies between results obtained in different formulations. On the topic of U(1)A restoration (on which I do have a layman's question: to my understanding, U(1)A is broken by the axial anomaly, which arises from the path integral measure - so why should one expect the symmetry to be restored at high temperature? The situation is quite different from dynamical spontaneous symmetry breaking, as far as I understand), there is no evidence for restoration so far. A number of groups have taken to using the gradient flow as a tool to perform relatively cheap investigations of the equation of state. There are also new results from the different approaches to finite-density QCD, including cumulants from the Taylor-expansion approach, which can be related to heavy-ion observables, and new ways of stabilizing complex Langevin dynamics.

This was followed by two topical talks. The first, by Seyong Kim, was on the subject of heavy flavours at finite temperature. Heavy flavours are one of the most important probes of the quark-gluon plasma, and J/ψ suppression has served as a diagnostic tool of QGP formation for a long time. To understand the influence of high temperatures on the survival of quarkonium states and on the transport properties of heavy flavours in the QGP, knowledge of the spectral functions is needed. Unfortunately, extracting these from a finite number of points in Euclidean time is an ill-posed problem, especially so when the time extent is small at high temperature. The methods used to get at them nevertheless, such as the maximum entropy method or Bayesian fits, need to use some kind of prior information, introducing the risk of a methodological bias leading to systematic errors that may be not only quantitative, but even qualitative; as an example, MEM shows P-wave bottomonium to melt around the transition temperature, whereas a newer Bayesian method shows it to survive, so clearly more work is needed.
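To see why the problem is ill-posed, it helps to write down the relation between the measured Euclidean correlator and the spectral function; with the standard finite-temperature kernel (quoted from memory, so conventions may differ in detail) it reads

\[
G(\tau,T)=\int_0^\infty \frac{\mathrm{d}\omega}{2\pi}\,\rho(\omega,T)\,
\frac{\cosh\!\left[\omega\left(\tau-\frac{1}{2T}\right)\right]}{\sinh\!\left(\frac{\omega}{2T}\right)}\,,
\]

so that a continuous function ρ(ω,T) has to be inferred from O(10) noisy values of G(τ,T), which is only possible with additional prior input.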

The second topical talk was Kurt Langfeld speaking about the density-of-states method. This method is based on determining a function ρ(E), which is essentially the path integral of δ(S[φ]-E), such that the partition function can be written as the Laplace transform of ρ; this can be generalized to the case of actions with a sign problem, where the partition function can then be written as the Fourier transform of a function P(s). An algorithm to compute such functions exists in the form of what looks like a sort of microcanonical simulation restricted to a window [E-δE;E+δE], which determines the local slope of log ρ at E, whence ρ can be reconstructed. Ergodicity is ensured by having the different windows overlap and running in parallel, with a possibility of "replica exchange" between the processes running for neighbouring windows when configurations within the overlap between them are generated. The examples shown, e.g. for the Potts model, looked quite impressive in that the method appears able to resolve double-peak structures even when the trough between the peaks is suppressed by many orders of magnitude, such that a Markov process would have no chance of crossing between the two probability peaks.
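To illustrate the idea, here is a toy sketch in Python (my own illustration, not Langfeld's actual LLR algorithm, whose precise update rule and normalisation differ): for an N-dimensional Gaussian "action" the density of states is known analytically, and a window-restricted Metropolis simulation combined with a Robbins-Monro-type iteration converges to the local slope of log ρ(E), just as described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: N-dimensional Gaussian "action" S(x) = x.x/2, for which the
# density of states is known analytically: rho(E) ~ E^(N/2 - 1), so that
# d log(rho)/dE = (N/2 - 1)/E.  (Purely illustrative; not lattice QCD.)
N = 10
E0, dE = 5.0, 0.5            # window centre and width: E in [E0 - dE/2, E0 + dE/2]

def action(x):
    return 0.5 * float(np.dot(x, x))

def restricted_average(x, a, n_steps=5000, step=0.1):
    """Metropolis updates with weight exp(-a*S), restricted to the window;
    returns the updated configuration and the window average of S."""
    E, Esum = action(x), 0.0
    for _ in range(n_steps):
        xp = x + step * rng.normal(size=N)
        Ep = action(xp)
        if abs(Ep - E0) <= dE / 2 and rng.random() < np.exp(-a * (Ep - E)):
            x, E = xp, Ep
        Esum += E
    return x, Esum / n_steps

# start from a configuration inside the window
x = rng.normal(size=N)
x *= np.sqrt(2.0 * E0) / np.linalg.norm(x)
a = 0.0

# Robbins-Monro-style iteration: at the fixed point the reweighted energy
# distribution inside the window is flat, i.e. <S> = E0, and a equals the
# local slope of log(rho) at E0.
for n in range(100):
    x, Ebar = restricted_average(x, a)
    a += (12.0 / dE**2) * (Ebar - E0) / (n + 1)

print(f"estimated slope of log rho at E0 : {a:.3f}")
print(f"exact value (N/2 - 1)/E0         : {(N / 2 - 1) / E0:.3f}")
```

The two numbers should agree at the level of a few per cent (up to the O(δE²) bias of the window approximation and the Monte Carlo noise). In the real algorithm many such windows, covering the whole energy range, are run in parallel with replica exchange between overlapping neighbours, and the slopes are then integrated up to reconstruct log ρ(E).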

After the coffee break, Aleksi Kurkela reviewed the phenomenology of heavy ions. The flow properties that were originally taken as a sign of hydrodynamics having set in are now also observed in pp collisions, which seem unlikely to be hydrodynamical. In understanding and interpreting these results, the pre-equilibrium evolution is an important source of uncertainty; the current understanding seems to be that the system goes from an overoccupied to an underoccupied state before thermalizing, making different descriptions necessary at different times. At early times, simulations of classical Yang-Mills theory on a lattice in proper-time/rapidity coordinates are used, whereas later a quasiparticle description and kinetic theory can be applied; all this seems to be qualitative so far.

The energy-momentum tensor, which plays an important role in thermodynamics and hydrodynamics, was the topic of the last plenary of the day, which was given by Hiroshi Suzuki. Translation invariance is broken on the lattice, so the Ward-Takahashi identity for the energy-momentum tensor picks up an O(a) violation term, which can become O(1) by radiative corrections. As a consequence, three different renormalization factors are needed to renormalize the energy-momentum tensor. One way of getting at these is via the shifted boundary conditions of Giusti and Meyer; another is the use of the gradient flow at short flow times, and there are first results from both methods.

The parallel sessions of the afternoon concluded the parallel programme.

Thursday, July 28, 2016

Lattice 2016, Days Three and Four

Following the canonical script for lattice conferences, yesterday was the day without plenaries. Instead, the morning was dedicated to parallel sessions (including my own talk), and the afternoon was free time with the option of taking one of several arranged excursions.

I went on the excursion to Salisbury cathedral (which is notable both for its fairly homogeneous and massive architectural ensemble, and for being home to one of four original copies of the Magna Carta) and Stonehenge (which in terms of diameter seems to be much smaller than I had expected from photos).

Today began with the traditional non-lattice theory talk, which was given by Monika Blanke, who spoke about the impact of lattice QCD results on CKM phenomenology. Since quarks cannot be observed in isolation, the extraction of CKM matrix elements from experimental results always requires knowledge of the appropriate hadronic matrix elements of the currents involved in the measured reaction. This means that lattice results for the form factors of heavy-to-light semileptonic decays and for the hadronic parameters governing neutral kaon and B meson mixing are of crucial importance to CKM phenomenology, to the extent that there is even a sort of "wish list" to the lattice. For both |Vcb| and |Vub| there has long been a discrepancy between the values extracted from inclusive and from exclusive decays, and the ratio |Vub/Vcb| that can be extracted from decays of Λb baryons only adds to the tension. However, this is likely to be a result of underestimated theoretical uncertainties or experimental issues, since the pattern of the discrepancies does not agree with what would result from new physics effects induced by right-handed currents. General models of flavour-violating new physics seem to favour the inclusive value for |Vub|. In b->s transitions, there is evidence for new physics effects at the 4σ level, but significant theoretical uncertainties remain. The B(s)->μ+μ- branching fractions are currently in agreement with the SM at the 2σ level, but new, more precise measurements are forthcoming.

Ran Zhou complemented this with a review talk about heavy flavour results from the lattice, where there are new results from a variety of different approaches (NRQCD, HQET, Fermilab and Columbia RHQ formalisms), which can serve as useful and important cross-checks on each other's methodological uncertainties.

Next came a talk by Amy Nicholson on neutrinoless double β decay results from the lattice. Neutrinoless double β decays are possible if neutrinos are Majorana particles, which would help to explain the small masses of the observed left-handed neutrinos through the see-saw mechanism pushing the right-handed neutrinos off to near the GUT scale. Treating the double β decay in the framework of a chiral effective theory, the leading-order matrix element required is that of the process π- -> π+ e- e-, for which there are first results in lattice QCD. The NLO process would have disconnected diagrams, but cannot contribute to the 0+ -> 0+ transitions which are experimentally studied, whereas the NNLO process involves two-nucleon operators and still remains to be studied in greater detail on the lattice.

After the coffee break, Agostino Patella reviewed the hot topic of QED corrections to hadronic observables. There are currently two main methods for dealing with QED in the context of lattice simulations: either to simulate QCD+QED directly (usually at unphysically large electromagnetic couplings followed by an extrapolation to the physical value of α=1/137), or to expand it in powers of α and to measure only the resulting correlation functions (which will be four-point functions or higher) in lattice QCD. Both approaches have been used to obtain some already very impressive results on isospin-breaking QED effects in the hadronic spectrum, as shown already in the spectroscopy review talk. There are, however, still a number of theoretical issues connected to the regularization of IR modes that relate to the Gauss law constraint that would forbid the existence of a single charged particle (such as a proton) in a periodic box. The prescriptions to evade this problem all lead to a non-commutativity of limits requiring the infinite-volume limit to be taken before other limits (such as the continuum or chiral limits): QEDTL, which omits the global zero modes of the photon field, is non-local and does not have a transfer matrix; QEDL, which omits the spatial zero modes on each timeslice, has a transfer matrix, but is still non-local and renormalizes in a non-standard fashion, such that it does not have a non-relativistic limit; the use of a massive photon leads to a local theory with softly broken gauge symmetry, but still requires the infinite-volume limit to be taken before removing the photon mass. Going beyond hadron masses to decays introduces new IR problems, which need to be treated in the Bloch-Nordsieck way, leading to potentially large logarithms.
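Schematically, and with conventions as I recall them (so treat the formulae as illustrative), the two zero-mode prescriptions differ in which Fourier modes of the photon field are removed:

\[
\mathrm{QED_{TL}}:\ \tilde A_\mu(k_0=0,\mathbf{k}=\mathbf{0})=0\,,\qquad
\mathrm{QED_{L}}:\ \tilde A_\mu(k_0,\mathbf{k}=\mathbf{0})=0\ \text{ for all } k_0\,,
\]

i.e. QEDTL removes a single global mode, while QEDL removes the spatial zero mode on every timeslice, which is local enough in time to preserve a transfer matrix but modifies the finite-volume behaviour in the way described above.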

The 2016 Ken Wilson Lattice Award was awarded to Antonin Portelli for his outstanding contributions to our understanding of electromagnetic effects on hadron properties. Antonin was one of the driving forces behind the BMW collaboration's effort to determine the proton-neutron mass difference, which resulted in a Science paper exhibiting one of the most frequently-shown and impressive spectrum plots at this conference.

In the afternoon, parallel sessions took place, and in the evening there was a (very nice) conference dinner at the Southampton F.C. football stadium.

Tuesday, July 26, 2016

Lattice 2016, Day Two

Hello again from Lattice 2016 at Southampton. Today's first plenary talk was the review of nuclear physics from the lattice given by Martin Savage. Doing nuclear physics from first principles in QCD is obviously very hard, but also necessary in order to truly understand nuclei in theoretical terms. Examples of needed theory predictions include the equation of state of dense nuclear matter, which is important for understanding neutron stars, and the nuclear matrix elements required to interpret future searches for neutrinoless double β decays in terms of fundamental quantities. The problems include the huge number of required quark-line contractions and the exponentially decaying signal-to-noise ratio, but there are theoretical advances that increasingly allow these to be brought under control. The main competing procedures are more or less direct applications of the Lüscher method to multi-baryon systems, and the HALQCD method of computing a nuclear potential from Bethe-Salpeter amplitudes and solving the Schrödinger equation for that potential. There has been a lot of progress in this field, and there are now first results for nuclear reaction rates.

Next, Mike Endres spoke about new simulation strategies for lattice QCD. One of the major problems in going to very fine lattice spacings is the well-known phenomenon of critical slowing-down, i.e. the divergence of the autocorrelation times with some negative power of the lattice spacing, which is particularly severe for the topological charge (a quantity that cannot change at all in the continuum limit), leading to the phenomenon of "topology freezing" in simulations at fine lattice spacings. To overcome this problem, changes in the boundary conditions have been proposed: open boundary conditions that allow topological charge to move into and out of the system, and non-orientable boundary conditions that destroy the notion of an integer topological charge. An alternative route lies in algorithmic modifications such as metadynamics, where a bias potential is introduced to disfavour revisiting configurations, so as to forcibly sample across the potential wells of different topological sectors over time, or multiscale thermalization, where a Markov chain is first run at a coarse lattice spacing to obtain well-decorrelated configurations, and then each of those is subjected to a refining operation to obtain a (non-thermalized) gauge configuration at half the lattice spacing, each of which can then hopefully be thermalized by a short sequence of Monte Carlo update operations.
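As a cartoon of the metadynamics idea (a deliberately simple toy of my own: a one-dimensional double well stands in for the topological sectors, plain Langevin dynamics replaces HMC, and the hill parameters are arbitrary), the sketch below deposits Gaussian bias "hills" at visited values of the collective variable, which gradually fill the wells and force the dynamics to cross a barrier it would otherwise essentially never cross at this temperature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Double-well potential; x plays the role of a collective variable such as
# the topological charge, and the two wells mimic two "sectors".
def dU(x):
    return 4.0 * x * (x**2 - 1.0)        # derivative of U(x) = (x^2 - 1)^2

hills = []                               # centres of deposited Gaussian hills
w, sigma = 0.05, 0.2                     # hill height and width

def bias_force(x):
    # minus the derivative of the sum of Gaussian hills deposited so far
    if not hills:
        return 0.0
    d = (x - np.asarray(hills)) / sigma
    return float(np.sum(w * d / sigma * np.exp(-0.5 * d * d)))

eps, T = 0.01, 0.1                       # Langevin step size and temperature
x = 1.0
signs = []

for n in range(200_000):
    force = -dU(x) + bias_force(x)       # force from U plus accumulated bias
    x += eps * force + np.sqrt(2.0 * eps * T) * rng.normal()
    if n % 500 == 0:
        hills.append(x)                  # deposit a new hill every 500 steps
    signs.append(1 if x > 0 else -1)

crossings = int(np.sum(np.abs(np.diff(signs)) > 0))
print("barrier crossings with the metadynamics bias:", crossings)
# without the bias (hills left empty) the walker stays in one well
# for essentially the entire run at this temperature
```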

As another example of new algorithmic ideas, Shinji Takeda presented tensor networks, which are mathematical objects that assign a tensor to each site of a lattice, with lattice links denoting the contraction of tensor indices. An example is given by the rewriting of the partition function of the Ising model that is at the heart of the high-temperature expansion, where the sum over the spin variables is exchanged against a sum over link variables taking values of 0 or 1. One of the applications of tensor networks in field theory is that they allow for an implementation of the renormalization group based on performing a tensor decomposition along the lines of a singular value decomposition, which can be truncated, and contracting the resulting approximate tensor decomposition into new tensors living on a coarser grid. Iterating this procedure until only one lattice site remains allows the evaluation of partition functions without running into any sign problems and at only O(log V) effort.
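The first ingredients of this construction are easy to show in code; the sketch below (a minimal illustration of my own, not Takeda's implementation, and without the full coarse-graining recursion) builds the local Ising tensor from the link-variable rewriting, checks it against a brute-force spin sum on a 2x2 periodic lattice, and performs the SVD splitting that constitutes the basic coarse-graining move:

```python
import numpy as np

beta = 0.4

# Local tensor for the 2d Ising model from the link-variable rewriting:
# exp(beta*s*s') = cosh(beta) + s*s'*sinh(beta), so each link carries an
# index n = 0, 1 and each site sums over its spin.
W = np.array([[np.sqrt(np.cosh(beta)),  np.sqrt(np.sinh(beta))],
              [np.sqrt(np.cosh(beta)), -np.sqrt(np.sinh(beta))]])   # W[spin, n]
T = np.einsum('si,sj,sk,sl->ijkl', W, W, W, W)   # T[left, right, up, down]

# Check: contracting four tensors on a 2x2 periodic lattice (right matched to
# the neighbour's left, up to the neighbour's down, with wrap-around) gives
# the same partition function as the brute-force sum over the 2^4 spin states.
Z_tensor = np.einsum('wxyz,xwpq,stzy,tsqp->', T, T, T, T)
Z_brute = 0.0
for spins in np.ndindex(2, 2, 2, 2):
    sA, sB, sC, sD = 1 - 2 * np.array(spins)             # spins +/- 1
    E = 2 * (sA * sB + sC * sD + sA * sC + sB * sD)      # each bond doubled by periodicity
    Z_brute += np.exp(beta * E)
print(Z_tensor, Z_brute)                                  # should agree

# The basic coarse-graining move is a truncated SVD of T viewed as a matrix
# between index pairs; for this initial tensor the splitting is still exact
# (only two singular values are non-zero), and the truncation to the chi
# largest singular values only starts to matter once the bond dimension has
# grown under iteration of the coarse-graining.
chi = 2
M = T.reshape(4, 4)                                       # (left,right) x (up,down)
U, S, Vh = np.linalg.svd(M)
print("singular values of the splitting:", S)
M_chi = (U[:, :chi] * S[:chi]) @ Vh[:chi, :]
print("splitting error at chi=2:", np.linalg.norm(M - M_chi))   # ~ machine precision here
```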

After the coffee break, Sara Collins gave the review talk on hadron structure. This is also a field in which a lot of progress has been made recently, with most of the sources of systematic error either under control (e.g. by performing simulations at or near the physical pion mass) or at least well understood (e.g. excited-state and finite-volume effects). The isovector axial charge gA of the nucleon, which for a long time was a bit of an embarrassment to lattice practitioners, since it stubbornly refused to approach its experimental value, is now understood to be particularly severely affected by excited-state effects, and once these are well enough suppressed or properly accounted for, the situation now looks quite promising. This lends much larger credibility to lattice predictions for the scalar and tensor nucleon charges, for which little or no experimental data exists. The electromagnetic form factors are also in much better shape than one or two years ago, with the electric Sachs form factor coming out close to experiment (but still with insufficient precision to resolve the conflict between the experimental electron-proton scattering and muonic hydrogen results), while now the magnetic Sachs form factor shows a trend to undershoot experiment. Going beyond isovector quantities (in which disconnected diagrams cancel), the progress in simulation techniques for disconnected diagrams has enabled the first computation of the purely disconnected strangeness form factors. The sigma term σπN comes out smaller on the lattice than it does in experiment, which still needs investigation, and the average momentum fraction <x> still needs to become the subject of a similar effort as the nucleon charges have received.

In keeping with the pattern of having large review talks immediately followed by a related topical talk, Huey-Wen Lin was next with a talk on the Bjorken-x dependence of the parton distribution functions (PDFs). While the PDFs are defined on the lightcone, which is not readily accessible on the lattice, a large-momentum effective theory formulation allows one to obtain them as the infinite-momentum limit of finite-momentum parton distribution amplitudes. First studies show interesting results, but renormalization still remains to be performed.
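Schematically (with the precise Dirac structure, Wilson-line details and sign conventions elided, so take this only as an indication of the structure), the quantity computed on the lattice is an equal-time matrix element in a nucleon boosted to momentum P_z,

\[
\tilde q(x,P_z)=\int\frac{\mathrm{d}z}{4\pi}\;e^{\,i x P_z z}\;
\langle P|\,\bar\psi(z)\,\Gamma\,W(z,0)\,\psi(0)\,|P\rangle\,,
\]

with W(z,0) a straight spatial Wilson line; the lightcone PDF is then recovered in the P_z → ∞ limit after perturbative matching (and, as mentioned above, renormalization).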

After lunch, there were parallel sessions, of which I attended the ones into which most of the (g-2) talks had been collected; these showed quite a rapid rate of progress, in particular in the treatment of the disconnected contributions.

In the evening, the poster session took place.

Monday, July 25, 2016

Lattice 2016, Day One

Hello from Southampton, where I am attending the Lattice 2016 conference.

I arrived yesterday safe and sound, but unfortunately too late to attend the welcome reception. Today started off early and quite well with a full English breakfast, however.

The conference programme was opened with a short address by the university's Vice-President of Research, who made a point of pointing out that he, like 93% of UK scientists, had voted to remain in the EU - an interesting testimony to the political state of affairs, I think.

The first plenary talk of the conference was a memorial to the scientific legacy of Peter Hasenfratz, who died earlier this year, delivered by Urs Wenger. Peter Hasenfratz was one of the pioneers of lattice field theory, and hearing of his groundbreaking achievements is one of those increasingly rare occasions when I get to feel very young: when he organized the first lattice symposium in 1982, he sent out individual hand-written invitations, and the early lattice reviews he wrote were composed at a time when most results were obtained in the quenched approximation. But his achievements are still very much current, amongst other things in the form of fixed-point actions as a realization of the Ginsparg-Wilson relation, which gave rise to the booming interest in chiral fermions.

This was followed by the review of hadron spectroscopy by Chuan Liu. The contents of the spectroscopy talks have by now shifted away from the ground-state spectrum of stable hadrons, the calculation of which has become more of a benchmark task, and towards more complex issues, such as the proton-neutron mass difference (which requires the treatment of isospin breaking effects both from QED and from the difference in bare mass of the up and down quarks) or the spectrum of resonances (which requires a thorough study of the volume dependence of excited-state energy levels via the Lüscher formalism). The former is required as part of the physics answer to the ageless question of why anything exists at all, and the latter is called for in particular by the still-pressing question of the nature of the XYZ states.

Next came a talk by David Wilson on a more specific spectroscopy topic, namely resonances in coupled-channel scattering. Getting these right requires not only extensions of the Lüscher formalism, but also the extraction of very large numbers of energy levels via the generalized eigenvalue problem.
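The workhorse for extracting such towers of energy levels is the generalized eigenvalue problem for a matrix of correlators C_ij(t) built from a large basis of interpolating operators,

\[
C(t)\,v_n(t,t_0)=\lambda_n(t,t_0)\,C(t_0)\,v_n(t,t_0)\,,\qquad
\lambda_n(t,t_0)\sim e^{-E_n (t-t_0)}\,,
\]

so that the finite-volume energies E_n feeding into the (coupled-channel) Lüscher analysis are read off from the eigenvalues.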

After the coffee break, Hartmut Wittig reviewed the lattice efforts at determining the hadronic contributions to the anomalous magnetic moment (g-2)μ of the muon from first principles. This is a very topical problem, as the next generation of muon experiments will reduce the experimental error by a factor of four or more, which will require a correspondingly large reduction in the theoretical uncertainties in order to interpret the experimental results. Getting to this level of accuracy requires getting the hadronic vacuum polarization contribution to sub-percent accuracy (which requires full control of both finite-volume and cut-off effects, and a reasonably accurate estimate for the disconnected contributions) and the hadronic light-by-light scattering contribution to an accuracy of better than 10% (which one way or another requires the calculation of a four-point function including a reasonable estimate for the disconnected contributions). There has been good progress towards both of these goals from a number of different collaborations, and the generally good overall agreement between results obtained using widely different formulations bodes well for the overall reliability of the lattice results, but there are still many obstacles to overcome.
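For orientation (schematically, and with normalisation conventions glossed over), the leading hadronic contribution is obtained by folding the subtracted hadronic vacuum polarization, computed from the correlator of electromagnetic currents, with a known QED kernel,

\[
a_\mu^{\mathrm{hvp}}=\left(\frac{\alpha}{\pi}\right)^{2}\int_0^{\infty}\mathrm{d}Q^2\,f(Q^2)\,\hat\Pi(Q^2)\,,
\qquad \hat\Pi(Q^2)=\Pi(Q^2)-\Pi(0)\,,
\]

where the kernel f(Q²) is strongly peaked at Q² of the order of m_μ², i.e. precisely in the low-momentum region where finite-volume effects and statistical noise are hardest to control.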

The last plenary talk of the day was given by Sergei Dubovsky, who spoke about efforts to derive a theory of the QCD string. As with most stringy talks, I have to confess to being far too ignorant to give a good summary; what I took home is that there is some kind of string worldsheet theory with Goldstone bosons that can be used to describe the spectrum of large-Nc gauge theory, and that there are a number of theoretical surprises there.

Since the plenary programme is being streamed on the web, by the way, even those of you who cannot attend the conference can now do without my no doubt quite biased and very limited summaries and hear and see the talks for yourselves.

After lunch, parallel sessions took place. I found the sequence of talks by Stefan Sint, Alberto Ramos and Rainer Sommer about a precise determination of αs(MZ) using the Schrödinger functional and the gradient-flow coupling very interesting.

Tuesday, September 15, 2015

Fundamental Parameters from Lattice QCD, Last Days

The last few days of our scientific programme were quite busy for me, since I had agreed to give the summary talk on the final day. I therefore did not get around to blogging, and will keep this much-delayed summary rather short.

On Wednesday, we had a talk by Michele Della Morte on non-perturbatively matched HQET on the lattice and its use to extract the b quark mass, and a talk by Jeremy Green on the lattice measurement of the nucleon strange electromagnetic form factors (which are purely disconnected quantities).

On Thursday, Sara Collins gave a review of heavy-light hadron spectra and decays, and Mike Creutz presented arguments for why the question of whether the up-quark is massless is scheme dependent (because the sum and difference of the light quark masses are protected by symmetries, but will in general renormalize differently).
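A schematic way to phrase that argument (my paraphrase, not Mike's slides): if the symmetric and antisymmetric combinations of the light quark masses renormalize with different factors,

\[
(m_u+m_d)^{\mathrm{ren}}=Z_{+}\,(m_u+m_d)\,,\qquad
(m_u-m_d)^{\mathrm{ren}}=Z_{-}\,(m_u-m_d)\,,\qquad
m_u^{\mathrm{ren}}=\tfrac{1}{2}\left[Z_{+}\,(m_u+m_d)+Z_{-}\,(m_u-m_d)\right]\,,
\]

then whether the renormalized up-quark mass vanishes depends on the ratio Z-/Z+, which is itself scheme and scale dependent.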

On Friday, I gave the summary of the programme. The main themes that I identified were the question of how to estimate systematic errors, and how to treat them in averaging procedures, the issues of isospin breaking and scale setting ambiguities as major obstacles on the way to sub-percent overall precision, and the need for improved communication between the "producers" and "consumers" of lattice results. In the closing discussion, the point was raised that for groups like CKMfitter and UTfit the correlations between different lattice quantities are very important, and that lattice collaborations should provide the covariance matrices of the final results for different observables that they publish wherever possible.

Wednesday, September 09, 2015

Fundamental Parameters from Lattice QCD, Day Seven

Today's programme featured two talks about the interplay between the strong and the electroweak interactions. The first speaker was Gregorio Herdoíza, who reviewed the determination of hadronic corrections to electroweak observables. In essence these determinations are all very similar to the determination of the leading hadronic correction to (g-2)μ, since they involve the lattice calculation of the hadronic vacuum polarisation. In the case of the electromagnetic coupling α, its low-energy value is known to a precision of 0.3 ppb, but the value of α(mZ²) is known only to 0.1 ‰, and a large part of the difference in uncertainty is due to the hadronic contribution to the running of α, i.e. the hadronic vacuum polarization. Phenomenologically this can be estimated through the R-ratio, but this results in relatively large errors at low Q². On the lattice, the hadronic vacuum polarization can be measured through the correlator of vector currents, and currently a determination of the running of α in agreement with phenomenology and with similar errors can be achieved, so that in the future lattice results are likely to take the lead here. In the case of the electroweak mixing angle, sin²θW is known well at the Z pole, but only poorly at low energy, although a number of experiments (including the P2 experiment at Mainz) are aiming to reduce the uncertainty at lower energies. Again, the running can be determined from the Z-γ mixing through the associated current-current correlator, and current efforts are under way, including an estimation of the systematic error caused by the omission of quark-disconnected diagrams.
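Schematically (and up to sign and normalisation conventions, for which I won't vouch here), the quantity computed on the lattice is the subtracted vacuum polarisation obtained from the zero-momentum correlator G(t) of the electromagnetic current in the time-momentum representation,

\[
\hat\Pi(Q^2)=\int_0^\infty \mathrm{d}t\;G(t)\left[t^2-\frac{4}{Q^2}\sin^2\!\left(\frac{Qt}{2}\right)\right]\,,
\qquad \Delta\alpha_{\mathrm{had}}(Q^2)=4\pi\alpha\,\hat\Pi(Q^2)\,,
\]

which makes explicit that the long-distance tail of G(t) controls the low-Q² region where the lattice and R-ratio determinations compete.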

The second speaker was Vittorio Lubicz, who looked at the opposite problem, i.e. the electroweak corrections to hadronic observables. Since α is approximately 1/137, electromagnetic corrections at the one-loop level will become important once the 1% level of precision is being aimed for, and since the up and down quarks have different electrical charges, this is an isospin-breaking effect, which makes it necessary to consider at the same time the strong isospin breaking caused by the difference in the up and down quark masses. There are two main methods to include QED effects into lattice simulations; the first is direct simulation of QCD+QED, and the second is the method of incorporating isospin-breaking effects in a systematic expansion pioneered by Vittorio and colleagues in Rome. Either method requires a systematic treatment of the IR divergences arising from the lack of a mass gap in QED. In the Rome approach this is done by splitting the Bloch-Nordsieck treatment of IR divergences and soft bremsstrahlung into two pieces, whose large-volume limits can be taken separately. There are many other technical issues to be dealt with, but first physical results from this method should be forthcoming soon.

In the afternoon there was a discussion about QED effects and the range of approaches used to treat them.

Monday, September 07, 2015

Fundamental Parameters from Lattice QCD, Day Six

The second week of our Scientific Programme started with an influx of new participants.

The first speaker of the day was Chris Kelly, who spoke about CP violation in the kaon sector from lattice QCD. As I hardly need to tell my readers, there are two sources of CP violation in the kaon system, the indirect CP-violation from neutral kaon-antikaon mixing, and the direct CP-violation from K->ππ decays. Both, however, ultimately stem from the single source of CP violation in the Standard Model, i.e. the complex phase of the CKM matrix, which determines the area of the unitarity triangle. The hadronic parameter relevant to indirect CP-violation is the kaon bag parameter BK, which is a "gold-plated" quantity that can be very well determined on the lattice; however, the error on the CP violation parameter εK constraining the upper vertex of the unitarity triangle is dominated by the uncertainty on the CKM matrix element Vcb. Direct CP-violation is particularly sensitive to possible BSM effects, and is therefore of special interest. Chris presented the recent efforts of the RBC/UKQCD collaboration to address the extraction of the relevant parameter ε'/ε and associated phenomena such as the ΔI=1/2 rule. For the two amplitudes A0 and A2, different tricks and methods were required; in particular for the isospin-zero channel, all-to-all propagators are needed. The overall errors are still large: although the systematics are dominated by the perturbative matching to the MSbar scheme, the statistical errors are very sizable, so that the observed 2.1σ tension with experiment is not particularly exciting or disturbing yet.

The second speaker of the morning was Gunnar Bali, who spoke about the topic of renormalons. It is well known that the perturbative series for quantum field theories are in fact divergent asymptotic series, whose typical terms grow like n^k z^n n! at large orders n. Using the Borel transform, such series can be resummed, provided that there are no poles (IR renormalons) of the Borel transform on the positive real axis. In QCD, such poles arise from IR divergences in diagrams with chains of bubbles inserted into gluon lines, as well as from instanton-antiinstanton configurations in the path integral. The latter can be removed to infinity by considering the large-Nc limit, but the former are there to stay, making perturbatively defined quantities ambiguous at higher orders. A relevant example is given by heavy quark masses, where the different definitions (pole mass, MSbar mass, 1S mass, ...) are related by perturbative conversion factors; in a heavy-quark expansion, the mass of a heavy-light meson can be written as M=m+Λ+O(1/m), where m is the heavy quark mass, and Λ a binding energy of the order of some QCD energy scale. As M is unambiguous, the ambiguities in m must correspond to ambiguities in the binding energy Λ, which can be computed to high orders in numerical stochastic perturbation theory (NSPT). After dealing with some complications arising from the fact that IR divergences cannot be probed directly in a finite volume, it is found that the minimum term in the perturbative series (which corresponds to the perturbative ambiguity) is of order 180 MeV in the quenched theory, meaning that heavy quark masses are only defined up to this accuracy. Another example is the gluon condensate (which may be of relevance to the extraction of αs from τ decays), where it is found that the ambiguity is of the same size as the typically quoted result, making the usefulness of this quantity doubtful.
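For readers not steeped in the jargon, the mechanism can be summarised schematically as follows: if a series f(g) ~ Σ c_n g^(n+1) has coefficients growing like c_n ~ C n^k z^n n!, its Borel transform and (attempted) resummation are

\[
B[f](t)=\sum_n \frac{c_n}{n!}\,t^n\,,\qquad
f(g)=\int_0^\infty \mathrm{d}t\;e^{-t/g}\,B[f](t)\,,
\]

and the singularity of B[f] at t = 1/z on the positive real axis (the IR renormalon) makes the integral ambiguous by terms of order e^{-1/(zg)}, which for the pole mass of a heavy quark works out to an ambiguity of order Λ_QCD - this is what the 180 MeV figure quoted above quantifies.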

Friday, September 04, 2015

Fundamental Parameters from Lattice QCD, Day Five

The first speaker today was Martin Lüscher, who spoke about revisiting numerical stochastic perturbation theory. The idea behind numerical stochastic perturbation theory is to perform a simulation of a quantum field theory using the Langevin algorithm and to perturbatively expand the fields, which leads to a tower of coupled evolution equations, where only the lowest-order one depends explicitly on the noise, whereas the higher-order ones describe the evolution of the higher-order coefficients as a function of the lower-order ones. In Numerical Stochastic Perturbation Theory (NSPT), the resulting equations are integrated numerically (up to some, possibly rather high, finite order in the coupling), and the average over noises is replaced by a time average. The problems with this approach are that the autocorrelation time diverges as the inverse square of the lattice spacing, and that the extrapolation in the Langevin time step size is difficult to control well. An alternative approach is given by Instantaneous Stochastic Perturbation Theory (ISPT), in which the Langevin time evolution is replaced by the introduction of Gaussian noise sources at the vertices of tree diagrams describing the construction of the perturbative coefficients of the lattice fields. Since there is no free lunch, this approach suffers from power-law divergent statistical errors in the continuum limit, which arise from the way in which power-law divergences that cancel in the mean are shifted around between different orders when computing variances. This does not happen in the Langevin-based approach, because the Langevin theory is renormalizable.
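To make the structure concrete, here is a minimal toy sketch (a single "zero-dimensional" degree of freedom with S(φ) = φ²/2 + g φ⁴/4 and a plain Euler discretization of the Langevin equation, so a cartoon of NSPT rather than anyone's production code): the field is expanded in powers of g, only the lowest order is driven by the noise, and Langevin-time averages of the order-by-order coefficients reproduce the perturbative expansion ⟨φ²⟩ = 1 - 3g + O(g²), up to the O(ε) step-size effects mentioned above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "path integral": a single variable with S(phi) = phi^2/2 + g*phi^4/4.
# Expand phi = phi0 + g*phi1 (+ higher orders) and integrate the Langevin
# equation order by order; only the lowest order sees the noise.
eps = 0.01                    # Langevin step size (finite-eps effects are O(eps))
n_therm, n_meas = 10_000, 400_000

phi0, phi1 = 0.0, 0.0
acc = np.zeros(2)             # accumulates <phi0^2> and 2*<phi0*phi1>

for n in range(n_therm + n_meas):
    eta = np.sqrt(2.0 * eps) * rng.normal()
    # order g^0:  dphi0 = -phi0 dt + noise
    # order g^1:  dphi1 = -(phi1 + phi0^3) dt   (deterministic, driven by phi0)
    new0 = phi0 - eps * phi0 + eta
    new1 = phi1 - eps * (phi1 + phi0**3)
    phi0, phi1 = new0, new1
    if n >= n_therm:
        acc += [phi0**2, 2.0 * phi0 * phi1]

acc /= n_meas
print(f"<phi^2> = {acc[0]:.3f} + ({acc[1]:.3f}) g + ...")
print("exact continuum-Langevin expansion:  1 - 3 g + O(g^2)")
```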

The second speaker of the morning was Siegfried Bethke of the Particle Data Group, who allowed us a glimpse at the (still preliminary) world average of αs for 2015. In 2013, there were five classes of αs determinations: from lattice QCD, τ decays, deep inelastic scattering, e+e- colliders, and global Z pole fits. Except for the lattice determinations (and the Z pole fits, where there was only one number), these were each preaveraged using the range method -- i.e. taking the mean of the highest and lowest central value as the average, and assigning it an uncertainty of half the difference between them. The lattice results were averaged using a χ2-weighted average. The total average (again a weighted average) was dominated by the lattice results, which in turn were dominated by the latest HPQCD result. For 2015, there have been a number of updates to most of the classes, and there is now a new class of αs determinations from the LHC (of which there is currently only one published, which lies rather low compared to other determinations, and is likely a downward fluctuation). In most cases, the new determinations have hardly changed the values and errors of their class, if at all. The most significant change is in the field of lattice determinations, where the PDG will change its policy and will no longer perform its own preaverages, taking instead the FLAG average as the lattice result. As a result, the error on the PDG value will increase; its value will also shift down a little, mostly due to the new LHC value.
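For concreteness, this is what the two pre-averaging procedures look like in code, applied to a set of made-up central values and errors (the numbers below are purely illustrative and not actual αs determinations):

```python
import numpy as np

# Illustrative central values and errors only; not actual alpha_s determinations.
vals = np.array([0.1184, 0.1192, 0.1171, 0.1188])
errs = np.array([0.0007, 0.0012, 0.0015, 0.0009])

# "Range method": the midpoint of the highest and lowest central values,
# with half of their difference assigned as the uncertainty.
range_val = 0.5 * (vals.max() + vals.min())
range_err = 0.5 * (vals.max() - vals.min())

# chi^2-weighted average, as used e.g. for the lattice class.
w = 1.0 / errs**2
wavg = np.sum(w * vals) / np.sum(w)
werr = 1.0 / np.sqrt(np.sum(w))

print(f"range method    : {range_val:.4f} +/- {range_err:.4f}")
print(f"weighted average: {wavg:.4f} +/- {werr:.4f}")
```

The range method deliberately ignores the quoted errors and tends to give a more conservative uncertainty than the weighted average whenever the individual determinations scatter more than their errors would suggest.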

The afternoon discussion centered on αs. Roger Horsley gave an overview of the methods used to determine it on the lattice (ghost vertices, the Schrödinger functional, the static energy at short distances, current-current correlators, and small Wilson loops) and reviewed the criteria used by FLAG to assess the quality of a given determination, as well as the averaging procedure used (which uses a more conservative error than a weighted average would give). In the discussion, the point was raised that reliably increasing the precision to the sub-percent level and beyond will likely require not only addressing the scale-setting uncertainties (which are reflected in the different values for r0 obtained by different collaborations and will affect the running of αs), but also the inclusion of QED effects.