Life on the lattice<br />Thoughts on lattice QCD, particle physics and the world at large.<br />http://latticeqcd.blogspot.com/ (Georg v. Hippel)<br /><br />Lattice 2017, Day Six (Thu, 29 Jun 2017)<br /><br />On the last day of the 2017 lattice conference, there were plenary sessions only. The first plenary session opened with a talk by Antonio Rago, who gave a "community review" of lattice QCD on new chips. New chips in the case of lattice QCD means mostly Intel's new Knights Landing architecture, to whose efficient use significant effort is devoted by the community. Different groups pursue very different approaches, from purely OpenMP-based C codes to mixed MPI/OpenMP-based codes maximizing the efficiency of the SIMD units using assembler code. The new NVidia Tesla Volta and Intel's OmniPath fabric also featured in the review.<br /><br />The next speaker was Zohreh Davoudi, who reviewed lattice inputs for nuclear physics. 
While simulating heavier nuclei directly on the lattice is still infeasible, nuclear phenomenologists appear to be very excited about the first-principles lattice QCD simulations of multi-baryon systems now reaching maturity, because these can be used to tune and validate nuclear models and effective field theories, from which predictions for heavier nuclei can then be derived so as to be based ultimately on QCD. The biggest controversy in the multi-baryon sector at the moment is due to HALQCD's claim that the multi-baryon mass plateaux seen by everyone except HALQCD (who use their own method based on Bethe-Salpeter amplitudes) are probably fakes or "mirages", and that using the Lüscher method to determine multi-baryon binding would require totally unrealistic source-sink separations of over 10 fm. The volume independence of the bound-state energies determined from the allegedly fake plateaux, as contrasted with the volume dependence of the scattering-state energies so extracted, provides a fairly strong defence against this claim, however. There are also new methods to improve the signal-to-noise ratio for multi-baryon correlation functions, such as phase reweighting.<br /><br />This was followed by a talk on the tetraquark candidate Z<sub>c</sub>(3900) by Yoichi Ikeda, who spent a large part of his talk reiterating the HALQCD claim that the Lüscher method requires unrealistically large time separations. During the questions, William Detmold raised the important point that there would be no excited-state contamination at all if the interpolating operator created an eigenstate of the QCD Hamiltonian, and that for improved interpolating operators (such as those generated by the variational method) one can get rather close to this situation, so that the HALQCD criticism seems hardly applicable. 
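For context, the two finite-volume behaviours at stake in that argument can be summarized by the standard textbook formulas (schematic; sign and normalization conventions for the scattering length vary between papers): for two identical particles of mass m with scattering length a<sub>0</sub>, a scattering state is shifted by powers of the volume, while a bound state with binding energy E<sub>B</sub> feels only exponentially small corrections.

```latex
% scattering state: power-law finite-volume energy shift (Lüscher)
\Delta E(L) = -\frac{4\pi a_0}{m L^3}
  \left[\,1 + c_1\,\frac{a_0}{L} + c_2\,\frac{a_0^2}{L^2}\,\right]
  + \mathcal{O}(L^{-6}),
\qquad c_1 \simeq -2.837297,\quad c_2 \simeq 6.375183
% bound state: exponentially suppressed finite-volume shift
E_B(L) = E_B(\infty) + \mathcal{O}\!\left(e^{-\kappa L}\right),
\qquad \kappa = \sqrt{m\,E_B}
```

This is the origin of the diagnostic mentioned above: energies extracted from genuine bound-state plateaux should be essentially volume-independent, while scattering-state energies must drift with L<sup>-3</sup>.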
As for the Z<sub>c</sub>(3900), HALQCD find it to be not a resonance, but a kinematic cusp, although this conclusion is based on simulations at rather heavy pion masses (m<sub>π</sub> &gt; 400 MeV).<br /><br />The final plenary session was devoted to the anomalous magnetic moment of the muon, which is perhaps the most pressing topic for the lattice community, since the new (g-2) experiment is now running, and theoretical predictions matching the improved experimental precision will be needed soon. The first speaker was Christoph Lehner, who presented RBC/UKQCD's efforts to determine the hadronic vacuum polarization contribution to a<sub>μ</sub> with high precision. The strategy for this consists of two main ingredients: one is to minimize the statistical and systematic errors of the lattice calculation by using a full-volume low-mode average via a multigrid Lanczos method, explicitly including the leading effects of strong isospin breaking and QED, and the contribution from disconnected diagrams, and the other is to combine lattice and phenomenology to take maximum advantage of their respective strengths. This is achieved by using the time-momentum representation with a continuum correlator reconstructed from the R-ratio, which turns out to be quite precise at large times, but more uncertain at shorter times, which is exactly the opposite of the situation for the lattice correlator. Using a window which continuously switches over from the lattice to the continuum at time separations around 1.2 fm then minimizes the overall error on a<sub>μ</sub>.<br /><br />The last plenary talk was given by Gilberto Colangelo, who discussed the new dispersive approach to the hadronic light-by-light scattering contribution to a<sub>μ</sub>. Up to now the theory results for this small, but important, contribution have been based on models, which will always have an a priori unknown and irreducible systematic error, although lattice efforts are beginning to catch up. 
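Returning briefly to the window idea in Christoph Lehner's talk: a minimal sketch of such a smooth lattice-to-phenomenology switch might look like the following (the correlators, the smearing width, and the function names are toy choices of my own, not RBC/UKQCD's actual data or code; only the 1.2 fm switch-over point is taken from the talk).

```python
import numpy as np

def theta(t, t0, delta):
    # smooth step function: ~0 for t << t0, ~1 for t >> t0
    return 0.5 * (1.0 + np.tanh((t - t0) / delta))

def combine(t, c_lat, c_pheno, t0=1.2, delta=0.15):
    """Blend a lattice correlator (reliable at short times) with a
    phenomenological R-ratio reconstruction (reliable at long times),
    switching over smoothly around t0 (all times in fm)."""
    w = 1.0 - theta(t, t0, delta)        # weight given to the lattice data
    return w * c_lat + (1.0 - w) * c_pheno

t = np.linspace(0.0, 3.0, 301)
c_lat = np.exp(-0.5 * t)                 # toy stand-ins for the two inputs
c_pheno = np.exp(-0.5 * t) * 1.02
c = combine(t, c_lat, c_pheno)
```

Shifting t0 and the width delta trades the lattice errors at long times against the R-ratio errors at short times, which is how the overall error on a<sub>μ</sub> gets minimized.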
For a dispersive approach based on general principles such as analyticity and unitarity, the hadronic light-by-light tensor first needs to be Lorentz decomposed. This gives 138 tensors, of which 136 are independent; gauge invariance reduces these to 54, of which only 7 are distinct, the rest being related by crossing symmetry. Care has to be taken to choose the tensor basis such that there are no kinematic singularities. A master formula in terms of 12 linear combinations of these components has been derived by Gilberto and collaborators, and using one- and two-pion intermediate states (and neglecting the rest) in a systematic fashion, they have been able to produce a model-independent theory result with small uncertainties based on experimental data for pion form factors and scattering amplitudes.<br /><br />The closing remarks were delivered by Elvira Gamiz, who advised participants that the proceedings deadline of 18 October will be strict, because this year's proceedings will not be published in PoS, but in EPJ Web of Conferences, which operates a much stricter deadline policy. Many thanks to Elvira for organizing such a splendid lattice conference! (I can appreciate how much work that is, and I think you should have received far more applause.)<br /><br />Huey-Wen Lin invited the community to East Lansing, Michigan, USA, for the Lattice 2018 conference, which will take place 22-28 July 2018 on the campus of Michigan State University.<br /><br />The IAC announced that Lattice 2019 will take place in Wuhan, China.<br /><br />And with that the conference ended. 
I stayed in Granada for a couple more days of sightseeing and relaxation, but the details thereof will be of legitimate interest only to a very small subset of my readership (whom I keep updated via different channels), and I therefore conclude my coverage and return the blog to its accustomed semi-hiatus state.<br /><br />Lattice 2017, Day Five (Sun, 25 Jun 2017)<br /><br />The programme for today took account of the late end of the conference dinner in the early hours of the morning by moving the plenary sessions back by half an hour. The first plenary talk of the day was given by Ben Svetitsky, who reviewed the status of BSM investigations using lattice field theory. An interesting point Ben raised was that these studies go not so much "beyond" the Standard Model (like SUSY, dark matter, or quantum gravity would), but "behind" or "beneath" it, by seeking a deeper explanation of the seemingly unnaturally small Higgs mass, flavour hierarchies, and other unreasonable-looking features of the SM. The original technicolour theory is quite dead, being Higgsless, but "walking" technicolour models are an area of active investigation. These models have a β-function that comes close to zero at some large coupling, leading to an almost conformal behaviour near the corresponding IR almost-fixed point. In such almost conformal theories, a light scalar (i.e. the Higgs) could arise naturally as the pseudo-Nambu-Goldstone boson of the approximate dilatation symmetry of the theory. A range of different gauge groups, numbers of flavours, and fermion representations are being investigated, with the conformal or quasi-conformal status of some of these being apparently controversial. 
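To illustrate what "walking" means in practice, here is a toy numerical sketch of my own (not anything shown in the talk): two made-up β-functions are integrated towards the UV, one running normally and one almost stalling near a pseudo-fixed point at α* = 0.6, where the coupling barely changes over many decades of scale.

```python
import numpy as np

def run_to_uv(beta, alpha_ir, n_decades=10, steps_per_decade=2000):
    """Integrate d(alpha)/d(ln mu) = beta(alpha) from an IR scale upward,
    recording the coupling once per decade of scale."""
    dlnmu = np.log(10.0) / steps_per_decade
    alpha = alpha_ir
    out = [alpha]
    for step in range(1, n_decades * steps_per_decade + 1):
        alpha += beta(alpha) * dlnmu          # beta < 0: asymptotic freedom
        if step % steps_per_decade == 0:
            out.append(alpha)
    return np.array(out)

# ordinary running: the coupling falls off steadily towards the UV
running = run_to_uv(lambda a: -0.5 * a**2, alpha_ir=0.7)
# "walking": the beta function almost vanishes near alpha* = 0.6,
# so the coupling lingers there over many decades of scale
walking = run_to_uv(lambda a: -2.0 * a**2 * ((a - 0.6)**2 + 1e-3),
                    alpha_ir=0.7)
```

The numbers (0.5, 2.0, 0.6, 10<sup>-3</sup>) are purely illustrative; real studies compute the β-function non-perturbatively for specific gauge groups and fermion contents.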
An alternative approach to Higgs compositeness has the Higgs appear as the exact Nambu-Goldstone boson of some spontaneous symmetry breaking which keeps SU(2)<sub>L</sub>⨯U(1) intact, with the Higgs potential being generated at the loop level by the coupling to the SM sector. There are also some models of this type being actively investigated.<br /><br />The next plenary speaker was Stefano Forte, who reviewed the status and prospects of determining the strong coupling α<sub>s</sub> from sources other than the lattice. The PDG average for α<sub>s</sub> is a weighted average of six values, four of which are the pre-averages of the determinations from the lattice, from τ decays, from jet rates and shapes, and from parton distribution functions, and two of which are the determinations from the global electroweak fit and from top production at the LHC. Each of these channels has its own systematic issues, and one problem can be that overaggressive error estimates give too much weight to the corresponding determination, leading to statistically implausible scatter of results in some channels. It should be noted, however, that the lattice results are all quite compatible, with the most precise results by ALPHA and by HPQCD (which use different lattice formulations and completely different analysis methods) sitting right on top of each other.<br /><br />This was followed by a presentation by Thomas Korzec of the determination of α<sub>s</sub> by the ALPHA collaboration. I cannot really attempt to do justice to this work in a blog post, so I encourage you to look at their <a href="https://arxiv.org/abs/1706.03821" target="_blank">paper</a>. 
By making use of both the Schrödinger functional and the gradient-flow coupling in finite volume, they are able to non-perturbatively run α<sub>s</sub> between hadronic and perturbative scales with high accuracy.<br /><br />After the coffee break, Erhard Seiler reviewed the status of the complex Langevin method, which is one of the leading methods for simulating actions with a sign problem, e.g. at finite chemical potential or with a θ term. Unfortunately, it is known that the complex Langevin method can sometimes converge to wrong results. This can be traced to the complexification violating the conditions under which the (real) Langevin method is justified; the most important case seems to be the development of zeros in e<sup>-S</sup>, which give rise to poles in the drift force that violate ergodicity. There seems to be a lack of general theorems for situations like this, although the complex Langevin method has apparently been shown to be correct under certain difficult-to-check conditions. One of the best hopes for simulating with complex Langevin seems to be the dynamical stabilization proposed by Benjamin Jäger and collaborators.<br /><br />This was followed by Paulo Bedaque discussing the prospects of solving the sign problem using the method of thimbles and related ideas. As far as I understand, thimbles are permissible integration regions in complexified configuration space on which the imaginary part of the action is constant, and which can thus be integrated over without a sign problem. 
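To make the constant-phase property a little more concrete, here is a toy sketch of my own (a made-up one-dimensional "action", not anything from the talk) of the holomorphic gradient flow that defines the thimbles: along the flow the imaginary part of the action stays constant while the real part grows monotonically.

```python
import numpy as np

def S(z):
    # toy "action" with a complex term, i.e. a sign problem on the real line
    return 0.5 * z**2 + 1j * z

def dS(z):
    return z + 1j

def flow(z0, tau=1.0, steps=20000):
    """Holomorphic gradient flow dz/dtau = conj(S'(z)), integrated with a
    simple Euler scheme. Along this flow Im S is conserved (d S/dtau =
    |S'|^2 is real), which is what makes the thimbles sign-problem free."""
    dt = tau / steps
    z = complex(z0)
    for _ in range(steps):
        z += np.conj(dS(z)) * dt
    return z

z0 = 1.0          # a point on the original (real) integration contour
z1 = flow(z0)     # flowed towards the thimble attached to z = -i
```

For this quadratic toy action everything can be checked analytically: Im S = x(y+1) is exactly conserved, and Re S grows along the flow.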
A holomorphic flow that is related both to the gradient flow and the Hamiltonian flow can be constructed so as to flow from the real integration region to the thimbles, and based on this it appears to have become possible to solve some toy models with a sign problem, even going so far as to perform real-time simulations in the Keldysh-Schwinger formalism in Euclidean space (if I understood correctly).<br /><br />In the afternoon, there was a final round of parallel sessions, one of which was again dedicated to the anomalous magnetic moment of the muon, this time focusing on the very difficult hadronic light-by-light contribution, for which the Mainz group has some very encouraging first results.<br /><br />Lattice 2017, Days Three and Four (Fri, 23 Jun 2017)<br /><br />Wednesday was the customary short day, with parallel sessions in the morning, and time for excursions in the afternoon. I took the "Historic Granada" walking tour, which included visits to the Capilla Real and the very impressive Cathedral of Granada.<br /><br />The first plenary session of today had a slightly unusual format in that it was a kind of panel discussion on the topic of axions and QCD topology at finite temperature.<br /><br />After a brief outline by Mikko Laine, the session chair, the session started off with a talk by Guy Moore on the role of axions in cosmology and the role of lattice simulations in this context. Axions arise in the Peccei-Quinn solution to the strong CP problem and are a potential dark matter candidate. 
Guy presented some of his own real-time lattice simulations in classical field theory for axion fields, which exhibit the annihilation of cosmic-string-like vortex defects and associated axion production, and pointed out the need for accurate lattice QCD determinations of the topological susceptibility in the temperature range of 500-1200 MeV in order to fix the mass of the axion more precisely from the dark matter density (assuming that dark matter consists of axions).<br /><br />The following talks were all fairly short. Claudio Bonati presented algorithmic developments for simulations of the topological properties of high-temperature QCD. The long autocorrelations of the topological charge at small lattice spacings are a problem. Metadynamics, which biases the Monte Carlo evolution in a non-Markovian manner so as to sample the configuration space more efficiently, appears to be of help.<br /><br />Hidenori Fukaya reviewed the question of whether U(1)<sub>A</sub> remains anomalous at high temperature, which he claimed (both on theoretical grounds and based on numerical simulation results) it doesn't. I didn't quite understand this, since as far as I understand the axial anomaly, it is an operator identity, which will remain true even if both sides of the identity were to happen to vanish at high enough temperature, which is all that seemed to be shown; but this may just be my ignorance showing.<br /><br />Tamas Kovacs showed recent results on the temperature dependence of the topological susceptibility of QCD. By a careful choice of algorithms based on physical considerations, he could measure the topological susceptibility over a wide range of temperatures, showing that it becomes tiny at large temperature.<br /><br />Then the speakers all sat on the stage as a panel and fielded questions from the audience. 
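For readers unfamiliar with metadynamics, here is a minimal toy sketch of my own (not Bonati's actual implementation) of the idea in one dimension: Gaussian "hills" deposited along the trajectory build up a bias potential that pushes the walker out of wells it has already explored, here the wells of a double-well potential standing in for the topological sectors.

```python
import numpy as np

def metadynamics(n_steps=10000, dt=0.01, temp=0.1,
                 hill_height=0.1, hill_width=0.3, stride=50, seed=0):
    """Langevin dynamics in a double well V(x) = (x^2 - 1)^2 with a
    history-dependent bias: a Gaussian hill is deposited every `stride`
    steps at the current position, gradually filling up the wells."""
    rng = np.random.default_rng(seed)
    centers = []

    def total_grad(x):
        g = 4.0 * x * (x**2 - 1.0)          # gradient of the bare potential
        c = np.asarray(centers)
        if c.size:                           # gradient of the bias hills
            g += np.sum(-hill_height * (x - c) / hill_width**2
                        * np.exp(-0.5 * ((x - c) / hill_width)**2))
        return g

    x, xs = -1.0, []
    for step in range(n_steps):
        x += -total_grad(x) * dt + np.sqrt(2.0 * temp * dt) * rng.normal()
        if (step + 1) % stride == 0:
            centers.append(x)                # deposit a hill where we are
        xs.append(x)
    return np.array(xs), centers

xs, centers = metadynamics()
```

At this temperature the unbiased dynamics essentially never crosses the unit-height barrier, whereas the accumulated bias lets the walker escape the well it started in; the deposited hills also record (the negative of) the free-energy profile.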
Perhaps it might have been a good idea to somehow force the speakers to engage each other; as it was, the advantage of this format over simply giving each speaker a longer time for answering questions didn't immediately become apparent to me.<br /><br />After the coffee break, things returned to the normal format. Boram Yoon gave a review of lattice determinations of the neutron electric dipole moment. Almost any BSM source of CP violation must show up as a contribution to the neutron EDM, which is therefore a very sensitive probe of new physics. The very strong experimental limits on any possible neutron EDM imply bounds such as |θ|&lt;10<sup>-10</sup> in QCD, via lattice measurements of the effect of a θ term on the neutron EDM. Similarly, limits can be put on any quark EDMs or quark chromoelectric dipole moments. The corresponding lattice simulations have to deal with sign problems, and the usual techniques (Taylor expansions, simulations at complex θ) are employed to get past this, and seem to be working very well.<br /><br />The next plenary speaker was Phiala Shanahan, who showed recent results regarding the gluon structure of hadrons and nuclei. This line of research is motivated by the prospect of an electron-ion collider that would be particularly sensitive to the gluon content of nuclei. For gluonic contributions to the momentum and spin decomposition of the nucleon, there are some fresh results from different groups. For the gluonic transversity, Phiala and her collaborators have performed first studies in the φ system. The gluonic radii of small nuclei have also been looked at, with no deviation from the single-nucleon case visible at the present level of accuracy.<br /><br />The 2017 Kenneth Wilson Award was awarded to Raúl Briceño for his groundbreaking contributions to the study of resonances in lattice QCD. 
Raúl has been deeply involved both in the theoretical developments behind extending the reach of the Lüscher formalism to more and more complicated situations, and in the numerical investigations of resonance properties rendered possible by those developments.<br /><br />After the lunch break, there were once again parallel sessions, two of which were dedicated entirely to the topic of the hadronic vacuum polarization contribution to the anomalous magnetic moment of the muon, which has become one of the big topics in lattice QCD.<br /><br />In the evening, the conference dinner took place. The food was excellent, and the Flamenco dancers who arrived at midnight (we are in Spain, after all, where it seems dinner never starts before 9pm) were quite impressive.<br /><br />Lattice 2017, Day Two (Tue, 20 Jun 2017)<br /><br />Welcome back to our blog coverage of the Lattice 2017 conference in Granada.<br /><br />Today's first plenary session started with an experimental talk by Arantza Oyanguren of the LHCb collaboration on B decay anomalies at LHCb. LHCb have amassed a huge number of b-bbar pairs, which allow them to search for and study in some detail even the rarest of decay modes, and they are of course still collecting more integrated luminosity. Readers of this blog will likely recall the B<sub>s</sub> → μ<sup>+</sup>μ<sup>-</sup> branching ratio result from LHCb, which agreed with the Standard Model prediction. In the meantime, there are many similar results for branching ratios that do not agree with Standard Model predictions at the 2-3σ level, e.g. the ratios of branching fractions like Br(B<sup>+</sup>→K<sup>+</sup>μ<sup>+</sup>μ<sup>-</sup>)/Br(B<sup>+</sup>→K<sup>+</sup>e<sup>+</sup>e<sup>-</sup>), in which lepton flavour universality appears to be violated. 
Global fits to data in these channels appear to favour the new-physics hypothesis, but one should be cautious because of the "look-elsewhere" effect: when studying a very large number of channels, some will show an apparently significant deviation simply by statistical chance. On the other hand, it is very interesting that all the evidence indicating potential new physics (including the anomalous magnetic moment of the muon and the discrepancy between the muonic and electronic determinations of the proton electric charge radius) involves differences between processes involving muons and analogous processes involving electrons, an observation I'm sure model-builders made a long time ago.<br /><br />This was followed by a talk on flavour physics anomalies by Damir Bečirević. Expanding on the theoretical interpretation of the anomalies discussed in the previous talk, he explained how the data seem to indicate a violation of lepton flavour universality at a level where the Wilson coefficient C<sub>9</sub> in the effective Hamiltonian is around zero for electrons, and around -1 for muons. Experimental data seem to favour the situation where C<sub>10</sub>=-C<sub>9</sub>, which can be accommodated in certain models with a Z' boson coupling preferentially to muons, or in certain special leptoquark models with corrections at the loop level only. Since I have little (or rather no) expertise in phenomenological model-building, I have no idea how likely these explanations are.<br /><br />The next speaker was Xu Feng, who presented recent progress in kaon physics simulations on the lattice. The "standard" kaon quantities, such as the kaon decay constant or f<sub>+</sub>(0), are by now very well-determined from the lattice, with overall errors at the sub-percent level, but beyond that there are many important quantities, such as the CP-violating amplitudes in K → ππ decays, that are still poorly known and very challenging. 
RBC/UKQCD have been leading the attack on many of these observables, and have presented a possible solution to the ΔI=1/2 rule, which consists in non-perturbative effects making the amplitude A<sub>0</sub> much larger relative to A<sub>2</sub> than what would be expected from naive colour counting. Making further progress on long-distance contributions to the K<sub>L</sub>-K<sub>S</sub> mass difference or ε<sub>K</sub> will require working at the physical pion mass and treating the charm quark with good control of discretization effects. For some processes, such as K<sub>L</sub>→π<sup>0</sup>ℓ<sup>+</sup>ℓ<sup>-</sup>, even a determination of the sign of the coefficient would be desirable.<br /><br />After the coffee break, Luigi Del Debbio talked about parton distributions in the LHC era. The LHC data reduce the error on the NNLO PDFs by around a factor of two in the intermediate-x region. Conversely, the theory errors coming from the PDFs are a significant part of the total error from the LHC on Higgs physics and BSM searches. In particular, the small-x and large-x regions remain quite uncertain. On the lattice, PDFs can be determined via quasi-PDFs, in which the Wilson line inside the non-local bilinear is along a spatial direction rather than in a light-like direction. However, there are still theoretical issues to be settled in order to ensure that the renormalization and matching to the continuum really lead to the determination of continuum PDFs in the end.<br /><br />Next was a talk about chiral perturbation theory results on the multi-hadron-state contamination of nucleon observables by Oliver Bär. It is well known that until very recently, lattice calculations of the nucleon axial charge underestimated its value relative to experiment, and this has been widely attributed to excited-state effects. 
Now, Oliver has calculated the corrections from nucleon-pion states on the extraction of the axial charge in chiral perturbation theory, and has found that they should actually lead to an overestimation of the axial charge from the plateau method, at least for source-sink separations above 2 fm, where ChPT is applicable. Similarly, other nucleon charges should be overestimated by 5-10%. Of course, nobody is currently measuring in that distance regime, and so it is quite possible that higher-order corrections or effects not captured by ChPT overcompensate this and lead to an underestimation, which would however mean that there is some intermediate source-sink separation at which one gets the experimental result by accident, as it were.<br /><br />The final plenary speaker of the morning was Chia-Cheng Chang, who discussed progress towards a precise lattice determination of the nucleon axial charge, presenting the results of the CalLAT collaboration from using what they refer to as the Feynman-Hellmann method, a novel way of implementing what is essentially the summation method through ideas based on the Feynman-Hellmann theorem (but which doesn't involve simulating with a modified action, as a straightforward application of the Feynman-Hellmann theorem would demand).<br /><br />After the lunch break, there were parallel sessions, and in the evening, the poster session took place. A particularly interesting and entertaining contribution was a quiz about women's contributions to physics and computer science, whose winner would receive a bottle of wine and a book.<br /><br />Lattice 2017, Day One (Mon, 19 Jun 2017)<br /><br />Hello from Granada and welcome to our coverage of the 2017 lattice conference.<br /><br />After welcome addresses by the conference chair, a representative of the government agency in charge of fundamental research, and the rector of the university, the conference started off in a somewhat sombre mood with a commemoration of Roberto Petronzio, a pioneer of lattice QCD, who passed away last year. Giorgio Parisi gave a memorial talk summarizing Roberto's many contributions to the development of the field, from his early work on perturbative QCD and the parton model, through his pioneering contributions to lattice QCD back in the days of small quenched lattices, to his recent work on partially twisted boundary conditions and on isospin breaking effects, which is very much at the forefront of the field at the moment, not to omit Roberto's role as director of the Italian INFN in politically turbulent times.<br /><br />This was followed by a talk by Martin Lüscher on stochastic locality and master-field simulations of very large lattices. The idea of a master-field simulation is based on the observation of volume self-averaging, i.e. that the variance of volume-averaged quantities is much smaller on large lattices (intuitively, this would be because an infinitely-extended properly thermalized lattice configuration would have to contain any possible finite sub-configuration with a frequency corresponding to its weight in the path integral, and that thus a large enough typical lattice configuration is itself a sort of ensemble). A master field is then a huge (e.g. 
256<sup>4</sup>) lattice configuration, on which volume averages of quantities are computed, which have an expectation value equal to the QCD expectation value of the quantity in question, and a variance which can be estimated using a double volume sum that is doable using an FFT. To generate such huge lattices, algorithms with global accept-reject steps (like HMC) are unsuitable, because ΔH grows with the square root of the volume, but stochastic molecular dynamics (SMD) can be used, and it has been rigorously shown that for short enough trajectory lengths SMD converges to a unique stationary state even without an accept-reject step.<br /><br />After the coffee break, yet another novel simulation method was discussed by Ignacio Cirac, who presented techniques to perform quantum simulations of QED and QCD on a lattice. While quantum computers of the kind that would render RSA-based public-key cryptography irrelevant remain elusive at the moment, the idea of a quantum simulator (which is essentially an analogue quantum computer), which goes back to Richard Feynman, can already be realized in practice: optical lattices allow trapping atoms on lattice sites while fine-tuning their interactions so as to model the couplings of some other physical system, which can thus be simulated. The models that are typically simulated in this way are solid-state models such as the Hubbard model, but it is of course also possible to set up a quantum simulator for a lattice field theory that has been formulated in the Hamiltonian framework. In order to model a gauge theory, it is necessary to model the gauge symmetry by some atomic symmetry such as angular momentum conservation, and this has been done at least in theory for QED and QCD. The Schwinger model has been studied in some detail. 
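Coming back briefly to the master-field error estimate mentioned above: the "double volume sum doable using an FFT" can be sketched in a few lines (a one-dimensional toy of my own, with hypothetical helper names). The error of a volume average is estimated from a single configuration by summing the translation-averaged autocovariance up to a cutoff distance beyond which it is assumed to have decayed.

```python
import numpy as np

def autocov_fft(obs):
    """Translation-averaged connected two-point function of a periodic
    field, computed in O(V log V) with an FFT instead of a double sum."""
    d = obs - obs.mean()
    f = np.fft.fft(d)
    return np.fft.ifft(f * np.conj(f)).real / d.size

def master_field_error(obs, r_max):
    """Estimate the variance of the volume average of `obs` from this
    single configuration, summing the autocovariance over separations
    0, +-1, ..., +-r_max on the periodic lattice."""
    gamma = autocov_fft(obs)
    s = gamma[0] + 2.0 * gamma[1:r_max + 1].sum()
    return s / obs.size

rng = np.random.default_rng(1)
# toy "configuration": white noise smeared to give a short correlation length
field = np.convolve(rng.normal(size=4096), np.ones(8) / 8.0, mode="same")
err2 = master_field_error(field, r_max=32)
```

In four dimensions the same idea applies with a four-dimensional FFT and a sum over a ball of separations, which is what makes the error estimate from a single master field affordable.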
The plaquette action for d&gt;1+1 additionally requires a four-point interaction between the atoms modelling the link variables, which can be realized using additional auxiliary variables, and non-abelian gauge groups can be encoded using multiple species of bosonic atoms. A related theoretical tool that is still in its infancy, but shows significant promise, is the use of tensor networks. This is based on the observation that for local Hamiltonians the entanglement between a region and its complement grows only with the surface of the region, not its volume, so only a small corner of the total Hilbert space is relevant; this allows one to write the coefficients of the wavefunction in a basis of local states as a contraction of tensors, from which classical algorithms can be derived that scale much better than the exponential growth in the number of variables one would naively expect. Again, the method has been successfully applied to the Schwinger model, but higher dimensions are still challenging, because the scaling, while not exponential, still becomes very bad.<br /><br />Staying with the topic of advanced simulation techniques, the next talk was Leonardo Giusti speaking about the block factorization of fermion determinants into local actions for multi-boson fields. By decomposing the lattice into three pieces, of which the middle one separates the other two by a distance Δ large enough to render e<sup>-M<sub>π</sub>Δ</sup> small, and by applying a domain decomposition similar to the one used in Lüscher's DD-HMC algorithm to the Dirac operator, Leonardo and collaborators have been able to derive a multi-boson algorithm that makes it possible to perform multilevel integration with dynamical fermions. For hadronic observables, the quark propagator also needs to be factorized, which Leonardo et al. 
also have achieved, making a significant decrease in statistical error possible.<br /><br />After the lunch break there were parallel sessions, in one of which I gave my own talk and another one of which I chaired, thus finishing all of my duties other than listening (and blogging) on day one.<br /><br />In the evening, there was a reception followed by a special guided tour of the truly stunning Alhambra (which incidentally contains a great many colourful - and very tasteful - lattices in the form of ornamental patterns).<br /><br />If you speak German ... (Wed, 18 Jan 2017)<br /><br />... you might find this video amusing.<br /><br /><iframe width="560" height="349" src="https://www.youtube.com/embed/eR4ufoV9emE" frameborder="0" allowfullscreen></iframe><br /><br />Book Review: "Lattice QCD — Practical Essentials" (Thu, 12 Jan 2017)<br /><br />There is a new book about Lattice QCD, <a href="http://www.springer.com/la/book/9789402409970">Lattice Quantum Chromodynamics: Practical Essentials</a> by Francesco Knechtli, Michael Günther and Mike Peardon. At 140 pages, this is a pretty slim volume, so it is obvious that it does not aim to displace time-honoured introductory textbooks like Montvay and Münster, or the newer books by Gattringer and Lang or DeGrand and DeTar. 
Instead, as suggested by the subtitle "Practical Essentials", and as said explicitly by the authors in their preface, this book aims to prepare beginning graduate students for their practical work in generating gauge configurations and measuring and analysing correlators.<br /><br />In line with this aim, the authors spend relatively little time on the physical or field-theoretic background; while some more advanced topics such as the Nielsen-Ninomiya theorem and the Symanzik effective theory are touched upon, the treatment of foundational topics is generally quite brief, and some topics, such as lattice perturbation theory or non-perturbative renormalization, are altogether omitted. The focus of the book is on Monte Carlo simulations, for which both the basic ideas and practically relevant algorithms — heatbath and overrelaxation for pure gauge fields, and hybrid Monte Carlo for dynamical fermions — are described in some detail, including the RHMC algorithm and advanced techniques such as determinant factorizations, higher-order symplectic integrators, and multiple-timescale integration. The techniques from linear algebra required to deal with fermions are also covered in some detail, from the basic ideas of Krylov space methods through concrete descriptions of the GMRES and CG algorithms, along with such important preconditioners as even-odd and domain decomposition, to the ideas of algebraic multigrid methods. Stochastic estimation of all-to-all propagators with dilution, the one-end trick and low-mode averaging are explained, as are techniques for building interpolating operators with specific quantum numbers, gauge link and quark field smearing, and the use of the variational method to extract hadronic mass spectra. Scale setting, the Wilson flow, and Lüscher's method for extracting scattering phase shifts are also discussed briefly, as are the basic statistical techniques for data analysis. 
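As a small taste of the linear-algebra material, here is a bare-bones sketch of the conjugate gradient algorithm (a generic textbook version of my own, not code from the book), which in lattice QCD is the workhorse for inverting the hermitian positive-definite squared Dirac operator.

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient for A x = b, A symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # new direction, A-orthogonal to the old
        rs = rs_new
    return x

# small dense SPD test system standing in for the (much sparser) Dirac operator
rng = np.random.default_rng(0)
M = rng.normal(size=(20, 20))
A = M @ M.T + 20 * np.eye(20)
b = rng.normal(size=20)
x = cg(A, b)
```

In production codes the matrix is of course never formed explicitly; only the matrix-vector product (the application of the Dirac operator to a field) is implemented, which is exactly the interface CG needs.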
Each chapter contains a list of references to the literature covering both original research articles and reviews and textbooks for further study.<br /><br />Overall, I feel that the authors succeed very well at their stated aim of giving a quick introduction to the methods most relevant to current research in lattice QCD in order to let graduate students hit the ground running and get to perform research as quickly as possible. In fact, I am slightly worried that they may turn out to be too successful, since a graduate student having studied only this book could well start performing research, while having only a very limited understanding of the underlying field-theoretical ideas and problems (a problem that already exists in our field in any case). While this in no way detracts from the authors' achievement, and while I feel I can recommend this book to beginners, I nevertheless have to add that it should be complemented by a more field-theoretically oriented traditional textbook for completeness.<br /><br />___<br /><i>Note that I have deliberately not linked to the Amazon page for this book. Please support your local bookstore — nowadays, you can usually order online on their websites, and many bookstores are more than happy to ship books by post.</i><br />http://latticeqcd.blogspot.com/2017/01/book-review-lattice-qcd-practical.htmlnoreply@blogger.com (Georg v. Hippel)1tag:blogger.com,1999:blog-8669468.post-3553126744678770882Sat, 30 Jul 2016 20:50:00 +00002016-07-30T21:51:40.802+01:00conferencestravelLattice 2016, Day SixThe final day of the conference started with a review talk by Claudio Pica on lattice simulations trying to chart the fundamental physics beyond the Standard Model. The problem with the SM is perhaps to some extent how well it works, given that we know it must be incomplete. One of the main contenders for replacing it is the notion of strong dynamics at a higher energy scale giving rise to the Higgs boson as a composite particle. 
The most basic "technicolor" theories of this kind fail because they cannot account for the relatively large masses of the second- and third-generation quarks. To avoid that problem, the coupling of the technicolor gauge theory must not be running, but "walking" slowly from high to low energy scales, which has given rise to a veritable industry of lattice simulations investigating the β function of various gauge theories coupled to various numbers of fermions in various representations. The Higgs can then be either a dilaton associated with the breaking of conformal symmetry, which would naturally couple like a Standard Model Higgs, or a pseudo-Goldstone boson associated with the breaking of some global flavour symmetry. So far, nothing very conclusive has resulted, but of course the input from experiment at the moment only consists of limits ruling some models out, but not allowing for any discrimination between those models that aren't ruled out.<br /><br />A specific example of BSM physics, <i>viz.</i> strongly interacting dark matter, was presented in a talk by Enrico Rinaldi. If there is a new strongly-coupled interaction, as suggested by the composite Higgs models, then besides the Higgs there will also be other bound states, some of which may be stable and provide a dark matter candidate. While the "dark" nature of dark matter requires such a bound state to be neutral, the constituents might interact with the SM sector, allowing for the production and detection of dark matter. Many different models of composite dark matter have been considered, and the main limits currently come from the non-detection of dark matter in searches, which put limits on the "hadron-structure" observables of the dark matter candidates, such as their σ-terms and charge radii.<br /><br />David Kaplan gave a talk on a new perspective on chiral gauge theories, the lattice formulation of which has always been a persistent problem, largely due to the Nielsen-Ninomiya theorem.
However, the fermion determinant of chiral gauge theories is already somewhat ill-defined even in the continuum. A way to make it well-defined has been proposed by Alvarez-Gaumé <i>et al.</i> through the addition of an ungauged right-handed fermion. On the lattice, the U(1)<sub>A</sub> anomaly is found to emerge as the remnant of the explicit breaking of chiral symmetry by e.g. the Wilson term in the limit of vanishing lattice spacing. Attempts at realizing ungauged mirror fermions using domain wall fermions with a gauge field constrained to near one domain wall have failed, and a realization using the gradient flow in the fifth dimension turns the mirror fermions into "fluff". A new realization along the lines of the overlap operator gives a lattice operator very similar to that of Alvarez-Gaumé by coupling the mirror fermion to a fixed point of the gradient flow, which is a pure gauge.<br /><br />After the coffee break, Tony Hey gave a very entertaining, if somewhat meandering, talk about "Richard Feynman, Data-Intensive Science and the Future of Computing" going all the way from Feynman's experiences at Los Alamos to AI singularity scenarios and the security aspects of self-driving cars.<br /><br />The final plenary talk was the review talk on machines and algorithms by Peter Boyle. The immediate roadmap for new computer architectures shows increases of around 400 times in the single-precision performance per node, and a two-fold increase in the bandwidth of interconnects, and this must be taken into account in algorithm design and implementation in order to achieve good scaling behaviour. Large increases in chip performance are to be expected from three-dimensional arrangement of units, which will allow thicker and shorter copper wires, although there remain engineering problems to solve, such as how to efficiently get the heat out of such chips.
In terms of algorithms, multigrid solvers are now becoming available for a larger variety of fermion formulations, leading to potentially great increases in performance near the chiral and continuum limits. Multilevel integration methods, which allow for an exponential reduction of the noise, also look interesting, although at the moment these work only in the quenched theory.<br /><br />The IAC announced that Lattice 2018 will take place at Michigan State University. Elvira Gamiz as the chair of the Lattice 2017 LOC extended an invitation to the lattice community to come to Granada for <a href="http://www.lattice2017.es/">Lattice 2017</a>, which will take place in the week 18-24 June 2017. And with that, and a round of well-deserved applause for the organizers, the conference closed.<br /><br />My further travel plans are of interest only to a small subset of my readers, and need not be further elaborated upon in this venue.<br />http://latticeqcd.blogspot.com/2016/07/lattice-2016-day-six.htmlnoreply@blogger.com (Georg v. Hippel)0tag:blogger.com,1999:blog-8669468.post-1312340437708869420Fri, 29 Jul 2016 21:26:00 +00002016-07-29T22:26:20.682+01:00conferencesLattice 2016, Day FiveToday was the day of finite temperature and density, on which the general review talk was delivered by Heng-Tong Ding. While in the meantime agreement has been reached on the transition temperature, the nature of the transition (crossover) and the equation of state at the physical quark masses, on which different formulations differed a lot in the past, the Columbia plot of the nature of the transition as a function of the light and strange quark masses still remains to be explored, and there are discrepancies between results obtained in different formulations. 
On the topic of U(1)<sub>A</sub> restoration (on which I do have a layman's question: to my understanding U(1)<sub>A</sub> is broken by the axial anomaly, which arises from the path integral measure - so why should one expect the symmetry to be restored at high temperature? The situation is quite different from dynamical spontaneous symmetry breaking, as far as I understand), there is no evidence for restoration so far. A number of groups have taken to using the gradient flow as a tool to perform relatively cheap investigations of the equation of state. There are also new results from the different approaches to finite-density QCD, including cumulants from the Taylor-expansion approach, which can be related to heavy-ion observables, and new ways of stabilizing complex Langevin dynamics.<br /><br />This was followed by two topical talks. The first, by Seyong Kim, was on the subject of heavy flavours at finite temperature. Heavy flavours are one of the most important probes of the quark-gluon plasma, and J/ψ suppression has served as a diagnostic tool of QGP formation for a long time. To understand the influence of high temperatures on the survival of quarkonium states and on the transport properties of heavy flavours in the QGP, knowledge of the spectral functions is needed. Unfortunately, extracting these from a finite number of points in Euclidean time is an ill-posed problem, especially so when the time extent is small at high temperature.
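Just how badly conditioned this inversion is can be seen directly by discretizing the spectral representation G(τ) = ∫ dω K(τ,ω) ρ(ω); a small numpy sketch (kernel and grids are illustrative choices of my own):

```python
import numpy as np

# Discretize G(tau) = int domega K(tau, omega) rho(omega).  For simplicity a
# zero-temperature kernel K = exp(-omega*tau) is used here; the finite-
# temperature cosh-type kernel makes matters worse, not better.
tau = np.arange(1, 17)               # 16 Euclidean time slices (illustrative)
omega = np.linspace(0.05, 4.0, 50)   # frequency grid (illustrative)
K = np.exp(-np.outer(tau, omega))

# The singular values of K decay (nearly) exponentially, so inverting K
# amplifies any statistical noise on the correlator beyond control.
print(f"condition number of the kernel matrix: {np.linalg.cond(K):.1e}")
```

Since the condition number is astronomically large, any direct inversion of noisy correlator data is hopeless, which is why prior information has to be injected in some form.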
The methods used to get at them nevertheless, such as the maximum entropy method or Bayesian fits, need to use some kind of prior information, introducing the risk of a methodological bias leading to systematic errors that may be not only quantitative, but even qualitative; as an example, MEM shows P-wave bottomonium to melt around the transition temperature, whereas a newer Bayesian method shows it to survive, so clearly more work is needed.<br /><br />The second topical talk was Kurt Langfeld speaking about the density-of-states method. This method is based on determining a function ρ(E), which is essentially the path integral of δ(S[φ]-E), such that the partition function can be written as the Laplace transform of ρ, which can be generalized to the case of actions with a sign problem, where the partition function can then be written as the Fourier transform of a function P(s). An algorithm to compute such functions exists in the form of what looks like a sort of microcanonical simulation in a window [E-δE;E+δE] and determines the slope of ρ at E, whence ρ can be reconstructed. Ergodicity is ensured by having the different windows overlap and running in parallel, with a possibility of "replica exchange" between the processes running for neighbouring windows when configurations within the overlap between them are generated. The examples shown, e.g. for the Potts model, looked quite impressive in that the method appears able to resolve double-peak structures even when the trough between the peaks is suppressed by many orders of magnitude, such that a Markov process would have no chance of crossing between the two probability peaks.<br /><br />After the coffee break, Aleksi Kurkela reviewed the phenomenology of heavy ions. The flow properties that were originally taken as a sign of hydrodynamics having set in are now also observed in pp collisions, which seem unlikely to be hydrodynamical. 
In understanding and interpreting these results, the pre-equilibration evolution is an important source of uncertainty; the current understanding seems to be that the system goes from an overoccupied to an underoccupied state before thermalizing, making different descriptions necessary at different times. At early times, simulations of classical Yang-Mills theory on a lattice in proper-time/rapidity coordinates are used, whereas later a quasiparticle description and kinetic theory can be applied; all this seems to be qualitative so far.<br /><br />The energy momentum tensor, which plays an important role in thermodynamics and hydrodynamics, was the topic of the last plenary of the day, which was given by Hiroshi Suzuki. Translation invariance is broken on the lattice, so the Ward-Takahashi identity for the energy-momentum tensor picks up an O(a) violation term, which can become O(1) by radiative corrections. As a consequence, three different renormalization factors are needed to renormalize the energy-momentum tensor. One way of getting at these are the shifted boundary conditions of Giusti and Meyer, another is the use of the gradient flow at short flow times, and there are first results from both methods.<br /><br />The parallel sessions of the afternoon concluded the parallel programme.<br /><br /><br />http://latticeqcd.blogspot.com/2016/07/lattice-2016-day-five.htmlnoreply@blogger.com (Georg v. Hippel)0tag:blogger.com,1999:blog-8669468.post-7399696396766454303Thu, 28 Jul 2016 22:59:00 +00002016-07-29T17:37:26.483+01:00conferencesLattice 2016, Days Three and FourFollowing the canonical script for lattice conferences, yesterday was the day without plenaries. 
Instead, the morning was dedicated to parallel sessions (including my own talk), and the afternoon was free time with the option of taking one of several arranged excursions.<br /><br />I went on the excursion to Salisbury cathedral (which is notable both for its fairly homogeneous and massive architectural ensemble, and for being home to one of four original copies of the Magna Carta) and Stonehenge (which in terms of diameter seems to be much smaller than I had expected from photos).<br /><br />Today began with the traditional non-lattice theory talk, which was given by Monika Blanke, who spoke about the impact of lattice QCD results on CKM phenomenology. Since quarks cannot be observed in isolation, the extraction of CKM matrix elements from experimental results always requires knowledge of the appropriate hadronic matrix elements of the currents involved in the measured reaction. This means that lattice results for the form factors of heavy-to-light semileptonic decays and for the hadronic parameters governing neutral kaon and B meson mixing are of crucial importance to CKM phenomenology, to the extent that there is even a sort of "wish list" to the lattice. There has long been a discrepancy between the values of both |V<sub>cb</sub>| and |V<sub>ub</sub>| extracted from inclusive and exclusive decays, respectively, and the ratio |V<sub>ub</sub>/V<sub>cb</sub>| that can be extracted from decays of Λ<sub>b</sub> baryons only adds to the tension. However, this is likely to be a result of underestimated theoretical uncertainties or experimental issues, since the pattern of the discrepancies is not in agreement with that which would result from new physics effects induced by right-handed currents. General models of flavour-violating new physics seem to favour the inclusive value for |V<sub>ub</sub>|. In b->s transitions, there is evidence for new physics effects at the 4σ level, but significant theoretical uncertainties remain.
The B<sub>(s)</sub>->μ<sup>+</sup>μ<sup>-</sup> branching fractions are currently in agreement with the SM at the 2σ level, but new, more precise measurements are forthcoming.<br /><br />Ran Zhou complemented this with a review talk about heavy flavour results from the lattice, where there are new results from a variety of different approaches (NRQCD, HQET, Fermilab and Columbia RHQ formalisms), which can serve as useful and important cross-checks on each other's methodological uncertainties.<br /><br />Next came a talk by Amy Nicholson on neutrinoless double β decay results from the lattice. Neutrinoless double β decays are possible if neutrinos are Majorana particles, which would help to explain the small masses of the observed left-handed neutrinos through the see-saw mechanism pushing the right-handed neutrinos off to near the GUT scale. Treating the double β decay in the framework of a chiral effective theory, the leading-order matrix element required is that of the process π<sup>-</sup>->π<sup>+</sup>e<sup>-</sup>e<sup>-</sup>, for which there are first results in lattice QCD. The NLO process would have disconnected diagrams, but cannot contribute to the 0<sup>+</sup>->0<sup>+</sup> transitions which are experimentally studied, whereas the NNLO process involves two-nucleon operators and still remains to be studied in greater detail on the lattice.<br /><br />After the coffee break, Agostino Patella reviewed the hot topic of QED corrections to hadronic observables. There are currently two main methods for dealing with QED in the context of lattice simulations: either to simulate QCD+QED directly (usually at unphysically large electromagnetic couplings followed by an extrapolation to the physical value of α=1/137), or to expand it in powers of α and to measure only the resulting correlation functions (which will be four-point functions or higher) in lattice QCD.
Both approaches have been used to obtain some already very impressive results on isospin-breaking QED effects in the hadronic spectrum, as shown already in the spectroscopy review talk. There are, however, still a number of theoretical issues connected to the regularization of IR modes that relate to the Gauss law constraint that would forbid the existence of a single charged particle (such as a proton) in a periodic box. The prescriptions to evade this problem all lead to a non-commutativity of limits requiring the infinite-volume limit to be taken before other limits (such as the continuum or chiral limits): QED<sub>TL</sub>, which omits the global zero modes of the photon field, is non-local and does not have a transfer matrix; QED<sub>L</sub>, which omits the spatial zero modes on each timeslice, has a transfer matrix, but is still non-local and renormalizes in a non-standard fashion, such that it does not have a non-relativistic limit; the use of a massive photon leads to a local theory with softly broken gauge symmetry, but still requires the infinite-volume limit to be taken before removing the photon mass. Going beyond hadron masses to decays introduces new IR problems, which need to be treated in the Bloch-Nordsieck way, leading to potentially large logarithms.<br /><br />The 2016 Ken Wilson Lattice Award was awarded to Antonin Portelli for his outstanding contributions to our understanding of electromagnetic effects on hadron properties. Antonin was one of the driving forces behind the BMW collaboration's effort to determine the proton-neutron mass difference, which resulted in a <i>Science</i> paper exhibiting one of the most frequently-shown and impressive spectrum plots at this conference.<br /><br />In the afternoon, parallel sessions took place, and in the evening there was a (very nice) conference dinner at the Southampton F.C. 
football stadium.<br /><br />http://latticeqcd.blogspot.com/2016/07/lattice-2016-days-three-and-four.htmlnoreply@blogger.com (Georg v. Hippel)0tag:blogger.com,1999:blog-8669468.post-7733595512774224540Tue, 26 Jul 2016 20:32:00 +00002016-07-26T21:32:12.667+01:00conferencesLattice 2016, Day TwoHello again from Lattice 2016 at Southampton. Today's first plenary talk was the review of nuclear physics from the lattice given by Martin Savage. Doing nuclear physics from first principles in QCD is obviously very hard, but also necessary in order to truly understand nuclei in theoretical terms. Examples of needed theory predictions include the equation of state of dense nuclear matter, which is important for understanding neutron stars, and the nuclear matrix elements required to interpret future searches for neutrinoless double β decays in terms of fundamental quantities. The problems include the huge number of required quark-line contractions and the exponentially decaying signal-to-noise ratio, but there are theoretical advances that increasingly allow these to be brought under control. The main competing procedures are more or less direct applications of the Lüscher method to multi-baryon systems, and the HALQCD method of computing a nuclear potential from Bethe-Salpeter amplitudes and solving the Schrödinger equation for that potential. There has been a lot of progress in this field, and there are now first results for nuclear reaction rates.<br /><br />Next, Mike Endres spoke about new simulation strategies for lattice QCD. One of the major problems in going to very fine lattice spacings is the well-known phenomenon of critical slowing-down, i.e. the divergence of the autocorrelation times with some negative power of the lattice spacing, which is particularly severe for the topological charge (a quantity that cannot change at all in the continuum limit), leading to the phenomenon of "topology freezing" in simulations at fine lattice spacings.
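The severity of this slowing-down is conventionally quantified by the integrated autocorrelation time τ<sub>int</sub> of an observable along the Markov chain; here is a quick sketch of its estimation on a synthetic chain, with an AR(1) process standing in for a slowly-moving topological charge history (purely illustrative, not simulation data):

```python
import numpy as np

def tau_int(x, w_max=200):
    """Integrated autocorrelation time, summed up to a fixed window w_max."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    c0 = np.dot(x, x) / len(x)
    # normalized autocorrelation function rho(t) for t = 1 .. w_max-1
    rho = [np.dot(x[:-t], x[t:]) / (len(x) * c0) for t in range(1, w_max)]
    return 0.5 + sum(rho)

# synthetic "topological charge" history: AR(1) with persistence phi,
# for which the exact result is tau_int = 0.5 * (1 + phi) / (1 - phi)
rng = np.random.default_rng(1)
phi = 0.95
noise = rng.normal(size=200_000)
q = np.empty_like(noise)
q[0] = 0.0
for i in range(1, len(q)):
    q[i] = phi * q[i - 1] + noise[i]

print(f"estimated tau_int: {tau_int(q):.1f} "
      f"(exact for this process: {0.5 * (1 + phi) / (1 - phi):.1f})")
```

In a real analysis one would choose the summation window self-consistently (e.g. à la Madras-Sokal) rather than fixing it by hand as done here.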
To overcome this problem, changes in the boundary conditions have been proposed: open boundary conditions that allow topological charge to move into and out of the system, and non-orientable boundary conditions that destroy the notion of an integer topological charge. An alternative route lies in algorithmic modifications such as metadynamics, where a potential bias is introduced to disfavour revisiting configurations, so as to forcibly sample across the potential wells of different topological sectors over time, or multiscale thermalization, where a Markov chain is first run at a coarse lattice spacing to obtain well-decorrelated configurations, and then each of those is subjected to a refining operation to obtain a (non-thermalized) gauge configuration at half the lattice spacing, each of which can then hopefully be thermalized by a short sequence of Monte Carlo update operations.<br /><br />As another example of new algorithmic ideas, Shinji Takeda presented tensor networks, which are mathematical objects that assign a tensor to each site of a lattice, with lattice links denoting the contraction of tensor indices. An example is given by the rewriting of the partition function of the Ising model that is at the heart of the high-temperature expansion, where the sum over the spin variables is exchanged against a sum over link variables taking values of 0 or 1. One of the applications of tensor networks in field theory is that they allow for an implementation of the renormalization group based on performing a tensor decomposition along the lines of a singular value decomposition, which can be truncated, and contracting the resulting approximate tensor decomposition into new tensors living on a coarser grid.
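In one dimension the coarse-graining idea can be shown in a few lines: the site tensor is just the 2×2 Ising transfer matrix, and squaring it merges two sites into one coarser site, so a periodic chain of L = 2<sup>n</sup> spins is contracted down to a single tensor in n steps (my own toy illustration; the 2D case additionally needs the SVD truncation just described):

```python
import numpy as np

def log_Z_ising_1d(beta, n_steps):
    """ln Z for a periodic 1D Ising chain of L = 2**n_steps spins,
    computed by repeatedly squaring (coarse-graining) the transfer matrix."""
    T = np.array([[np.exp(beta), np.exp(-beta)],
                  [np.exp(-beta), np.exp(beta)]])
    log_norm = 0.0                    # accumulated logarithm of normalizations
    for _ in range(n_steps):          # each step halves the number of sites
        norm = np.linalg.norm(T)
        T = (T / norm) @ (T / norm)   # normalize to avoid floating overflow
        log_norm = 2 * (log_norm + np.log(norm))
    return np.log(np.trace(T)) + log_norm

# cross-check against the exact transfer-matrix eigenvalue result
beta, n = 0.5, 10                     # L = 1024 spins contracted in 10 steps
l1, l2 = 2 * np.cosh(beta), 2 * np.sinh(beta)
exact = 2**n * np.log(l1) + np.log1p((l2 / l1) ** 2**n)
```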
Iterating this procedure until only one lattice site remains allows the evaluation of partition functions without running into any sign problems and at only O(log <i>V</i>) effort.<br /><br />After the coffee break, Sara Collins gave the review talk on hadron structure. This is also a field in which a lot of progress has been made recently, with most of the sources of systematic error either under control (e.g. by performing simulations at or near the physical pion mass) or at least well understood (e.g. excited-state and finite-volume effects). The isovector axial charge <i>g<sub>A</sub></i> of the nucleon, which for a long time was a bit of an embarrassment to lattice practitioners, since it stubbornly refused to approach its experimental value, is now understood to be particularly severely affected by excited-state effects, and once these are well enough suppressed or properly accounted for, the situation now looks quite promising. This lends much larger credibility to lattice predictions for the scalar and tensor nucleon charges, for which little or no experimental data exists. The electromagnetic form factors are also in much better shape than one or two years ago, with the electric Sachs form factor coming out close to experiment (but still with insufficient precision to resolve the conflict between the experimental electron-proton scattering and muonic hydrogen results), while now the magnetic Sachs form factor shows a trend to undershoot experiment. Going beyond isovector quantities (in which disconnected diagrams cancel), the progress in simulation techniques for disconnected diagrams has enabled the first computation of the purely disconnected strangeness form factors. 
The sigma term σ<sub>πN</sub> comes out smaller on the lattice than it does in experiment, which still needs investigation, and the average momentum fraction <<i>x</i>> still needs to become the subject of a similar effort as the nucleon charges have received.<br /><br />In keeping with the pattern of having large review talks immediately followed by a related topical talk, Huey-Wen Lin was next with a talk on the Bjorken-<i>x</i> dependence of the parton distribution functions (PDFs). While the PDFs are defined on the lightcone, which is not readily accessible on the lattice, a large-momentum effective theory formulation allows one to obtain them as the infinite-momentum limit of finite-momentum parton distribution amplitudes. First studies show interesting results, but renormalization still remains to be performed.<br /><br />After lunch, there were parallel sessions, of which I attended the ones into which most of the <i>(g-2)</i> talks had been collected, showing quite a rate of progress, in particular in the treatment of the disconnected contributions.<br /><br />In the evening, the poster session took place.<br /><br />http://latticeqcd.blogspot.com/2016/07/lattice-2016-day-two.htmlnoreply@blogger.com (Georg v. Hippel)0tag:blogger.com,1999:blog-8669468.post-6754011095960953956Mon, 25 Jul 2016 21:58:00 +00002016-07-25T23:05:04.594+01:00conferencestravelLattice 2016, Day OneHello from Southampton, where I am attending the Lattice 2016 conference.<br /><br />I arrived yesterday safe and sound, but unfortunately too late to attend the welcome reception.
Today started off early and quite well with a full English breakfast, however.<br /><br />The conference programme was opened with a short address by the university's Vice-President of Research, who made a point of pointing out that he, like 93% of UK scientists, had voted to remain in the EU - an interesting testimony to the political state of affairs, I think.<br /><br />The first plenary talk of the conference was a memorial to the scientific legacy of Peter Hasenfratz, who died earlier this year, delivered by Urs Wenger. Peter Hasenfratz was one of the pioneers of lattice field theory, and hearing of his groundbreaking achievements is one of those increasingly rare occasions when I get to feel very young: when he organized the first lattice symposium in 1982, he sent out individual hand-written invitations, and the early lattice reviews he wrote were composed in a time when most results were obtained in the quenched approximation. But his achievements are still very much current, amongst other things in the form of fixed-point actions as a realization of the Ginsparg-Wilson relation, which gave rise to the booming interest in chiral fermions.<br /><br />This was followed by the review of hadron spectroscopy by Chuan Liu. The contents of the spectroscopy talks have by now shifted away from the ground-state spectrum of stable hadrons, the calculation of which has become more of a benchmark task, and towards more complex issues, such as the proton-neutron mass difference (which requires the treatment of isospin breaking effects both from QED and from the difference in bare mass of the up and down quarks) or the spectrum of resonances (which requires a thorough study of the volume dependence of excited-state energy levels via the Lüscher formalism).
The former is required as part of the physics answer to the ageless question of why anything exists at all, and the latter is called for in particular by the still-pressing question of the nature of the XYZ states.<br /><br />Next came a talk by David Wilson on a more specific spectroscopy topic, namely resonances in coupled-channel scattering. Getting these right requires not only extensions of the Lüscher formalism, but also the extraction of very large numbers of energy levels via the generalized eigenvalue problem.<br /><br />After the coffee break, Hartmut Wittig reviewed the lattice efforts at determining the hadronic contributions to the anomalous magnetic moment (g-2)<sub>μ</sub> of the muon from first principles. This is a very topical problem, as the next generation of muon experiments will reduce the experimental error by a factor of four or more, which will require a correspondingly large reduction in the theoretical uncertainties in order to interpret the experimental results. Getting to this level of accuracy requires getting the hadronic vacuum polarization contribution to sub-percent accuracy (which requires full control of both finite-volume and cut-off effects, and a reasonably accurate estimate for the disconnected contributions) and the hadronic light-by-light scattering contribution to an accuracy of better than 10% (which some way or another requires the calculation of a four-point function including a reasonable estimate for the disconnected contributions). There has been good progress towards both of these goals from a number of different collaborations, and the generally good overall agreement between results obtained using widely different formulations bodes well for the overall reliability of the lattice results, but there are still many obstacles to overcome.
As with most stringy talks, I have to confess to being far too ignorant to give a good summary; what I took home is that there is some kind of string worldsheet theory with Goldstone bosons that can be used to describe the spectrum of large-N<sub>c</sub> gauge theory, and that there are a number of theoretical surprises there.<br /><br />Since the plenary programme is being <a href="http://www.southampton.ac.uk/lattice2016/plenary-streaming/">streamed</a> on the web, by the way, even those of you who cannot attend the conference can now do without my no doubt quite biased and very limited summaries and hear and see the talks for yourselves.<br /><br />After lunch, parallel sessions took place. I found the sequence of talks by Stefan Sint, Alberto Ramos and Rainer Sommer about a precise determination of α<sub>s</sub>(M<sub>Z</sub>) using the Schrödinger functional and the gradient-flow coupling very interesting.<br />http://latticeqcd.blogspot.com/2016/07/lattice-2016-day-one.htmlnoreply@blogger.com (Georg v. Hippel)0tag:blogger.com,1999:blog-8669468.post-2104748736122290844Tue, 15 Sep 2015 11:56:00 +00002015-10-03T20:13:12.118+01:00conferencesMITPFundamental Parameters from Lattice QCD, Last DaysThe last few days of our scientific programme were quite busy for me, since I had agreed to give the summary talk on the final day. 
I therefore did not get around to blogging, and will keep this much-delayed summary rather short.<br /><br />On Wednesday, we had a talk by Michele Della Morte on non-perturbatively matched HQET on the lattice and its use to extract the b quark mass, and a talk by Jeremy Green on the lattice measurement of the nucleon strange electromagnetic form factors (which are purely disconnected quantities).<br /><br />On Thursday, Sara Collins gave a review of heavy-light hadron spectra and decays, and Mike Creutz presented arguments for why the question of whether the up-quark is massless is scheme dependent (because the sum and difference of the light quark masses are protected by symmetries, but will in general renormalize differently).<br /><br />On Friday, I gave the summary of the programme. The main themes that I identified were the question of how to estimate systematic errors, and how to treat them in averaging procedures, the issues of isospin breaking and scale setting ambiguities as major obstacles on the way to sub-percent overall precision, and the need for improved communication between the "producers" and "consumers" of lattice results. In the closing discussion, the point was raised that for groups like CKMfitter and UTfit the correlations between different lattice quantities are very important, and that lattice collaborations should provide the covariance matrices of the final results for different observables that they publish wherever possible.http://latticeqcd.blogspot.com/2015/09/fundamental-parameters-from-lattice-qcd_15.htmlnoreply@blogger.com (Georg v. Hippel)0tag:blogger.com,1999:blog-8669468.post-5972860561427219574Wed, 09 Sep 2015 20:00:00 +00002015-09-15T12:55:40.616+01:00conferencesMITPFundamental Parameters from Lattice QCD, Day SevenToday's programme featured two talks about the interplay between the strong and the electroweak interactions. 
The first speaker was Gregorio Herdoíza, who reviewed the determination of hadronic corrections to electroweak observables. In essence, these determinations are all very similar to the determination of the leading hadronic correction to (g-2)<sub>μ</sub> since they involve the lattice calculation of the hadronic vacuum polarisation. In the case of the electromagnetic coupling α, its low-energy value is known to a precision of 0.3 ppb, but the value of α(m<sub>Z</sub><sup>2</sup>) is known only to 0.1 ‰, and the larger part of this difference in precision is due to the hadronic contribution to the running of α, i.e. the hadronic vacuum polarization. Phenomenologically this can be estimated through the R-ratio, but this results in relatively large errors at low Q<sup>2</sup>. On the lattice, the hadronic vacuum polarization can be measured through the correlator of vector currents, and currently a determination of the running of α in agreement with phenomenology and with similar errors can be achieved, so that in the future lattice results are likely to take the lead here. In the case of the electroweak mixing angle, sin<sup>2</sup>θ<sub>w</sub> is known well at the Z pole, but only poorly at low energy, although a number of experiments (including the P2 experiment at Mainz) are aiming to reduce the uncertainty at lower energies. Again, the running can be determined from the Z-γ mixing through the associated current-current correlator, and current efforts are under way, including an estimation of the systematic error caused by the omission of quark-disconnected diagrams.<br /><br />The second speaker was Vittorio Lubicz, who looked at the opposite problem, i.e. the electroweak corrections to hadronic observables.
Since α ≈ 1/137, electromagnetic corrections at the one-loop level will become important once the 1% level of precision is aimed for, and since the up and down quarks have different electrical charges, this is an isospin-breaking effect which necessitates simultaneously considering the strong isospin breaking caused by the difference in the up and down quark masses. There are two main methods to include QED effects into lattice simulations; the first is the direct simulation of QCD+QED, and the second is the method of incorporating isospin-breaking effects in a systematic expansion pioneered by Vittorio and colleagues in Rome. Either method requires a systematic treatment of the IR divergences arising from the lack of a mass gap in QED. In the Rome approach this is done through splitting the Bloch-Nordsieck treatment of IR divergences and soft bremsstrahlung into two pieces, whose large-volume limits can be taken separately. There are many other technical issues to be dealt with, but first physical results from this method should be forthcoming soon.<br /><br />In the afternoon there was a discussion about QED effects and the range of approaches used to treat them.http://latticeqcd.blogspot.com/2015/09/fundamental-parameters-from-lattice-qcd_9.htmlnoreply@blogger.com (Georg v. Hippel)0tag:blogger.com,1999:blog-8669468.post-2620838725276882962Mon, 07 Sep 2015 17:00:00 +00002015-09-07T18:00:02.460+01:00conferencesMITPFundamental Parameters from Lattice QCD, Day SixThe second week of our Scientific Programme started with an influx of new participants.<br /><br />The first speaker of the day was Chris Kelly, who spoke about CP violation in the kaon sector from lattice QCD. As I hardly need to tell my readers, there are two sources of CP violation in the kaon system, the indirect CP-violation from neutral kaon-antikaon mixing, and the direct CP-violation from K->ππ decays.
Both, however, ultimately stem from the single source of CP violation in the Standard Model, i.e. the complex phase e<sup>iδ</sup> in the CKM matrix, which gives the area of the unitarity triangle. The hadronic parameter relevant to indirect CP-violation is the kaon bag parameter B<sub>K</sub>, which is a "gold-plated" quantity that can be very well determined on the lattice; however, the error on the CP violation parameter ε<sub>K</sub> constraining the upper vertex of the unitarity triangle is dominated by the uncertainty on the CKM matrix element V<sub>cb</sub>. Direct CP-violation is particularly sensitive to possible BSM effects, and is therefore of special interest. Chris presented the recent efforts of the RBC/UKQCD collaboration to address the extraction of the relevant parameter ε'/ε and associated phenomena such as the ΔI=1/2 rule. For the two amplitudes A<sub>0</sub> and A<sub>2</sub>, different tricks and methods were required; in particular for the isospin-zero channel, all-to-all propagators are needed. The overall errors are still large: although the systematics are dominated by the perturbative matching to the MSbar scheme, the statistical errors are very sizable, so that the observed 2.1σ tension with experiment is not particularly exciting or disturbing yet.<br /><br />The second speaker of the morning was Gunnar Bali, who spoke about the topic of renormalons. It is well known that the perturbative series for quantum field theories are in fact divergent asymptotic series, whose typical term will grow like <i>n<sup>k</sup>z<sup>n</sup>n!</i> for large orders <i>n</i>. Using the Borel transform, such series can be resummed, provided that there are no poles (IR renormalons) of the Borel transform on the positive real axis. In QCD, such poles arise from IR divergences in diagrams with chains of bubbles inserted into gluon lines, as well as from instanton-antiinstanton configurations in the path integral.
The latter can be removed to infinity by considering the large-<i>N<sub>c</sub></i> limit, but the former are there to stay, making perturbatively defined quantities ambiguous at higher orders. A relevant example is given by heavy quark masses, where the different definitions (pole mass, MSbar mass, 1S mass, ...) are related by perturbative conversion factors; in a heavy-quark expansion, the mass of a heavy-light meson can be written as <i>M=m+Λ+O(1/m)</i>, where <i>m</i> is the heavy quark mass, and Λ a binding energy of the order of some QCD energy scale. As <i>M</i> is unambiguous, the ambiguities in <i>m</i> must correspond to ambiguities in the binding energy Λ, which can be computed to high orders in numerical stochastic perturbation theory (NSPT). After dealing with some complications arising from the fact that IR divergences cannot be probed directly in a finite volume, it is found that the minimum term in the perturbative series (which corresponds to the perturbative ambiguity) is of order 180 MeV in the quenched theory, meaning that heavy quark masses are only defined up to this accuracy. Another example is the gluon condensate (which may be of relevance to the extraction of α<sub>s</sub> from τ decays), where it is found that the ambiguity is of the same size as the typically quoted result, making the usefulness of this quantity doubtful.http://latticeqcd.blogspot.com/2015/09/fundamental-parameters-from-lattice-qcd_7.htmlnoreply@blogger.com (Georg v. Hippel)0tag:blogger.com,1999:blog-8669468.post-7418913630200480815Fri, 04 Sep 2015 18:00:00 +00002015-09-07T13:18:03.740+01:00conferencesMITPFundamental Parameters from Lattice QCD, Day FiveThe first speaker today was Martin Lüscher, who spoke about revisiting numerical stochastic perturbation theory.
The idea behind numerical stochastic perturbation theory is to perform a simulation of a quantum field theory using the Langevin algorithm and to perturbatively expand the fields, which leads to a tower of coupled evolution equations, where only the lowest-order one depends explicitly on the noise, whereas the higher-order ones describe the evolution of the higher-order coefficients as a function of the lower-order ones. In Numerical Stochastic Perturbation Theory (NSPT), the resulting equations are integrated numerically (up to some, possibly rather high, finite order in the coupling), and the average over noises is replaced by a time average. The problems with this approach are that the autocorrelation time diverges as the inverse square of the lattice spacing, and that the extrapolation in the Langevin time step size is difficult to control well. An alternative approach is given by Instantaneous Stochastic Perturbation Theory (ISPT), in which the Langevin time evolution is replaced by the introduction of Gaussian noise sources at the vertices of tree diagrams describing the construction of the perturbative coefficients of the lattice fields. Since there is no free lunch, this approach suffers from power-law divergent statistical errors in the continuum limit, which arise from the way in which power-law divergences that cancel in the mean are shifted around between different orders when computing variances. This does not happen in the Langevin-based approach, because the Langevin theory is renormalizable.<br /><br />The second speaker of the morning was Siegfried Bethke of the Particle Data Group, who allowed us a glimpse at the (still preliminary) world average of α<sub>s</sub> for 2015. In 2013, there were five classes of α<sub>s</sub> determinations: from lattice QCD, τ decays, deep inelastic scattering, e<sup>+</sup>e<sup>-</sup> colliders, and global Z pole fits. 
Except for the lattice determinations (and the Z pole fits, where there was only one number), these were each preaveraged using the range method -- i.e. taking the mean of the highest and lowest central values as average, and assigning it an uncertainty of half the difference between them. The lattice results were averaged using a χ<sup>2</sup> weighted average. The total average (again a weighted average) was dominated by the lattice results, which in turn were dominated by the latest HPQCD result. For 2015, there have been a number of updates to most of the classes, and there is now a new class of α<sub>s</sub> determinations from the LHC (of which there is currently only one published, which lies rather low compared to other determinations, and is likely a downward fluctuation). In most cases, the new determinations have changed the values and errors of their class only slightly, if at all. The most significant change is in the field of lattice determinations, where the PDG will change its policy and will no longer perform its own preaverages, taking instead the FLAG average as the lattice result. As a result, the error on the PDG value will increase; its value will also shift down a little, mostly due to the new LHC value.<br /><br />The afternoon discussion centered on α<sub>s</sub>. Roger Horsley gave an overview of the methods used to determine it on the lattice (ghost vertices, the Schrödinger functional, the static energy at short distances, current-current correlators, and small Wilson loops) and reviewed the criteria used by FLAG to assess the quality of a given determination, as well as the averaging procedure used (which uses a more conservative error than what a weighted average would give).
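As an aside, the two preaveraging prescriptions mentioned above are simple enough to sketch in a few lines of Python; the central values and errors below are invented purely for illustration and are not anyone's actual α<sub>s</sub> determinations:

```python
import math

# Hypothetical central values and errors for one class of
# alpha_s(M_Z) determinations (numbers made up for illustration):
values = [0.1171, 0.1185, 0.1179]
errors = [0.0012, 0.0008, 0.0015]

# Range method: the midpoint of the extreme central values,
# with half their spread assigned as the uncertainty.
range_avg = 0.5 * (max(values) + min(values))
range_err = 0.5 * (max(values) - min(values))

# Chi^2-weighted average: weights 1/sigma_i^2, with the error
# obtained from the inverse of the summed weights.
weights = [1.0 / e ** 2 for e in errors]
weighted_avg = sum(w * v for w, v in zip(weights, values)) / sum(weights)
weighted_err = 1.0 / math.sqrt(sum(weights))
```

Note how the range method ignores the quoted errors entirely, whereas the weighted average lets the most precise determination dominate both the central value and the error.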
In the discussion, the point was raised that reliably increasing the precision to the sub-percent level and beyond will likely require not only addressing the scale-setting uncertainties (which are reflected in the different values for r<sub>0</sub> obtained by different collaborations and will affect the running of α<sub>s</sub>), but also the inclusion of QED effects.http://latticeqcd.blogspot.com/2015/09/fundamental-parameters-from-lattice-qcd_3.htmlnoreply@blogger.com (Georg v. Hippel)0tag:blogger.com,1999:blog-8669468.post-949588993284764875Fri, 04 Sep 2015 07:40:00 +00002015-09-04T08:40:12.053+01:00conferencesMITPFundamental Parameters from Lattice QCD, Day FourToday's first speaker was Andreas Jüttner, who reviewed the extraction of the light-quark CKM matrix elements V<sub>ud</sub> and V<sub>us</sub> from lattice simulations. Since leptonic and semileptonic decay widths of kaons and pions are very well measured, the matrix element |V<sub>us</sub>| and the ratio |V<sub>us</sub>|/|V<sub>ud</sub>| can be precisely determined if the form factor f<sub>+</sub><sup>Kπ</sup>(0) and the ratio of decay constants f<sub>K</sub>/f<sub>π</sub> are precisely predicted from the lattice. To reach the desired level of precision, the isospin breaking effects from the difference of the up and down quark masses and from electromagnetic interactions will need to be included (they are currently treated in chiral perturbation theory, which may not apply very well in the SU(3) case). Given the required level of precision, full control of all systematics is very important, and the problem of how to properly estimate the associated errors arises, to which different collaborations are offering very different answers.
To make the lattice results optimally usable for CKMfitter & Co., one should ideally provide all of the lattice inputs to the CKMfitter fit separately (and not just some combination that presents a particularly small error), as well as their correlations (as far as possible).<br /><br />Unfortunately, I had to miss the second talk of the morning, by Xavier García i Tormo on the extraction of α<sub>s</sub> from the static-quark potential, because our Sonderforschungsbereich (SFB/CRC) is currently up for review for a second funding period, and the local organizers had to be available for questioning by panel members.<br /><br />Later in the afternoon, I returned to the workshop and joined a very interesting discussion on the topic of averaging in the presence of theoretical uncertainties. The large number of possible choices to be made in that context implies that the somewhat subjective nature of systematic error estimates survives into the averages, rather than being dissolved into a consensus of some sort.<br /><br />http://latticeqcd.blogspot.com/2015/09/fundamental-parameters-from-lattice-qcd_18.htmlnoreply@blogger.com (Georg v. Hippel)0tag:blogger.com,1999:blog-8669468.post-794857730062574410Fri, 04 Sep 2015 07:23:00 +00002015-09-07T13:11:58.161+01:00conferencesMITPFundamental Parameters from Lattice QCD, Day ThreeToday, our first speaker was Jerôme Charles, who presented new ideas about how to treat data with theoretical uncertainties. The best place to read about this is probably his <a href="">talk</a>, but I will try to summarize what I understood. The framework is a firmly frequentist approach to statistics, which answers the basic question of how likely the observed data are if a given null hypothesis is true. In such a context, one can consider a theoretical uncertainty as a fixed bias δ of the estimator under consideration (such as a lattice simulation) which survives the limit of infinite statistics.
One can then test the null hypothesis that the true value of the observable in question is μ by constructing a test statistic under the assumption that the estimator is normally distributed with mean μ+δ and standard deviation σ (the statistical error quoted for the result). The p-value of μ then depends on δ, but not on the quoted systematic error Δ. Since the true value of δ is not known, one has to perform a scan over some region Ω, for example the interval Ω<sub>n</sub>=[-nΔ;nΔ], and take the supremum over this range of δ. One possible extension is to choose Ω adaptively in that a larger range of values needs to be scanned (i.e. a larger true systematic error in comparison to the quoted systematic error is allowed for) for lower p-values; interestingly enough, the resulting curves of p-values are numerically close to what is obtained from a naive Gaussian approach treating the systematic error as a (pseudo-)random variable. For multiple systematic errors, a multidimensional Ω has to be chosen in some way; the most natural choices of a hypercube or a hyperball correspond to adding the errors linearly or in quadrature, respectively. The linear (hypercube) scheme stands out as the only one that guarantees that the systematic error of an average is no smaller than the smallest systematic error of an individual result.<br /><br />The second speaker was Patrick Fritzsch, who gave a nice review of recent lattice determinations of semileptonic heavy-light decays, both the more commonly studied B decays to πℓν and Kℓν, and the decays of the Λ<sub>b</sub> that have recently been investigated by Meinel <i>et al.</i> with the help of LHCb.<br /><br />In the afternoon, both the CKMfitter collaboration and the FLAG group held meetings.<br /><br />http://latticeqcd.blogspot.com/2015/09/fundamental-parameters-from-lattice-qcd_4.htmlnoreply@blogger.com (Georg v.
Hippel)2tag:blogger.com,1999:blog-8669468.post-8804916079446072933Tue, 01 Sep 2015 15:29:00 +00002015-09-01T16:29:14.736+01:00conferencesMITPFundamental Parameters from Lattice QCD, Day TwoThis morning, we started with a talk by Taku Izubuchi, who reviewed the lattice efforts relating to the hadronic contributions to the anomalous magnetic moment (g-2) of the muon. While the QED and electroweak contributions to (g-2) are known to great precision, most of the theoretical uncertainty presently comes from the hadronic (i.e. QCD) contributions, of which there are two that are relevant at the present level of precision: the contribution from the hadronic vacuum polarization, which can be inserted into the leading-order QED correction, and the contribution from hadronic light-by-light scattering, which can be inserted between the incoming external photon and the muon line. There are a number of established methods for computing the hadronic vacuum polarization, both phenomenologically using a dispersion relation and the experimental R-ratio, and in lattice field theory by computing the correlator of two vector currents (which can, and needs to, be refined in various ways in order to achieve competitive levels of precision). No such well-established methods exist yet for the light-by-light scattering, which is so far mostly described using models. There are, however, now efforts from a number of different sides to tackle this contribution; Taku mainly presented the approach by the RBC/UKQCD collaboration, which uses stochastic sampling of the internal photon propagators to explicitly compute the diagrams contributing to (g-2).
Another approach would be to calculate the four-point amplitude explicitly (which has recently been done for the first time by the Mainz group) and to decompose this into form factors, which can then be integrated to yield the light-by-light scattering contribution to (g-2).<br /><br />The second talk of the day was given by Petros Dimopoulos, who reviewed lattice determinations of D and B leptonic decays and mixing. For the charm quark, cut-off effects appear to be reasonably well-controlled with present-day lattice spacings and actions, and the most precise lattice results for the D and D<sub>s</sub> decay constants claim sub-percent accuracy. For the b quark, effective field theories or extrapolation methods have to be used, which introduces a source of hard-to-assess theoretical uncertainty, but the results obtained from the different approaches generally agree very well amongst themselves. Interestingly, there does not seem to be any noticeable dependence on the number of dynamical flavours in the heavy-quark flavour observables, as N<sub>f</sub>=2 and N<sub>f</sub>=2+1+1 results agree very well to within the quoted precisions.<br /><br />In the afternoon, the CKMfitter collaboration split off to hold their own meeting, and the lattice participants met for a few one-on-one or small-group discussions of some topics of interest.<br /><br />http://latticeqcd.blogspot.com/2015/09/fundamental-parameters-from-lattice-qcd.htmlnoreply@blogger.com (Georg v. 
Hippel)0tag:blogger.com,1999:blog-8669468.post-2781958310220674791Mon, 31 Aug 2015 16:33:00 +00002015-08-31T17:34:27.131+01:00conferencesMITPFundamental Parameters from Lattice QCD, Day OneGreetings from Mainz, where I have the pleasure of covering a meeting for you without having to travel from my usual surroundings (I clocked up more miles this year already than can be good for my environmental conscience).<br /><br />Our <a href="http://indico.mitp.uni-mainz.de/conferenceDisplay.py?confId=28">Scientific Programme</a> (which is the bigger of the two formats of meetings that the <a href="http://www.mitp.uni-mainz.de/">Mainz Institute of Theoretical Physics</a> (MITP) hosts, the smaller being Topical Workshops) started off today with two keynote talks summarizing the status and expectations of the <a href="http://itpwiki.unibe.ch/flag/index.php/Review_of_lattice_results_concerning_low_energy_particle_physics">FLAG</a> (Flavour Lattice Averaging Group, presented by Tassos Vladikas) and <a href="http://ckmfitter.in2p3.fr/">CKMfitter</a> (presented by Sébastien Descotes-Genon) collaborations. Both groups are in some way in the business of performing weighted averages of flavour physics quantities, but of course their backgrounds, rationale and methods are quite different in many regards. I will not attempt to give a line-by-line summary of the talks or the afternoon discussion session here, but instead just summarize a few points that caused lively discussions or seemed important in some other way.<br /><br />By now, computational resources have reached the point where we can achieve such statistics that the total error on many lattice determinations of precision quantities is completely dominated by systematics (and indeed different groups would differ at the several-σ level if one were to consider only their statistical errors).
This may sound good in a way (because it is what you'd expect in the limit of infinite statistics), but it is also very problematic, because the estimation of systematic errors is in the end really more of an art than a science, having a crucial subjective component at its heart. This means not only that systematic errors quoted by different groups may not be readily comparable, but also that it becomes important how to treat systematic errors (which may also be correlated, if e.g. two groups use the same one-loop renormalization constants) when averaging different results. How to do this is again subject to subjective choices to some extent. FLAG imposes cuts on quantities relating to the most important sources of systematic error (lattice spacings, pion mass, spatial volume) to select acceptable ensembles, then adds the statistical and systematic errors in quadrature, before performing a weighted average and computing the overall error taking correlations between different results into account using <a href="http://iopscience.iop.org/1402-4896/51/6/002/">Schmelling's procedure</a>. CKMfitter, on the other hand, adds all systematic errors linearly, and uses the <a href="http://arxiv.org/abs/hep-ph/0104062">Rfit procedure</a> to perform a maximum likelihood fit. Either choice is equally permissible, but they are not directly compatible (so CKMfitter can't use FLAG averages as such).<br /><br />Another point raised was that it is important for lattice collaborations computing mixing parameters to not just provide products like <i>f<sub>B</sub>√B<sub>B</sub></i>, but also <i>f<sub>B</sub></i> and <i>B<sub>B</sub></i> separately (as well as information about the correlation between these quantities) in order to help make the global CKM fits easier.<br /><br />http://latticeqcd.blogspot.com/2015/08/fundamental-parameters-from-lattice-qcd.htmlnoreply@blogger.com (Georg v.
Hippel)0tag:blogger.com,1999:blog-8669468.post-1782869081254270076Sat, 18 Jul 2015 13:19:00 +00002015-07-18T14:19:35.642+01:00conferencesLATTICE 2015, Day FiveIn a marked deviation from the "standard programme" of the lattice conference series, Saturday started off with parallel sessions, one of which featured my own talk.<br /><br />The lunch break was relatively early, therefore, but first we all assembled in the plenary hall for the conference group photo (a new addition to the traditions of the lattice conference), which was followed by the afternoon plenary sessions. The first of these was devoted to finite temperature and density, and started with Harvey Meyer giving the review talk on finite-temperature lattice QCD. The thermodynamic properties of QCD are by now relatively well-known: the transition temperature is agreed to be around 155 MeV, chiral symmetry restoration and the deconfinement transition coincide (as well as that can be defined in the case of a crossover), and the number of degrees of freedom is compatible with a plasma of quarks and gluons above the transition, but the thermodynamic potentials approach the Stefan-Boltzmann limit only slowly, indicating that there are strong correlations in the medium. Below the transition, the hadron resonance gas model describes the data well. The Columbia plot describing the nature of the transition as a function of the light and strange quark masses is being further solidified: the size of the lower-left hand corner first-order region is being measured, and the nature of the left-hand border (most likely O(4) second-order) is being explored. Beyond these static properties, real-time properties are beginning to be studied through the finite-temperature spectral functions.
One interesting point was that there is a difference between the screening masses (spatial correlation lengths) and quasiparticle masses (from the spectral function) in any given channel, which may even tend in opposite directions as functions of the temperature (as seen for the pion channel).<br /><br />Next, Szabolcs Borsanyi spoke about fluctuations of conserved charges at finite temperature and density. While of course the sum of all outgoing conserved charges in a collision must equal the sum of the ingoing ones, when considering a subvolume of the fireball, this can be best described in the grand canonical ensemble, as charges can move into and out of the subvolume. The quark number susceptibilities are then related to the fluctuating phase of the fermionic determinant. The methods being used to avoid the sign problem include Taylor expansions, fugacity expansions and simulations at imaginary chemical potential, all with their own strengths and weaknesses. Fluctuations can be used as a thermometer to measure the freeze-out temperature.<br /><br />Lastly, Luigi Scorzato reviewed the Lefschetz thimble, which may be a way out of the sign problem (e.g. at finite chemical potential). The Lefschetz thimble is a higher-dimensional generalization of the concept of steepest-descent integration, in which the integral of e<sup>S(z)</sup> for complex S(z) is evaluated by finding the stationary points of S and integrating along the curves passing through them along which the imaginary part of S is constant. On such Lefschetz thimbles, a Langevin algorithm can be defined, allowing for a Monte Carlo evaluation of the path integral in terms of these thimbles. In quantum-mechanical toy models, this seems to work already, and there appears to be hope that this might be a way to avoid the sign problem of finite-density QCD.<br /><br />After the coffee break, the last plenary session turned to physics beyond the Standard Model.
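Before moving on, the steepest-descent idea underlying the Lefschetz thimble can be illustrated in the simplest possible setting: a zero-dimensional "path integral", i.e. an ordinary Fresnel integral (this is my own toy sketch, not anything from the talk). For the oscillatory integral of exp(ix<sup>2</sup>) over the real line, the thimble through the saddle point at x=0 is the rotated contour x=e<sup>iπ/4</sup>t, on which the integrand becomes the real, damped e<sup>-t<sup>2</sup></sup>:

```python
import cmath
import math

# Toy "path integral": I = ∫ exp(i x^2) dx over the real line, whose
# exact value is sqrt(pi) * exp(i pi/4) (a Fresnel integral). On the
# thimble x = exp(i pi/4) t the phase of the integrand is constant
# and the oscillatory exp(i x^2) turns into the damped exp(-t^2).
phase = cmath.exp(1j * math.pi / 4)

def integrand_on_thimble(t):
    x = phase * t
    return phase * cmath.exp(1j * x * x)  # includes the Jacobian dx/dt

# Plain trapezoidal quadrature along the thimble converges rapidly,
# with no sign problem, since there are no cancelling oscillations.
n, tmax = 4000, 8.0
h = 2.0 * tmax / n
total = 0.5 * (integrand_on_thimble(-tmax) + integrand_on_thimble(tmax))
for k in range(1, n):
    total += integrand_on_thimble(-tmax + k * h)
approx = h * total

exact = math.sqrt(math.pi) * cmath.exp(1j * math.pi / 4)
```

In the field-theory case the thimbles are high-dimensional manifolds that have to be explored by a Monte Carlo algorithm rather than by quadrature, but the mechanism is the same: trading an oscillatory integrand for a damped one.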
Daisuke Kadoh reviewed the progress in putting supersymmetry onto the lattice, which is still a difficult problem due to the fact that the finite differences which replace derivatives on a lattice do not respect the Leibniz rule, introducing SUSY-breaking terms when discretizing. The ways past this are either imposing exact lattice supersymmetries or fine-tuning the theory so as to remove the SUSY-breaking in the continuum limit. Some theories in both two and four dimensions have been simulated successfully, including N=1 Super-Yang-Mills theory in four dimensions. Given that there is no evidence for SUSY in nature, lattice SUSY is of interest especially for the purpose of verifying the ideas of gauge-gravity duality from the Super-Yang-Mills side, and in one and two dimensions, agreement with the predictions from gauge-gravity duality has been found.<br /><br />The final plenary speaker was Anna Hasenfratz, who reviewed Beyond-the-Standard-Model calculations in technicolor-like theories. If the Higgs is to be a composite particle, there must be some spontaneously broken symmetry that keeps it light, either a flavour symmetry (pions) or a scale symmetry (dilaton). There are in fact a number of models that have a light scalar particle, but the extrapolation of these theories is rendered difficult by the fact that this scalar is (and for phenomenologically interesting models would have to be) lighter than the (techni-)pion, and thus the usual formalism of chiral perturbation theory may not work. Many models of strong BSM interactions have been and are being studied using a large number of different methods, with not always conclusive results. A point raised towards the end of the talk was that for theories with a conformal IR fixed-point, universality might be violated (and there are some indications that e.g.
Wilson and staggered fermions seem to give qualitatively different behaviour for the beta function in such cases).<br /><br />The conference ended with some well-deserved applause for the organizing team, who really ran the conference very smoothly even in the face of a typhoon. Next year's lattice conference will take place in Southampton (England/UK) from 24th to 30th July 2016. Lattice 2017 will take place in Granada (Spain).<br />http://latticeqcd.blogspot.com/2015/07/lattice-2015-day-five.htmlnoreply@blogger.com (Georg v. Hippel)0tag:blogger.com,1999:blog-8669468.post-1532722304931539399Fri, 17 Jul 2015 13:16:00 +00002015-07-18T14:19:53.519+01:00conferencesLATTICE 2015, Days Three and FourDue to the one-day shift of the entire conference programme relative to other years, Thursday instead of Wednesday was the short day. In the morning, there were parallel sessions. The most remarkable thing to be reported from those (from my point of view) is that MILC are generating a=0.03 fm lattices now, which handily beats the record for the finest lattice spacing; they are observing some problems with the tunnelling of the topological charge at such fine lattices, but appear hopeful that they can be useful.<br /><br />After the lunch break, excursions were offered. I took the trip to Himeji to see Himeji Castle, a very remarkable five-story wooden building that due to its white exterior is also known as the "White Heron Castle".
During the trip, typhoon Nangka approached, so the rains cut our enjoyment of the castle park a bit short (though seeing koi in a pond with the rain falling into it had a certain special appeal to it, the enjoyment of which I in my Western ignorance suppose might be considered a form of Japanese <i>wabi</i> aesthetics).<br /><br />As the typhoon resolved into a rainstorm, the programme wasn't cancelled or changed, and so today's plenary programme started with a talk on some formal developments in QFT by Mithat Ünsal, who reviewed trans-series, Lefschetz thimbles, and Borel summability as different sides of the same coin. I'm far too ignorant of these more formal field theory topics to do them justice, so I won't try a detailed summary. Essentially, it appears that the expansion of certain theories around the saddle points corresponding to instantons is determined by their expansion around the trivial vacuum, and the ambiguities arising in the Borel resummation of perturbative series when the Borel transform has a pole on the positive real axis can in some way be connected to this phenomenon, which may allow for a way to resolve the ambiguities.<br /><br />Next, Francesco Sannino spoke about the "bright, dark, and safe" sides of the lattice. The bright side referred to the study of visible matter, in particular to the study of technicolor models as a way of implementing the spontaneous breaking of electroweak symmetry, without the need for a fundamental scalar introducing numerous tunable parameters, and with the added benefits of removing the hierarchy problem and the problem of φ<sup>4</sup> triviality. The dark side referred to the study of dark matter in the context of composite dark matter theories, where one should remember that if the visible 5% of the mass of the universe require three gauge groups for their description, the remaining 95% are unlikely to be described by a single dark matter particle and a homogeneous dark energy. 
The safe side referred to the very current idea of asymptotic safety, which is of interest especially in quantum gravity, but might also apply to some extension of the Standard Model, making it valid at all energy scales.<br /><br />After the coffee break, the traditional experimental talk was given by Toru Iijima of the Belle II collaboration. The Belle II detector is now beginning commissioning at the upcoming SuperKEKB accelerator, which will provide greatly improved luminosity, allowing for precise tests of the Standard Model in the flavour sector. In this, Belle II will be complementary to LHCb, because it will have far lower backgrounds allowing for precision measurements of rare processes, while not being able to reach as high energies. Most of the measurements planned at Belle II will require lattice inputs for their interpretation, so there is a challenge to our community to come up with sufficiently precise and reliable predictions for all required flavour observables. Besides quark flavour physics, Belle II will also search for lepton flavour violation in τ decays, try to improve the phenomenological prediction for (g-2)<sub>μ</sub> by measuring the cross section for e<sup>+</sup>e<sup>-</sup> -> hadrons more precisely, and search for exotic charmonium- and bottomonium-like states.<br /><br />Closely related was the next talk, a review of progress in heavy flavour physics on the lattice given by Carlos Pena. While simulations of relativistic b quarks at the physical mass will become a possibility in the not-too-distant future, for the time being heavy-quark physics is still dominated by the use of effective theories (HQET and NRQCD) and methods based either on appropriate extrapolations from the charm quark mass region, or on the Fermilab formalism, which is sort of in-between. For the leptonic decay constants of heavy-light mesons, there are now results from all formalisms, which generally agree very well with each other, indicating good reliability.
For the semileptonic form factors, there has been a lot of development recently, but to obtain precision at the 1% level, good control of all systematics is needed, and this includes the momentum-dependence of the form factors. The z-expansion, and extended versions thereof allowing for simultaneous extrapolation in the pion mass and lattice spacing, has the advantage that its convergence properties can be tested by checking the unitarity bound on its coefficients.<br /><br />After the coffee break, there were parallel sessions again. In the evening, the conference banquet took place. Interestingly, the (excellent) food was not Japanese, but European (albeit with a slight Japanese twist in seasoning and presentation).<br />http://latticeqcd.blogspot.com/2015/07/lattice-2015-days-three-and-four.htmlnoreply@blogger.com (Georg v. Hippel)0tag:blogger.com,1999:blog-8669468.post-3174432257854640691Wed, 15 Jul 2015 12:47:00 +00002015-07-16T03:05:20.505+01:00conferencesLATTICE 2015, Day TwoHello again from Lattice 2015 in Kobe. Today's first plenary session began with a review talk on hadronic structure calculations on the lattice given by James Zanotti. James did an excellent job summarizing the manifold activities in this core area of lattice QCD, which is also of crucial phenomenological importance given situations such as the proton radius puzzle. It is now generally agreed that excited-state effects are one of the more important issues facing hadron structure calculations, especially in the nucleon sector, and that these (possibly together with finite-volume effects) are likely responsible for the observed discrepancies between theory and experiment for quantities such as the axial charge of the nucleon. Many groups are studying the charges and form factors of the nucleon, and some have moved on to more complicated quantities, such as transverse momentum distributions.
Newer ideas in the field include the use of the Feynman-Hellmann theorem to access quantities that are difficult to obtain through the traditional three-point-over-two-point ratio method, such as form factors at very high momentum transfer, and quantities with disconnected diagrams (such as nucleon strangeness form factors).<br /><br />Next was a review of progress in light flavour physics by Andreas Jüttner, who likewise gave an excellent overview of this phenomenologically very important core field. Besides the "standard" quantities, such as the leptonic pion and kaon decay constants and the semileptonic K-to-pi form factors, more difficult light-flavour quantities are now being calculated, including the bag parameter B<sub>K</sub> and other quantities related to both Standard Model and BSM neutral kaon mixing, which require the incorporation of long-distance effects, including those from charm quarks. Given the emergence of lattice ensembles at the physical pion mass, the analysis strategies of groups are beginning to change, with the importance of global ChPT fits receding. Nevertheless, the lattice remains important in determining the low-energy constants of Chiral Perturbation Theory. Some groups are also using newer theoretical developments to study quantities once believed to be outside the purview of lattice QCD, such as final-state photon corrections to meson decays, or the timelike pion form factor.<br /><br />After the coffee break, the Ken Wilson Award for Excellence in Lattice Field Theory was announced. The award goes to Stefan Meinel for his substantial and timely contributions to our understanding of the physics of the bottom quark using lattice QCD. In his acceptance talk, Stefan reviewed his recent work on determining |V<sub>ub</sub>|/|V<sub>cb</sub>| from decays of Λ<sub>b</sub> baryons measured by the LHCb collaboration.
There has long been a discrepancy between the inclusive and exclusive (from B -> πlν) determinations of V<sub>ub</sub>, which might conceivably be due to a new (BSM) right-handed coupling. Since LHCb measures the decay widths for Λ<sub>b</sub> to both pμν and Λ<sub>c</sub>μν, combining these with lattice determinations of the corresponding Λ<sub>b</sub> form factors allows for a precise determination of |V<sub>ub</sub>|/|V<sub>cb</sub>|. The results agree well with the exclusive determination from B -> πlν, and fully agree with CKM unitarity. There are, however, still other channels (such as b -> sμ<sup>+</sup>μ<sup>-</sup> and b -> cτν) in which there is potential for new physics, and LHCb measurements are pending.<br /><br />This was followed by a talk by Maxwell T. Hansen (now a postdoc at Mainz) on three-body observables from lattice QCD. The well-known Lüscher method relates two-body scattering amplitudes to the two-body energy levels in a finite volume. The basic steps in the derivation are to express the full momentum-space propagator in terms of a skeleton expansion involving the two-particle irreducible Bethe-Salpeter kernel, to express the difference between the two-particle reducible loops in finite and infinite volume in terms of two-particle cuts, and to reorganize the skeleton expansion by the number of cuts to reveal that the poles of the propagator (i.e. the energy levels) in finite volume are related to the scattering matrix. For three-particle systems, the skeleton expansion becomes more complicated, since there can now be situations involving two-particle interactions and a spectator particle, and intermediate lines can go on-shell between different two-particle interactions. Treating a number of other technical issues such as cusps, Max and collaborators have been able to derive a Lüscher-like formula for three-body scattering in the case of scalar particles with a Z<sub>2</sub> symmetry forbidding 2-to-3 couplings.
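For orientation, the two-body quantization condition that this three-particle formalism generalizes can be written schematically (for the s-wave-dominated case in the centre-of-mass frame, neglecting higher partial waves and exponentially suppressed volume corrections) as:

```latex
% Two-body Lüscher quantization condition (schematic, s-wave, CM frame):
% each finite-volume energy level E = 2\sqrt{m^2 + p^2} in a box of side L
% determines the infinite-volume phase shift \delta_0 at that momentum.
\[
  p \cot \delta_0(p) \;=\; \frac{1}{\pi L}\,
  \mathcal{Z}_{00}\!\left(1;\left(\frac{pL}{2\pi}\right)^{2}\right),
  \qquad E = 2\sqrt{m^2 + p^2},
\]
% where \mathcal{Z}_{00} is the generalized zeta function summing over
% the allowed lattice momenta of the finite box.
```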
Various generalizations remain to be explored.<br /><br />The day's plenary programme ended with a talk on the Standard Model prediction for direct CP violation in K -> ππ decays by Christopher Kelly. This has been an enormous effort by the RBC/UKQCD collaboration, who have shown that the ΔI=1/2 rule comes from low-energy QCD by way of strong cancellations between the dominant contributions, and have determined ε' from the lattice for the first time. This required the generation of ensembles with an unusual set of boundary conditions (G-parity boundary conditions on the quarks, requiring complex conjugation boundary conditions on the gauge fields) in space to enforce a moving pion ground state, as well as the precise evaluation of difficult disconnected diagrams using low modes and stochastic estimators, and treatment of finite-volume effects in the Lellouch-Lüscher formalism. Putting all of this together with the non-perturbative renormalization (in the RI-sMOM scheme) of ten operators in the electroweak Hamiltonian gives a result which currently still has three times the experimental error, but is systematically improvable, with better-than-experimental precision expected in maybe five years.<br /><br />In the afternoon there were parallel sessions again, and in the evening, the poster session took place. Food ran out early, but it was pleasant to see <a href="http://arxiv.org/abs/1306.1440">free-form smearing</a> being improved upon and used to very good effect by Randy Lewis, Richard Woloshyn and students.<br />http://latticeqcd.blogspot.com/2015/07/lattice-2015-day-two.htmlnoreply@blogger.com (Georg v. Hippel)0tag:blogger.com,1999:blog-8669468.post-1945513826714968814Tue, 14 Jul 2015 11:30:00 +00002015-07-14T12:30:28.925+01:00conferencestravelLATTICE 2015, Day OneHello from Kobe, where I am attending the Lattice 2015 conference.
The trip here was uneventful, as was the jetlag day.<br /><br />The conference started yesterday evening with a reception in the Kobe Animal Kingdom (there were no animals when we were there, though, with the exception of some fish in a pond and some cats in a cage, but there were a lot of plants).<br /><br />Today, the scientific programme began with the first plenary session. After a welcome address by Akira Ukawa, who reminded us of the previous lattice meetings held in Japan and the tremendous progress the field has made in the intervening twelve years, Leonardo Giusti gave the first plenary talk, speaking about recent progress on chiral symmetry breaking. Lattice results have confirmed the proportionality of the square of the pion mass to the quark mass (i.e. the Gell-Mann-Oakes-Renner (GMOR) relation, a hallmark of chiral symmetry breaking) very accurately for a long time. Another relation involving the chiral condensate is the Banks-Casher relation, which relates it to the eigenvalue density of the Dirac operator at zero. It can be shown that the eigenvalue density is renormalizable, and that thus the mode number in a given interval is renormalization-group invariant. Two recent lattice studies, one with twisted-mass fermions and one with O(a)-improved Wilson fermions, confirm the Banks-Casher relation, with the chiral condensates found agreeing very well with those inferred from GMOR. Another relation is the Witten-Veneziano relation, which relates the η' mass to the topological susceptibility, thus explaining precisely how the η' is not a Goldstone boson. The topological charge on the lattice can be defined through the index of the Neuberger operator or through a chain of spectral projectors, but a recently invented and much cheaper definition is through the topological charge density at finite flow time in Lüscher's Wilson flow formalism.
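For reference, the three relations mentioned here read as follows in one common set of conventions (the normalization of F<sub>π</sub> and the precise chiral and infinite-volume limits involved vary between papers):

```latex
% Gell-Mann–Oakes–Renner: the pion mass squared is linear in the quark mass,
% with the slope set by the chiral condensate \Sigma
\[ F_\pi^2\, m_\pi^2 \;=\; (m_u + m_d)\,\Sigma \;+\; O(m_q^2) \]
% Banks–Casher: the condensate equals (pi times) the spectral density
% of the Dirac operator at the origin
\[ \Sigma \;=\; \pi\,\rho(0) \]
% Witten–Veneziano: the eta' mass from the pure-gauge topological
% susceptibility \chi_t^{\mathrm{YM}}
\[ m_{\eta'}^2 + m_\eta^2 - 2 m_K^2 \;=\; \frac{2 N_f}{F_\pi^2}\,\chi_t^{\mathrm{YM}} \]
```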
The renormalization properties of the Wilson flow allow for a derivation of the universality of the topological susceptibility, and numerical tests using all three definitions indeed agree within errors in the continuum limit. Higher cumulants determined in the Wilson flow formalism agree with large-N<sub>c</sub> predictions in pure Yang-Mills, and the suppression of the topological susceptibility in QCD relative to the pure Yang-Mills case is in line with expectations (which in principle can be considered an <i>a posteriori</i> determination of N<sub>f</sub> in agreement with the value used in simulations).<br /><br />The next speaker was Yu Nakayama, who talked about a related topic, namely the determination of the chiral phase transition in QCD from the conformal bootstrap. The chiral phase transition can be studied in the framework of a Landau effective theory in three dimensions. While the mean-field theory predicts a second-order phase transition in the O(4) universality class, one-loop perturbation theory in 4-ε dimensions predicts a first-order phase transition at ε=1. Making use of the conformal symmetry of the effective theory, one can apply the conformal bootstrap method, which combines an OPE with crossing relations to obtain results for critical exponents, and the results from this method suggest that the phase transition is in fact of second order. This also agrees with many lattice studies, but others disagree. The role of the anomalously broken U(1)<sub>A</sub> symmetry in this analysis appears to be unclear.<br /><br />After the coffee break, Tatsumi Aoyama, a long-time collaborator in the heroic efforts of Kinoshita to calculate the four- and five-loop QED contributions to the electron and muon anomalous moments, gave a plenary talk on the determination of the QED contribution to lepton (g-2). 
For likely readers of this blog, the importance of (g-2) is unlikely to require an explanation: the current 3σ tension between theory and experiment for (g-2)<sub>μ</sub> is the strongest hint of physics beyond the Standard Model so far, and since the largest uncertainties on the theory side are hadronic, lattice QCD is challenged to either resolve the tension or improve the accuracy of the predictions to the point where the tension becomes an unambiguous, albeit indirect, discovery of new physics. The QED calculations are on the face of it simpler, being straightforward Feynman diagram evaluations. However, the number of Feynman diagrams grows so quickly at higher orders that automated methods are required. In fact, in a first step, the number of Feynman diagrams is reduced by using the Ward-Takahashi identity to relate the vertex diagrams relevant to (g-2) to self-energy diagrams, which are then subjected to an automated renormalization procedure using the Zimmermann forest formula. In a similar way, infrared divergences are subtracted using a more complicated "annotated forest" formula (there are two kinds of IR subtractions needed, so the subdiagrams in a forest need to be labelled with the kind of subtraction). The resulting UV- and IR-finite integrands are then integrated using VEGAS in Feynman parameter space. In order to maintain the required precision, quadruple-precision floating-point numbers (or an emulation thereof) must be used. Whether these methods could cope with the six-loop QED contribution is not clear, but with the current and projected experimental errors, that contribution will not be required for the foreseeable future, anyway.<br /><br />This was followed by another (g-2)-related plenary, with Taku Izubuchi speaking about the determination of anomalous magnetic moments and nucleon electric dipole moments in QCD.
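As a toy illustration of the Feynman-parameter integration strategy described above (a minimal sketch of my own, not the actual code, which uses adaptive VEGAS importance sampling and quadruple precision): at one loop, after Feynman parametrization of the vertex diagram and integration over two of the three parameters, Schwinger's result a = α/2π arises from a finite one-dimensional parameter integral, which can be estimated stochastically.

```python
import math
import random

# One-loop lepton anomaly in QED: after Feynman parametrization of the
# vertex correction and integrating out two of the three parameters,
#   F2(0) = (alpha / (2*pi)) * Integral_0^1 dz 2z = alpha / (2*pi),
# i.e. Schwinger's result. We estimate the remaining integral by plain
# Monte Carlo (real multi-loop calculations replace this flat sampling
# with adaptive VEGAS importance sampling in many Feynman parameters).

random.seed(1)
ALPHA = 1.0 / 137.035999  # fine-structure constant

n_samples = 200_000
total = 0.0
for _ in range(n_samples):
    z = random.random()   # uniform draw on [0, 1]
    total += 2.0 * z      # integrand f(z) = 2z
integral = total / n_samples              # Monte Carlo estimate of 1
a_one_loop = ALPHA / (2.0 * math.pi) * integral

print(f"a (1 loop) ~ {a_one_loop:.7f}")   # close to alpha/2pi ~ 0.0011614
```

The flat sampling works here because the one-loop integrand is smooth; at higher loops the integrands are sharply peaked after the forest-formula subtractions, which is what makes adaptive sampling and extended precision necessary.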
In particular, the anomalous magnetic moment has become such an active topic recently that the time barely sufficed to review all of the activity in this field, which ranges from different approaches to parameterizing the momentum dependence of the hadronic vacuum polarization, through clever schemes to reduce the noise by subtracting zero-momentum contributions, to new ways of extracting the vacuum polarization through the use of background magnetic fields, as well as simulations of QCD+QED on the lattice. Among the most important problems are finite-volume effects.<br /><br />After the lunch break, there were parallel sessions in the afternoon. I got to chair the first session on hadron structure, which was devoted to determinations of hadronic contributions to (g-2)<sub>μ</sub>.<br /><br />After the coffee break, there were more parallel sessions, another complete one of which was devoted to (g-2) and closely-related topics. A talk deserving to be highlighted was given by Jeremy Green, who spoke about the first direct calculation of the hadronic light-by-light scattering amplitude from lattice QCD.<br />http://latticeqcd.blogspot.com/2015/07/lattice-2015-day-one.htmlnoreply@blogger.com (Georg v. Hippel)0tag:blogger.com,1999:blog-8669468.post-3617778499689342936Fri, 10 Apr 2015 08:19:00 +00002015-04-10T15:59:59.883+01:00conferencesWorkshop "Fundamental Parameters from Lattice QCD" at MITP (upcoming deadline)Recent years have seen a significant increase in the overall accuracy of lattice QCD calculations of various hadronic observables. Results for quark and hadron masses, decay constants, form factors, the strong coupling constant and many other quantities are becoming increasingly important for testing the validity of the Standard Model.
Prominent examples include calculations of Standard Model parameters, such as quark masses and the strong coupling constant, as well as the determination of CKM matrix elements, which is based on a variety of input quantities from experiment and theory. In order to make lattice QCD calculations more accessible to the entire particle physics community, several initiatives and working groups have sprung up, which collect the available lattice results and produce global averages.<br /><br />The scientific programme "<a href="https://indico.mitp.uni-mainz.de/conferenceDisplay.py?confId=28">Fundamental Parameters from Lattice QCD</a>" at the Mainz Institute of Theoretical Physics (<a href="http://www.mitp.uni-mainz.de/">MITP</a>) is designed to bring together lattice practitioners with members of the phenomenological and experimental communities who are using lattice estimates as input for phenomenological studies. In addition to sharing the expertise among several communities, the aim of the programme is to identify key quantities which allow for tests of the CKM paradigm with greater accuracy and to discuss the procedures in order to arrive at more reliable global estimates.<br /><br />The deadline for <a href="https://indico.mitp.uni-mainz.de/confRegistrationFormDisplay.py/display?confId=28" title="Registration form">registration</a> is <b>Wednesday, 15 April 2015</b>. Please register <a href="https://indico.mitp.uni-mainz.de/confRegistrationFormDisplay.py/display?confId=28" title="Register now!">at this link</a>.http://latticeqcd.blogspot.com/2015/03/workshop-fundamental-parameters-from.htmlnoreply@blogger.com (Georg v. Hippel)0