Tuesday, September 15, 2015

Fundamental Parameters from Lattice QCD, Last Days

The last few days of our scientific programme were quite busy for me, since I had agreed to give the summary talk on the final day. I therefore did not get around to blogging, and will keep this much-delayed summary rather short.

On Wednesday, we had a talk by Michele Della Morte on non-perturbatively matched HQET on the lattice and its use to extract the b quark mass, and a talk by Jeremy Green on the lattice measurement of the nucleon strange electromagnetic form factors (which are purely disconnected quantities).

On Thursday, Sara Collins gave a review of heavy-light hadron spectra and decays, and Mike Creutz presented arguments for why the question of whether the up-quark is massless is scheme dependent (because the sum and difference of the light quark masses are protected by symmetries, but will in general renormalize differently).

On Friday, I gave the summary of the programme. The main themes that I identified were the question of how to estimate systematic errors, and how to treat them in averaging procedures, the issues of isospin breaking and scale setting ambiguities as major obstacles on the way to sub-percent overall precision, and the need for improved communication between the "producers" and "consumers" of lattice results. In the closing discussion, the point was raised that for groups like CKMfitter and UTfit the correlations between different lattice quantities are very important, and that lattice collaborations should provide the covariance matrices of the final results for different observables that they publish wherever possible.

Wednesday, September 09, 2015

Fundamental Parameters from Lattice QCD, Day Seven

Today's programme featured two talks about the interplay between the strong and the electroweak interactions. The first speaker was Gregorio Herdoíza, who reviewed the determination of hadronic corrections to electroweak observables. In essence these determinations are all very similar to the determination of the leading hadronic correction to (g-2)μ, since they involve the lattice calculation of the hadronic vacuum polarisation. In the case of the electromagnetic coupling α, its low-energy value is known to a precision of 0.3 ppb, but the value of α(mZ²) is known only to 0.1 ‰, and a large part of this loss of precision is due to the hadronic contribution to the running of α, i.e. the hadronic vacuum polarization. Phenomenologically this can be estimated through the R-ratio, but this results in relatively large errors at low Q². On the lattice, the hadronic vacuum polarization can be measured through the correlator of vector currents, and currently a determination of the running of α in agreement with phenomenology and with similar errors can be achieved, so that in the future lattice results are likely to take the lead here. In the case of the electroweak mixing angle, sin²θw is known well at the Z pole, but only poorly at low energy, although a number of experiments (including the P2 experiment at Mainz) are aiming to reduce the uncertainty at lower energies. Again, the running can be determined from the Z-γ mixing through the associated current-current correlator, and current efforts are under way, including an estimation of the systematic error caused by the omission of quark-disconnected diagrams.
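
(For the formula-minded, and in the conventions I am used to: the vacuum polarization function Π(q²) is defined from the vector two-point function via Πμν(q) = ∫d⁴x e^{iqx} ⟨Jμ(x)Jν(0)⟩ = (qμqν - q²δμν) Π(q²), and the hadronic running is then Δαhad(q²) = 4πα [Π(q²) - Π(0)], so the whole problem reduces to computing the current-current correlator accurately at low q².)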

The second speaker was Vittorio Lubicz, who looked at the opposite problem, i.e. the electroweak corrections to hadronic observables. Since α ≈ 1/137, electromagnetic corrections at the one-loop level will become important once the 1% level of precision is being aimed for, and since the up and down quarks have different electric charges, this is an isospin-breaking effect, which makes it necessary to consider at the same time the strong isospin breaking caused by the difference between the up and down quark masses. There are two main methods to include QED effects into lattice simulations; the first is direct simulations of QCD+QED, and the second is the method of incorporating isospin-breaking effects in a systematic expansion pioneered by Vittorio and colleagues in Rome. Either method requires a systematic treatment of the IR divergences arising from the lack of a mass gap in QED. In the Rome approach this is done through splitting the Bloch-Nordsieck treatment of IR divergences and soft bremsstrahlung into two pieces, whose large-volume limits can be taken separately. There are many other technical issues to be dealt with, but first physical results from this method should be forthcoming soon.
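
(Schematically, and as far as I understand the Rome approach: any observable is expanded around the isosymmetric theory as ⟨O⟩ = ⟨O⟩0 + α (∂⟨O⟩/∂α)0 + (md-mu) (∂⟨O⟩/∂(md-mu))0 + ..., where the derivative terms are computed in the isosymmetric theory as correlation functions with insertions of the electromagnetic current and of the isovector scalar density, respectively, so that the QED and strong isospin-breaking corrections come out at first order without having to simulate QCD+QED directly.)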

In the afternoon there was a discussion about QED effects and the range of approaches used to treat them.

Monday, September 07, 2015

Fundamental Parameters from Lattice QCD, Day Six

The second week of our Scientific Programme started with an influx of new participants.

The first speaker of the day was Chris Kelly, who spoke about CP violation in the kaon sector from lattice QCD. As I hardly need to tell my readers, there are two sources of CP violation in the kaon system, the indirect CP-violation from neutral kaon-antikaon mixing, and the direct CP-violation from K->ππ decays. Both, however, ultimately stem from the single source of CP violation in the Standard Model, i.e. the complex phase e^{iδ} in the CKM matrix, which gives the area of the unitarity triangle. The hadronic parameter relevant to indirect CP-violation is the kaon bag parameter BK, which is a "gold-plated" quantity that can be very well determined on the lattice; however, the error on the CP violation parameter εK constraining the upper vertex of the unitarity triangle is dominated by the uncertainty on the CKM matrix element Vcb. Direct CP-violation is particularly sensitive to possible BSM effects, and is therefore of particular interest. Chris presented the recent efforts of the RBC/UKQCD collaboration to address the extraction of the relevant parameter ε'/ε and associated phenomena such as the ΔI=1/2 rule. For the two amplitudes A0 and A2, different tricks and methods were required; in particular for the isospin-zero channel, all-to-all propagators are needed. The overall errors are still large: although the systematics are dominated by the perturbative matching to the MSbar scheme, the statistical errors are very sizable, so that the 2.1σ tension with experiment observed is not particularly exciting or disturbing yet.

The second speaker of the morning was Gunnar Bali, who spoke about the topic of renormalons. It is well known that the perturbative series for quantum field theories are in fact divergent asymptotic series, whose typical terms grow like n^k z^n n! at large orders n. Using the Borel transform, such series can be resummed, provided that there are no poles (IR renormalons) of the Borel transform on the positive real axis. In QCD, such poles arise from IR divergences in diagrams with chains of bubbles inserted into gluon lines, as well as from instanton-antiinstanton configurations in the path integral. The latter can be removed to infinity by considering the large-Nc limit, but the former are there to stay, making perturbatively defined quantities ambiguous at higher orders. A relevant example is that of heavy quark masses, where the different definitions (pole mass, MSbar mass, 1S mass, ...) are related by perturbative conversion factors; in a heavy-quark expansion, the mass of a heavy-light meson can be written as M=m+Λ+O(1/m), where m is the heavy quark mass, and Λ a binding energy of the order of some QCD energy scale. As M is unambiguous, the ambiguities in m must correspond to ambiguities in the binding energy Λ, which can be computed to high orders in numerical stochastic perturbation theory (NSPT). After dealing with some complications arising from the fact that IR divergences cannot be probed directly in a finite volume, it is found that the minimum term in the perturbative series (which corresponds to the perturbative ambiguity) is of order 180 MeV in the quenched theory, meaning that heavy quark masses are only defined up to this accuracy. Another example is the gluon condensate (which may be of relevance to the extraction of αs from τ decays), where it is found that the ambiguity is of the same size as the typically quoted result, making the usefulness of this quantity doubtful.
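
(To spell the Borel machinery out: for a series Σ_n c_n α^{n+1} with c_n ~ n^k z^n n!, the Borel transform is B(t) = Σ_n c_n t^n/n!, and the Borel sum is ∫_0^∞ dt e^{-t/α} B(t), which reproduces the series order by order. A renormalon pole at t = 1/z on the positive real axis makes the integral depend on how the pole is circumvented, and the resulting ambiguity is of order e^{-1/(zα)}, i.e. a power of the QCD scale divided by the hard scale.)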

Friday, September 04, 2015

Fundamental Parameters from Lattice QCD, Day Five

The first speaker today was Martin Lüscher, who spoke about revisiting numerical stochastic perturbation theory. The idea behind numerical stochastic perturbation theory is to perform a simulation of a quantum field theory using the Langevin algorithm and to perturbatively expand the fields, which leads to a tower of coupled evolution equations, where only the lowest-order one depends explicitly on the noise, whereas the higher-order ones describe the evolution of the higher-order coefficients as a function of the lower-order ones. In Numerical Stochastic Perturbation Theory (NSPT), the resulting equations are integrated numerically (up to some, possibly rather high, finite order in the coupling), and the average over noises is replaced by a time average. The problems with this approach are that the autocorrelation time diverges as the inverse square of the lattice spacing, and that the extrapolation in the Langevin time step size is difficult to control well. An alternative approach is given by Instantaneous Stochastic Perturbation Theory (ISPT), in which the Langevin time evolution is replaced by the introduction of Gaussian noise sources at the vertices of tree diagrams describing the construction of the perturbative coefficients of the lattice fields. Since there is no free lunch, this approach suffers from power-law divergent statistical errors in the continuum limit, which arise from the way in which power-law divergences that cancel in the mean are shifted around between different orders when computing variances. This does not happen in the Langevin-based approach, because the Langevin theory is renormalizable.
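
(Schematically, in my notation: one writes the Langevin equation ∂_t φ = -δS[φ]/δφ + η with Gaussian noise η, inserts the expansion φ = φ0 + g φ1 + g² φ2 + ..., and matches powers of the coupling g; only the equation for φ0 contains the noise explicitly, while at each higher order one obtains a linear evolution equation for φn driven by products of the lower-order coefficients, and perturbative expectation values are then extracted as Langevin-time averages of the appropriate products of the φn.)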

The second speaker of the morning was Siegfried Bethke of the Particle Data Group, who allowed us a glimpse at the (still preliminary) world average of αs for 2015. In 2013, there were five classes of αs determinations: from lattice QCD, τ decays, deep inelastic scattering, e+e- colliders, and global Z pole fits. Except for the lattice determinations (and the Z pole fits, where there was only one number), these were each preaveraged using the range method -- i.e. taking the mean of the highest and lowest central value as the average, and assigning it an uncertainty of half the difference between them. The lattice results were averaged using a χ²-weighted average. The total average (again a weighted average) was dominated by the lattice results, which in turn were dominated by the latest HPQCD result. For 2015, there have been a number of updates to most of the classes, and there is now a new class of αs determinations from the LHC (of which there is currently only one published, which lies rather low compared to other determinations, and is likely a downward fluctuation). In most cases, the new determinations have barely changed the values and errors of their class. The most significant change is in the field of lattice determinations, where the PDG will change its policy and will no longer perform its own preaverages, taking instead the FLAG average as the lattice result. As a result, the error on the PDG value will increase; its value will also shift down a little, mostly due to the new LHC value.
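
To make the two prescriptions concrete, here is a toy comparison in Python (my own sketch with invented numbers, not the PDG's actual procedure):

    import numpy as np

    def range_average(vals):
        # "Range method": midpoint of the extreme central values, with
        # half their spread as the quoted uncertainty (ignores the errors).
        lo, hi = min(vals), max(vals)
        return 0.5 * (lo + hi), 0.5 * (hi - lo)

    def weighted_average(vals, errs):
        # chi^2-weighted average with the standard error propagation.
        w = 1.0 / np.asarray(errs) ** 2
        return np.sum(w * vals) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

    # four hypothetical alpha_s(mZ) determinations (numbers invented):
    vals = np.array([0.1175, 0.1183, 0.1185, 0.1192])
    errs = np.array([0.0015, 0.0007, 0.0012, 0.0020])

    print(range_average(vals))           # conservative spread-based error
    print(weighted_average(vals, errs))  # dominated by the most precise input

The range method deliberately refuses to let a single precise determination dominate, at the price of an error estimate that does not shrink with the number of inputs.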

The afternoon discussion centered on αs. Roger Horsley gave an overview of the methods used to determine it on the lattice (ghost vertices, the Schrödinger functional, the static energy at short distances, current-current correlators, and small Wilson loops) and reviewed the criteria used by FLAG to assess the quality of a given determination, as well as the averaging procedure used (which uses a more conservative error than a weighted average would give). In the discussion, the point was raised that reliably increasing the precision to the sub-percent level and beyond will likely require not only addressing the scale-setting uncertainties (which are reflected in the different values for r0 obtained by different collaborations and will affect the running of αs), but also the inclusion of QED effects.

Fundamental Parameters from Lattice QCD, Day Four

Today's first speaker was Andreas Jüttner, who reviewed the extraction of the light-quark CKM matrix elements Vud and Vus from lattice simulations. Since leptonic and semileptonic decay widths of kaons and pions are very well measured, the matrix element |Vus| and the ratio |Vus|/|Vud| can be precisely determined if the form factor f+(0) and the ratio of decay constants fK/fπ are precisely predicted from the lattice. To reach the desired level of precision, the isospin-breaking effects from the difference of the up and down quark masses and from electromagnetic interactions will need to be included (they are currently treated in chiral perturbation theory, which may not apply very well in the SU(3) case). Given the required level of precision, full control of all systematics is very important, and the problem of how to properly estimate the associated errors arises, to which different collaborations are offering very different answers. To make the lattice results optimally usable for CKMfitter & Co., one should ideally provide all of the lattice inputs to the CKMfitter fit separately (and not just some combination that happens to have a particularly small error), as well as their correlations (as far as possible).
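
(Schematically, and up to well-known radiative corrections: Γ(K→πℓν) ∝ |Vus f+(0)|², while Γ(K→ℓν)/Γ(π→ℓν) ∝ |Vus/Vud|² (fK/fπ)² times known kinematic factors, so a lattice value of f+(0) or fK/fπ converts the measured widths directly into |Vus| or |Vus|/|Vud|, and combining either with nuclear β-decay determinations of |Vud| allows a test of first-row CKM unitarity.)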

Unfortunately, I had to miss the second talk of the morning, by Xavier García i Tormo on the extraction of αs from the static-quark potential, because our Sonderforschungsbereich (SFB/CRC) is currently up for review for a second funding period, and the local organizers had to be available for questioning by panel members.

Later in the afternoon, I returned to the workshop and joined a very interesting discussion on the topic of averaging in the presence of theoretical uncertainties. The large number of possible choices to be made in that context implies that the somewhat subjective nature of systematic error estimates survives into the averages, rather than being dissolved into a consensus of some sort.

Fundamental Parameters from Lattice QCD, Day Three

Today, our first speaker was Jérôme Charles, who presented new ideas about how to treat data with theoretical uncertainties. The best place to read about this is probably his talk, but I will try to summarize what I understood. The framework is a firmly frequentist approach to statistics, which answers the basic question of how likely the observed data are if a given null hypothesis is true. In such a context, one can consider a theoretical uncertainty as a fixed bias δ of the estimator under consideration (such as a lattice simulation) which survives the limit of infinite statistics. One can then test the null hypothesis that the true value of the observable in question is μ by constructing a test statistic for the estimator being distributed normally with mean μ+δ and standard deviation σ (the statistical error quoted for the result). The p-value of μ then depends on δ, but not on the quoted systematic error Δ. Since the true value of δ is not known, one has to perform a scan over some region Ω, for example the interval Ωn=[-nΔ;nΔ], and take the supremum over this range of δ. One possible extension is to choose Ω adaptively in that a larger range of values needs to be scanned (i.e. a larger true systematic error in comparison to the quoted systematic error is allowed for) for lower p-values; interestingly enough, the resulting curves of p-values are numerically close to what is obtained from a naive Gaussian approach treating the systematic error as a (pseudo-)random variable. For multiple systematic errors, a multidimensional Ω has to be chosen in some way; the most natural choices of a hypercube or a hyperball correspond to adding the errors linearly or in quadrature, respectively. The linear (hypercube) scheme stands out as the only one that guarantees that the systematic error of an average is no smaller than the smallest systematic error of an individual result.
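
A minimal sketch of the construction in Python (my own toy code with invented numbers, not Jérôme's implementation):

    import numpy as np
    from scipy.stats import norm

    def p_value_fixed_bias(x, sigma, mu, delta):
        # Two-sided p-value for H0: true value = mu, assuming the
        # estimator is distributed as N(mu + delta, sigma).
        return 2.0 * norm.sf(abs(x - mu - delta) / sigma)

    def p_value_scan(x, sigma, mu, Delta, n=1.0, npts=201):
        # Supremum of the p-value over the bias range [-n*Delta, n*Delta].
        deltas = np.linspace(-n * Delta, n * Delta, npts)
        return max(p_value_fixed_bias(x, sigma, mu, d) for d in deltas)

    # a result 1.10 +/- 0.02 (stat) with quoted systematic 0.03,
    # tested against the hypothesis mu = 1.00 (numbers invented):
    print(p_value_scan(1.10, 0.02, 1.00, 0.03))

By construction, μ cannot be excluded at all (the p-value is 1) whenever the observed deviation can be ascribed entirely to a bias inside the scanned range.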

The second speaker was Patrick Fritzsch, who gave a nice review of recent lattice determinations of semileptonic heavy-light decays, both the more commonly studied B decays to πℓν and Kℓν, and the decays of the Λb that have recently been investigated by Meinel et al. with the help of LHCb.

In the afternoon, both the CKMfitter collaboration and the FLAG group held meetings.

Tuesday, September 01, 2015

Fundamental Parameters from Lattice QCD, Day Two

This morning, we started with a talk by Taku Izubuchi, who reviewed the lattice efforts relating to the hadronic contributions to the anomalous magnetic moment (g-2) of the muon. While the QED and electroweak contributions to (g-2) are known to great precision, most of the theoretical uncertainty presently comes from the hadronic (i.e. QCD) contributions, of which there are two that are relevant at the present level of precision: the contribution from the hadronic vacuum polarization, which can be inserted into the leading-order QED correction, and the contribution from hadronic light-by-light scattering, which can be inserted between the incoming external photon and the muon line. There are a number of established methods for computing the hadronic vacuum polarization, both phenomenologically using a dispersion relation and the experimental R-ratio, and in lattice field theory by computing the correlator of two vector currents (which can, and needs to, be refined in various ways in order to achieve competitive levels of precision). No such well-established methods exist yet for the light-by-light scattering, which is so far mostly described using models. There are, however, now efforts from a number of different sides to tackle this contribution; Taku mainly presented the approach by the RBC/UKQCD collaboration, which uses stochastic sampling of the internal photon propagators to explicitly compute the diagrams contributing to (g-2). Another approach would be to calculate the four-point amplitude explicitly (which has recently been done for the first time by the Mainz group) and to decompose this into form factors, which can then be integrated to yield the light-by-light scattering contribution to (g-2).

The second talk of the day was given by Petros Dimopoulos, who reviewed lattice determinations of D and B leptonic decays and mixing. For the charm quark, cut-off effects appear to be reasonably well-controlled with present-day lattice spacings and actions, and the most precise lattice results for the D and Ds decay constants claim sub-percent accuracy. For the b quark, effective field theories or extrapolation methods have to be used, which introduces a source of hard-to-assess theoretical uncertainty, but the results obtained from the different approaches generally agree very well amongst themselves. Interestingly, there does not seem to be any noticeable dependence on the number of dynamical flavours in the heavy-quark flavour observables, as Nf=2 and Nf=2+1+1 results agree very well to within the quoted precisions.

In the afternoon, the CKMfitter collaboration split off to hold their own meeting, and the lattice participants met for a few one-on-one or small-group discussions of some topics of interest.

Monday, August 31, 2015

Fundamental Parameters from Lattice QCD, Day One

Greetings from Mainz, where I have the pleasure of covering a meeting for you without having to travel from my usual surroundings (I have already clocked up more miles this year than can be good for my environmental conscience).

Our Scientific Programme (which is the bigger of the two formats of meetings that the Mainz Institute of Theoretical Physics (MITP) hosts, the smaller being Topical Workshops) started off today with two keynote talks summarizing the status and expectations of the FLAG (Flavour Lattice Averaging Group, presented by Tassos Vladikas) and CKMfitter (presented by Sébastien Descotes-Genon) collaborations. Both groups are in some way in the business of performing weighted averages of flavour physics quantities, but of course their backgrounds, rationale and methods are quite different in many regards. I will not attempt to give a line-by-line summary of the talks or the afternoon discussion session here, but instead just summarize a few points that caused lively discussions or seemed important in some other way.

By now, computational resources have reached the point where we can achieve such statistics that the total error on many lattice determinations of precision quantities is completely dominated by systematics (and indeed different groups would differ at the several-σ level if one were to consider only their statistical errors). This may sound good in a way (because it is what you'd expect in the limit of infinite statistics), but it is also very problematic, because the estimation of systematic errors is in the end really more of an art than a science, having a crucial subjective component at its heart. This means not only that systematic errors quoted by different groups may not be readily comparable, but also that it becomes important how to treat systematic errors (which may also be correlated, if e.g. two groups use the same one-loop renormalization constants) when averaging different results. How to do this is again subject to subjective choices to some extent. FLAG imposes cuts on quantities relating to the most important sources of systematic error (lattice spacings, pion mass, spatial volume) to select acceptable ensembles, then adds the statistical and systematic errors in quadrature, before performing a weighted average and computing the overall error taking correlations between different results into account using Schmelling's procedure. CKMfitter, on the other hand, adds all systematic errors linearly, and uses the Rfit procedure to perform a maximum likelihood fit. Either choice is equally permissible, but they are not directly compatible (so CKMfitter can't use FLAG averages as such).
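
As a toy illustration of how differently the two treatments weight a given result (Python; my own reading of the prescriptions, not official code of either group):

    import numpy as np

    def flag_likelihood(x, x0, stat, syst):
        # FLAG-style input: statistical and systematic errors added in
        # quadrature, giving a single Gaussian likelihood.
        s = np.hypot(stat, syst)
        return np.exp(-0.5 * ((x - x0) / s) ** 2)

    def rfit_likelihood(x, x0, stat, syst):
        # Rfit-style input: the systematic error defines a flat plateau
        # of half-width syst, with purely statistical Gaussian tails.
        d = max(abs(x - x0) - syst, 0.0)
        return np.exp(-0.5 * (d / stat) ** 2)

The flat plateau is what makes Rfit inputs incompatible with quadrature-combined averages: the information about how the total error splits into statistical and systematic parts cannot be recovered from the combined number.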

Another point raised was that it is important for lattice collaborations computing mixing parameters to not just provide products like fB√BB, but also fB and BB separately (as well as information about the correlation between these quantities) in order to make the global CKM fits easier.

Saturday, July 18, 2015

LATTICE 2015, Day Five

In a marked deviation from the "standard programme" of the lattice conference series, Saturday started off with parallel sessions, one of which featured my own talk.

The lunch break was relatively early, therefore, but first we all assembled in the plenary hall for the conference group photo (a new addition to the traditions of the lattice conference), which was followed by the afternoon plenary sessions. The first of these was devoted to finite temperature and density, and started with Harvey Meyer giving the review talk on finite-temperature lattice QCD. The thermodynamic properties of QCD are by now relatively well-known: the transition temperature is agreed to be around 155 MeV, chiral symmetry restoration and the deconfinement transition coincide (as well as this can be defined in the case of a crossover), and the number of degrees of freedom is compatible with a plasma of quarks and gluons above the transition, but the thermodynamic potentials approach the Stefan-Boltzmann limit only slowly, indicating that there are strong correlations in the medium. Below the transition, the hadron resonance gas model describes the data well. The Columbia plot describing the nature of the transition as a function of the light and strange quark masses is being further solidified: the size of the lower-left hand corner first-order region is being measured, and the nature of the left-hand border (most likely O(4) second-order) is being explored. Beyond these static properties, real-time properties are beginning to be studied through the finite-temperature spectral functions. One interesting point was that there is a difference between the screening masses (spatial correlation lengths) and quasiparticle masses (from the spectral function) in any given channel, which may even tend in opposite directions as functions of the temperature (as seen for the pion channel).

Next, Szabolcs Borsanyi spoke about fluctuations of conserved charges at finite temperature and density. While the sum of all outgoing conserved charges in a collision must of course equal the sum of the ingoing ones, when considering a subvolume of the fireball, this can be best described in the grand canonical ensemble, as charges can move into and out of the subvolume. The quark number susceptibilities are then related to the fluctuating phase of the fermionic determinant. The methods being used to avoid the sign problem include Taylor expansions, fugacity expansions and simulations at imaginary chemical potential, all with their own strengths and weaknesses. Fluctuations can be used as a thermometer to measure the freeze-out temperature.
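
(Schematically: the pressure is Taylor-expanded as p/T⁴ = Σ_n (χn/n!) (μ/T)^n, where the coefficients χn (the generalized quark number susceptibilities) are derivatives of log Z evaluated at μ = 0, where there is no sign problem; the same susceptibilities also describe the fluctuations of the conserved charges in the grand canonical ensemble.)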

Lastly, Luigi Scorzato reviewed the Lefschetz thimble, which may be a way out of the sign problem (e.g. at finite chemical potential). The Lefschetz thimble is a higher-dimensional generalization of the concept of steepest-descent integration, in which the integral of e^{-S(z)} for complex S(z) is evaluated by finding the stationary points of S and integrating along the curves passing through them along which the imaginary part of S is constant. On such Lefschetz thimbles, a Langevin algorithm can be defined, allowing for a Monte Carlo evaluation of the path integral in terms of Lefschetz thimbles. In quantum-mechanical toy models, this seems to work already, and there appears to be hope that this might be a way to avoid the sign problem of finite-density QCD.
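
(In formulas, as I understand it: the original contour is deformed into a sum over thimbles, ∫ dz e^{-S(z)} = Σ_σ n_σ e^{-i Im S(z_σ)} ∫_{J_σ} dz e^{-Re S(z)}, where the z_σ are the stationary points, J_σ the attached thimbles, and the n_σ integer intersection numbers; since Im S is constant on each thimble, the only remaining phase is the residual one coming from the curvature of the thimble, which is expected to be much milder than the original sign problem.)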

After the coffee break, the last plenary session turned to physics beyond the Standard Model. Daisuke Kadoh reviewed the progress in putting supersymmetry onto the lattice, which is still a difficult problem due to the fact that the finite differences which replace derivatives on a lattice do not respect the Leibniz rule, introducing SUSY-breaking terms when discretizing. The ways past this are either imposing exact lattice supersymmetries or fine-tuning the theory so as to remove the SUSY-breaking in the continuum limit. Some theories in both two and four dimensions have been simulated successfully, including N=1 Super-Yang-Mills theory in four dimensions. Given that there is no evidence for SUSY in nature, lattice SUSY is of interest especially for the purpose of verifying the ideas of gauge-gravity duality from the Super-Yang-Mills side, and in one and two dimensions, agreement with the predictions from gauge-gravity duality has been found.

The final plenary speaker was Anna Hasenfratz, who reviewed Beyond-the-Standard-Model calculations in technicolor-like theories. If the Higgs is to be a composite particle, there must be some spontaneously broken symmetry that keeps it light, either a flavour symmetry (pions) or a scale symmetry (dilaton). There are in fact a number of models that have a light scalar particle, but the extrapolation of these theories is rendered difficult by the fact that this scalar is (and for phenomenologically interesting models would have to be) lighter than the (techni-)pion, and thus the usual formalism of chiral perturbation theory may not work. Many models of strong BSM interactions have been and are being studied using a large number of different methods, with not always conclusive results. A point raised towards the end of the talk was that for theories with a conformal IR fixed-point, universality might be violated (and there are some indications that e.g. Wilson and staggered fermions seem to give qualitatively different behaviour for the beta function in such cases).

The conference ended with some well-deserved applause for the organizing team, who really ran the conference very smoothly even in the face of a typhoon. Next year's lattice conference will take place in Southampton (England/UK) from 24th to 30th July 2016. Lattice 2017 will take place in Granada (Spain).

Friday, July 17, 2015

LATTICE 2015, Days Three and Four

Due to the one-day shift of the entire conference programme relative to other years, Thursday instead of Wednesday was the short day. In the morning, there were parallel sessions. The most remarkable thing to be reported from those (from my point of view) is that MILC are generating a=0.03 fm lattices now, which handily beats the record for the finest lattice spacing; they are observing some problems with the tunnelling of the topological charge at such fine lattice spacings, but appear hopeful that these ensembles will nevertheless be useful.

After the lunch break, excursions were offered. I took the trip to Himeji to see Himeji Castle, a very remarkable five-story wooden building that due to its white exterior is also known as the "White Heron Castle". During the trip, typhoon Nangka approached, so the rains cut our enjoyment of the castle park a bit short (though seeing koi in a pond with the rain falling into it had a certain special appeal to it, the enjoyment of which I in my Western ignorance suppose might be considered a form of Japanese wabi aesthetics).

As the typhoon resolved into a rainstorm, the programme wasn't cancelled or changed, and so today's plenary programme started with a talk on some formal developments in QFT by Mithat Ünsal, who reviewed trans-series, Lefschetz thimbles, and Borel summability as different sides of the same coin. I'm far too ignorant of these more formal field theory topics to do them justice, so I won't try a detailed summary. Essentially, it appears that the expansion of certain theories around the saddle points corresponding to instantons is determined by their expansion around the trivial vacuum, and the ambiguities arising in the Borel resummation of perturbative series when the Borel transform has a pole on the positive real axis can in some way be connected to this phenomenon, which may allow for a way to resolve the ambiguities.

Next, Francesco Sannino spoke about the "bright, dark, and safe" sides of the lattice. The bright side referred to the study of visible matter, in particular to the study of technicolor models as a way of implementing the spontaneous breaking of electroweak symmetry, without the need for a fundamental scalar introducing numerous tunable parameters, and with the added benefits of removing the hierarchy problem and the problem of φ⁴ triviality. The dark side referred to the study of dark matter in the context of composite dark matter theories, where one should remember that if the visible 5% of the mass of the universe require three gauge groups for their description, the remaining 95% are unlikely to be described by a single dark matter particle and a homogeneous dark energy. The safe side referred to the very current idea of asymptotic safety, which is of interest especially in quantum gravity, but might also apply to some extension of the Standard Model, making it valid at all energy scales.

After the coffee break, the traditional experimental talk was given by Toru Iijima of the Belle II collaboration. The Belle II detector is now beginning commissioning at the upcoming SuperKEKB accelerator, which will provide greatly improved luminosity, allowing for precise tests of the Standard Model in the flavour sector. In this, Belle II will be complementary to LHCb, because it will have far lower backgrounds allowing for precision measurements of rare processes, while not being able to access as high energies. Most of the measurements planned at Belle II will require lattice inputs to interpret, so there is a challenge to our community to come up with sufficiently precise and reliable predictions for all required flavour observables. Besides quark flavour physics, Belle II will also search for lepton flavour violation in τ decays, try to improve the phenomenological prediction for (g-2)μ by measuring the cross section for e+e- -> hadrons more precisely, and search for exotic charmonium- and bottomonium-like states.

Closely related was the next talk, a review of progress in heavy flavour physics on the lattice given by Carlos Pena. While simulations of relativistic b quarks at the physical mass will become a possibility in the not-too-distant future, for the time being heavy-quark physics is still dominated by the use of effective theories (HQET and NRQCD) and methods based either on appropriate extrapolations from the charm quark mass region, or on the Fermilab formalism, which is sort of in-between. For the leptonic decay constants of heavy-light mesons, there are now results from all formalisms, which generally agree very well with each other, indicating good reliability. For the semileptonic form factors, there has been a lot of development recently, but to obtain precision at the 1% level, good control of all systematics is needed, and this includes the momentum-dependence of the form factors. The z-expansion, and extended versions thereof allowing for simultaneous extrapolation in the pion mass and lattice spacing, have the advantage of allowing for a test of their convergence properties by checking the unitarity bound on their coefficients.
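
(For reference, the conformal variable in question is z(q², t0) = (√(t+ - q²) - √(t+ - t0)) / (√(t+ - q²) + √(t+ - t0)), where t+ is the pair-production threshold and t0 a free parameter; this maps the whole semileptonic region onto a small interval around z = 0, so that the form factor can be written as a rapidly converging power series in z, with unitarity bounding the size of the coefficients.)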

After the coffee break, there were parallel sessions again. In the evening, the conference banquet took place. Interestingly, the (excellent) food was not Japanese, but European (albeit with a slight Japanese twist in seasoning and presentation).

Wednesday, July 15, 2015

LATTICE 2015, Day Two

Hello again from Lattice 2015 in Kobe. Today's first plenary session began with a review talk on hadronic structure calculations on the lattice given by James Zanotti. James did an excellent job summarizing the manifold activities in this core area of lattice QCD, which is also of crucial phenomenological importance given situations such as the proton radius puzzle. It is now generally agreed that excited-state effects are one of the more important issues facing hadron structure calculations, especially in the nucleon sector, and that these (possibly together with finite-volume effects) are likely responsible for the observed discrepancies between theory and experiment for quantities such as the axial charge of the nucleon. Many groups are studying the charges and form factors of the nucleon, and some have moved on to more complicated quantities, such as transverse momentum distributions. Newer ideas in the field include the use of the Feynman-Hellmann theorem to access quantities that are difficult to access through the traditional three-point-over-two-point ratio method, such as form factors at very high momentum transfer, and quantities with disconnected diagrams (such as nucleon strangeness form factors).

Next was a review of progress in light flavour physics by Andreas Jüttner, who likewise gave an excellent overview of this also phenomenologically very important core field. Besides the "standard" quantities, such as the leptonic pion and kaon decay constants and the semileptonic K-to-pi form factors, more difficult light-flavour quantities are now being calculated, including the bag parameter BK and other quantities related to both Standard Model and BSM neutral kaon mixing, which require the incorporation of long-distance effects, including those from charm quarks. Given the emergence of lattice ensembles at the physical pion mass, the analysis strategies of groups are beginning to change, with the importance of global ChPT fits receding. Nevertheless, the lattice remains important in determining the low-energy constants of Chiral Perturbation Theory. Some groups are also using newer theoretical developments to study quantities once believed to be outside the purview of lattice QCD, such as final-state photon corrections to meson decays, or the timelike pion form factor.

After the coffee break, the Ken Wilson Award for Excellence in Lattice Field Theory was announced. The award goes to Stefan Meinel for his substantial and timely contributions to our understanding of the physics of the bottom quark using lattice QCD. In his acceptance talk, Stefan reviewed his recent work on determining |Vub|/|Vcb| from decays of Λb baryons measured by the LHCb collaboration. There has long been a discrepancy between the inclusive and exclusive (from B -> πlν) determinations of Vub, which might conceivably be due to a new (BSM) right-handed coupling. Since LHCb measures the decay widths for Λb to both pμν and Λcμν, combining these with lattice determinations of the corresponding Λb form factors allows for a precise determination of |Vub|/|Vcb|. The results agree well with the exclusive determination from B -> πlν, and fully agree with CKM unitarity. There are, however, still other channels (such as b -> sμ+μ- and b -> cτν) in which there is still potential for new physics, and LHCb measurements are pending.

This was followed by a talk by Maxwell T. Hansen (now a postdoc at Mainz) on three-body observables from lattice QCD. The well-known Lüscher method relates two-body scattering amplitudes to the two-body energy levels in a finite volume. The basic steps in the derivation are to express the full momentum-space propagator in terms of a skeleton expansion involving the two-particle irreducible Bethe-Salpeter kernel, to express the difference between the two-particle reducible loops in finite and infinite volume in terms of two-particle cuts, and to reorganize the skeleton expansion by the number of cuts to reveal that the poles of the propagator (i.e. the energy levels) in finite volume are related to the scattering matrix. For three-particle systems, the skeleton expansion becomes more complicated, since there can now be situations involving two-particle interactions and a spectator particle, and intermediate lines can go on-shell between different two-particle interactions. Treating a number of other technical issues such as cusps, Max and collaborators have been able to derive a Lüscher-like formula for three-body scattering in the case of scalar particles with a Z2 symmetry forbidding 2-to-3 couplings. Various generalizations remain to be explored.
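
(For orientation: the two-body quantization condition can be written compactly as det[F⁻¹(E, L) + M(E)] = 0, with M the infinite-volume scattering amplitude and F a known finite-volume kinematic function; the three-body generalization derived by Max and collaborators has the same overall structure, but involves in addition a divergence-free three-body K-matrix, which must then be related to the physical three-body amplitude in a second step.)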

The day's plenary programme ended with a talk on the Standard Model prediction for direct CP violation in K-> ππ decays by Christopher Kelly. This has been an enormous effort by the RBC/UKQCD collaboration, who have shown that the ΔI=1/2 rule comes from low-energy QCD by way of strong cancellations between the dominant contributions, and have determined ε' from the lattice for the first time. This required the generation of ensembles with an unusual set of boundary conditions (G-parity boundary conditions on the quarks, requiring complex conjugation boundary conditions on the gauge fields) in space to enforce a moving pion ground state, as well as the precise evaluation of difficult disconnected diagrams using low modes and stochastic estimators, and treatment of finite-volume effects in the Lellouch-Lüscher formalism. Putting all of this together with the non-perturbative renormalization (in the RI-sMOM scheme) of ten operators in the electroweak Hamiltonian gives a result which currently still has three times the experimental error, but is systematically improvable, with better-than-experimental precision expected in maybe five years.

In the afternoon there were parallel sessions again, and in the evening, the poster session took place. Food ran out early, but it was pleasant to see free-form smearing being improved upon and used to very good effect by Randy Lewis, Richard Woloshyn and students.

Tuesday, July 14, 2015

LATTICE 2015, Day One

Hello from Kobe, where I am attending the Lattice 2015 conference. The trip here was uneventful, as was the jetlag-day.

The conference started yesterday evening with a reception in the Kobe Animal Kingdom (there were no animals when we were there, though, with the exception of some fish in a pond and some cats in a cage, but there were a lot of plants).

Today, the scientific programme began with the first plenary session. After a welcome address by Akira Ukawa, who reminded us of the previous lattice meetings held in Japan and the tremendous progress the field has made in the intervening twelve years, Leonardo Giusti gave the first plenary talk, speaking about recent progress on chiral symmetry breaking. Lattice results have confirmed the proportionality of the square of the pion mass to the quark mass (i.e. the Gell-Mann-Oakes-Renner (GMOR) relation, a hallmark of chiral symmetry breaking) very accurately for a long time. Another relation involving the chiral condensate is the Banks-Casher relation, which relates it to the eigenvalue density of the Dirac operator at zero. It can be shown that the eigenvalue density is renormalizable, and that thus the mode number in a given interval is renormalization-group invariant. Two recent lattice studies, one with twisted-mass fermions and one with O(a)-improved Wilson fermions, confirm the Banks-Casher relation, with the chiral condensates found agreeing very well with those inferred from GMOR. Another relation is the Witten-Veneziano relation, which relates the η' mass to the topological susceptibility, thus explaining how precisely the η' is not a Goldstone boson. The topological charge on the lattice can be defined through the index of the Neuberger operator or through a chain of spectral projectors, but a recently invented and much cheaper definition is through the topological charge density at finite flow time in Lüscher's Wilson flow formalism. The renormalization properties of the Wilson flow allow for a derivation of the universality of the topological susceptibility, and numerical tests using all three definitions indeed agree within errors in the continuum limit. Higher cumulants determined in the Wilson flow formalism agree with large-Nc predictions in pure Yang-Mills, and the suppression of the topological susceptibility in QCD relative to the pure Yang-Mills case is in line with expectations (which in principle can be considered an a posteriori determination of Nf in agreement with the value used in simulations).
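
(For reference: the GMOR relation reads fπ² mπ² = (mu + md) Σ + higher orders in the quark masses, while the Banks-Casher relation states Σ = π ρ(0), with ρ(λ) the spectral density of the Dirac operator and Σ the chiral condensate; the renormalizability of ρ is what makes the mode number (the integral of ρ over a given spectral interval) a renormalization-group invariant that can be used to extract Σ.)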

The next speaker was Yu Nakayama, who talked about a related topic, namely the determination of the chiral phase transition in QCD from the conformal bootstrap. The chiral phase transition can be studied in the framework of a Landau effective theory in three dimensions. While the mean-field theory predicts a second-order phase transition in the O(4) universality class, one-loop perturbation theory in 4-ε dimensions predicts a first-order phase transition at ε=1. Making use of the conformal symmetry of the effective theory, one can apply the conformal bootstrap method, which combines an OPE with crossing relations to obtain results for critical exponents, and the results from this method suggest that the phase transition is in fact of second order. This also agrees with many lattice studies, but others disagree. The role of the anomalously broken U(1)A symmetry in this analysis appears to be unclear.

After the coffee break, Tatsumi Aoyama, a long-time collaborator in the heroic efforts of Kinoshita to calculate the four- and five-loop QED contributions to the electron and muon anomalous moments, gave a plenary talk on the determination of the QED contribution to lepton (g-2). For likely readers of this blog, the importance of (g-2) is unlikely to require an explanation: the current 3σ tension between theory and experiment for (g-2)μ is the strongest hint of physics beyond the Standard Model so far, and since the largest uncertainties on the theory side are hadronic, lattice QCD is challenged to either resolve the tension or improve the accuracy of the predictions to the point where the tension becomes an unambiguous, albeit indirect, discovery of new physics. The QED calculations are on the face of it simpler, being straightforward Feynman diagram evaluations. However, the number of Feynman diagrams grows so quickly at higher orders that automated methods are required. In fact, in a first step, the number of Feynman diagrams is reduced by using the Ward-Takahashi identity to relate the vertex diagrams relevant to (g-2) to self-energy diagrams, which are then subjected to an automated renormalization procedure using the Zimmermann forest formula. In a similar way, infrared divergences are subtracted using a more complicated "annotated forest"-formula (there are two kinds of IR subtractions needed, so the subdiagrams in a forest need to be labelled with the kind of subtraction). The resulting UV- and IR-finite integrands are then integrated using VEGAS in Feynman parameter space. In order to maintain the required precision, quadruple-precision floating-point numbers (or an emulation thereof) must be used. Whether these methods could cope with the six-loop QED contribution is not clear, but with the current and projected experimental errors, that contribution will not be required for the foreseeable future, anyway.

This was followed by another (g-2)-related plenary, with Taku Izubuchi speaking about the determination of anomalous magnetic moments and nucleon electric dipole moments in QCD. In particular the anomalous magnetic moment has become such an active topic recently that the time barely sufficed to review all of the activity in this field, which ranges from different approaches to parameterizing the momentum dependence of the hadronic vacuum polarization, through clever schemes to reduce the noise by subtracting zero-momentum contributions, to new ways of extracting the vacuum polarization through the use of background magnetic fields, as well as simulations of QCD+QED on the lattice. Among the most important problems are finite-volume effects.

After the lunch break, there were parallel sessions in the afternoon. I got to chair the first session on hadron structure, which was devoted to determinations of hadronic contributions to (g-2)μ.

After the coffee break, there were more parallel sessions, another complete one of which was devoted to (g-2) and closely-related topics. A talk deserving to be highlighted was given by Jeremy Green, who spoke about the first direct calculation of the hadronic light-by-light scattering amplitude from lattice QCD.

Friday, April 10, 2015

Workshop "Fundamental Parameters from Lattice QCD" at MITP (upcoming deadline)

Recent years have seen a significant increase in the overall accuracy of lattice QCD calculations of various hadronic observables. Results for quark and hadron masses, decay constants, form factors, the strong coupling constant and many other quantities are becoming increasingly important for testing the validity of the Standard Model. Prominent examples include calculations of Standard Model parameters, such as quark masses and the strong coupling constant, as well as the determination of CKM matrix elements, which is based on a variety of input quantities from experiment and theory. In order to make lattice QCD calculations more accessible to the entire particle physics community, several initiatives and working groups have sprung up, which collect the available lattice results and produce global averages.

The scientific programme "Fundamental Parameters from Lattice QCD" at the Mainz Institute of Theoretical Physics (MITP) is designed to bring together lattice practitioners with members of the phenomenological and experimental communities who are using lattice estimates as input for phenomenological studies. In addition to sharing the expertise among several communities, the aim of the programme is to identify key quantities which allow for tests of the CKM paradigm with greater accuracy and to discuss the procedures in order to arrive at more reliable global estimates.

The deadline for registration is Wednesday, 15 April 2015. Please register at this link.

Thursday, March 12, 2015

QNP 2015, Day Five

Apologies for the delay in posting this. Travel and jetlag kept me from attending to it earlier.

The first talk today was by Guy de Teramond, who described applications of light-front superconformal quantum mechanics to hadronic physics. I have to admit that I couldn't fully take in all the details, but as far as I understood, an isomorphism between AdS2 and the conformal group in one dimension can be used to derive a form of the light-front Hamiltonian for mesons from an AdS/QCD correspondence, in which the dilaton field is fixed to be φ(z) = (1/2)z² by the requirement of conformal invariance, and a similar construction in the superconformal case leads to a light-front Hamiltonian for baryons. A relationship between the Regge trajectories for mesons and baryons can then be interpreted as a form of supersymmetry in this framework.

Next was Beatriz Gay Ducati with a review of the phenomenology of heavy quarks in nuclear matter, a topic where there are still many open issues. The photoproduction of quarkonia on nucleons and nuclei makes it possible to probe the gluon distribution, since the dominant production process is photon-gluon fusion, but to be able to interpret the data, many nuclear matter effects need to be understood.

After the coffee break, this was followed by a talk by Hrayr Matevosyan on transverse momentum distributions (TMDs), which are complementary to GPDs in the sense of being obtained by integrating out other variables starting from the full Wigner distributions. Here, again, there are many open issues, such as the Sivers, Collins or Boer-Mulders effects.

The next speaker was Raju Venugopalan, who spoke about two outstanding problems in QCD at high parton densities, namely the question of how the systems created in heavy-ion collisions thermalise, and the phenomenon of "the ridge" in proton-nucleus collisions, which would seem to suggest hydrodynamic behaviour in a system that is too small to be understood as a liquid. Both problems may have to do with the structure of the dense initial state, which is theorised to be a colour-glass condensate or "glasma", and the way in which it evolves into a more dilute system.

After the lunch break, Sonny Mantry reviewed some recent advances made in applying Soft-Collinear Effective Theory (SCET) to a range of questions in strong-interaction physics. SCET is the effective field theory obtained when QCD fluctuations around a hard particle momentum are considered to be small and a corresponding expansion (analogous to the 1/m expansion in HQET) is made. SCET has been successfully applied to many different problems; an interesting and important one is the problem of relating the "Monte Carlo mass" usually quoted for the top quark to the top quark mass in a more well-defined scheme such as MSbar.

The last talk in the plenary programme was a review of the Electron-Ion Collider (EIC) project by Zein-Eddine Meziani. By combining the precision obtainable using an electron beam with the access to the gluon-dominated regime provided by a heavy-ion beam, as well as the ability to study the nucleon spin using a polarised nucleon beam, the EIC will enable a much more in-depth study of many of the still unresolved questions in QCD, such as the nucleon spin structure and colour distributions. There are currently two competing designs, the eRHIC at Brookhaven, and the MEIC at Jefferson Lab.

Before the conference closed, Michel Garçon announced that the next conference of the series (QNP 2018) will be held in Japan (either in Tsukuba or in Mito, Ibaraki prefecture). The local organising committee and conference office staff received some well-deserved applause for a very smoothly-run conference, and the scientific part of the conference programme was adjourned.

As it was still in the afternoon, I went with some colleagues to visit La Sebastiana, the house of Pablo Neruda in Valparaíso, taking one of the city's famous ascensores down (although up might have been more convenient, as the streets get very steep) before walking back to Viña del Mar along the sea coast.

The next day, there was an organised excursion to a vineyard in the Casablanca valley, where we got to taste some very good Chilean wines (some of the them matured in traditional clay vats) and liqueurs with a very pleasant lunch.

I got to spend another day in Valparaíso before travelling back (a happily uneventful, if again rather long trip).

Friday, March 06, 2015

QNP 2015, Day Four

The first talk today was a review of experimental results in light-baryon spectroscopy by Volker Credé. While much progress has been made in this field, in particular in the design of so-called complete experiments, which as far as I understand measure multiple observables to unambiguously extract a complete description of the amplitudes for a certain process, there still seem to be surprisingly many unknowns. In particular, the fits to pion photoproduction in doubly-polarised processes seem to disagree strongly between different descriptions (such as MAID).

Next was Derek Leinweber with a review of light hadron spectroscopy from the lattice. The de facto standard method in this field is the variational method (GEVP), although there are some notable differences in how precisely different groups apply it (e.g. solving the GEVP at many times and fitting the eigenvalues vs. forming projected correlators with the eigenvectors of the GEVP solved at a single time -- there are proofs of good properties for the former that don't exist for the latter). The bases of operators used for the GEVP also differ quite a bit between groups, ranging from simply using different levels of quark field smearing to intricate group-theoretic constructions of multi-site operators. There are also attempts to determine how much information can be extracted from a given set of correlators, e.g. recently by the Cyprus/Athens group using Monte Carlo simulations to probe the space of fitting parameters (a loosely related older idea based on evolutionary fits wasn't mentioned).
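
For readers unfamiliar with the method, here is a minimal sketch of the GEVP step in Python (my own toy code, not any group's analysis package):

    import numpy as np
    from scipy.linalg import eigh

    def gevp_effective_masses(C, t0, t, dt=1):
        # C maps a timeslice to the NxN real symmetric correlator matrix.
        # Solve C(t) v = lambda C(t0) v; the eigenvalues behave like
        # exp(-E_n (t - t0)), so a log ratio at neighbouring times gives
        # effective masses for the lowest N states.
        lam_t = eigh(C[t], C[t0], eigvals_only=True)
        lam_tdt = eigh(C[t + dt], C[t0], eigvals_only=True)
        # eigh returns eigenvalues in ascending order; matching levels by
        # order is naive, but adequate for this noise-free illustration.
        return np.log(lam_t / lam_tdt) / dt

    # toy two-state model (numbers invented): C(t) = Z diag(e^{-E t}) Z^T
    E = np.array([0.5, 0.9])
    Z = np.array([[1.0, 0.4], [0.3, 1.0]])
    C = {t: Z @ np.diag(np.exp(-E * t)) @ Z.T for t in range(12)}
    print(gevp_effective_masses(C, t0=2, t=5))  # recovers 0.9 and 0.5

In real data one would of course solve at many t (or project with fixed eigenvectors, as discussed above) and fit, rather than quote a single log ratio.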

This was followed by a talk by Susan Gardner about testing fundamental symmetries with quarks. While we know that there must be physics beyond the Standard Model (because the SM does not explain dark matter, nor does it provide enough CP violation to explain the observed baryon asymmetry), there is so far no direct evidence of any BSM particle. Low-energy tests of the SM fall into two broad categories: null tests (where the SM predicts an exact null result, as for violations of B-L) and precision tests (where the SM prediction can be calculated to very high accuracy, as for (g-2)μ). Null tests play an important role in so far as they can be used to impose a lower limit for the BSM mass scale, but many of them are atomic or nuclear tests, which have complicated theory errors. The currently largest tensions indicating a possible failure of the Standard Model to describe all observations are the proton radius puzzle, and (g-2)μ. A possible explanation of either or both of those in terms of a "dark photon" is on the verge of being ruled out, however, since most of the relevant part of the mass/coupling plane has already been excluded by dark photon searches, and the rest of it will soon be (or else the dark photon will be discovered). Other tests in the hadronic sector, which seem to be less advanced so far, are the search for non-(V-A) terms in β-decays, and the search for neutron-antineutron oscillations.

After the coffee break and the official conference photo, Isaac Vidaña took the audience on a "half-hour walk through the physics of neutron stars". Neutron stars are both almost-black holes (whose gravitation must be described in General Relativity) and extremely massive nuclei (whose internal dynamics must be described using QCD). Observations of binary pulsars make it possible to determine the masses of neutron stars, which are found to range up to at least two solar masses. However, the Tolman-Oppenheimer-Volkov equations for the stability of neutron stars lead to a maximum mass for a neutron star that depends on the equation of state of the nuclear medium. The observed masses severely constrain the equation of state and in particular seem to exclude models in which hyperons play an important role; however, it seems to be generally agreed that hyperons must play an important role in neutron stars, leading to a "hyperon puzzle", the solution of which will require an improved understanding of the structure and interactions of hyperons.
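
(For reference, the TOV equations in units with G = c = 1 read dP/dr = -(ε + P)(m + 4πr³P)/(r(r - 2m)) and dm/dr = 4πr²ε, and they close once an equation of state P(ε) is specified; a softer equation of state, such as one in which hyperons appear at high density, supports a lower maximum mass, which is the origin of the puzzle.)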

The last plenary speaker of the day was Stanley Brodsky with the newest developments from light-front holography. The light-front approach, which has in the past been very successful in (1+1)-dimensional QCD, is based on the front form of the Hamiltonian formalism, in which a light-like, rather than a timelike, direction is chosen as the normal defining the Cauchy surfaces on which initial data are specified. In the light-front Hamiltonian approach, the vacuum of QCD is trivial and the Hilbert space can be constructed as a straightforward Fock space. With some additional ansätze taken from AdS/CFT ideas, QCD is reduced to a Schrödinger-like equation for the light-cone wavefunctions, from which observables are extracted. Apparently, all known observations are described perfectly in this approach, but (as for the Dyson-Schwinger or straight AdS/QCD approaches) I do not understand how systematic errors are supposed to be quantified.

In the afternoon there were parallel talks. An interesting contribution was given by Mainz PhD student Franziska Hagelstein, who demonstrated how even a very small non-monotonicity in the electric form factor at low Q2 (where there are no ep scattering data) could explain the difference between the muonic and electronic hydrogen results for the proton radius.
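
To put this into context: the proton charge radius is defined through the slope of the electric form factor at vanishing momentum transfer,

    r_E^2 = -6 \left.\frac{dG_E(Q^2)}{dQ^2}\right|_{Q^2=0}

so any structure in G_E below the lowest Q2 covered by the scattering data feeds directly into the extrapolated slope.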

The conference banquet took place in the evening at a very nice restaurant, and fun was had over cocktails and an excellent dinner.

Thursday, March 05, 2015

QNP 2015, Day Three

Today began with a talk by Mikhail Voloshin on QCD sum rules and heavy-quark states. The idea of exploiting quark-hadron duality to link perturbatively calculable current-current correlators to hadronic observables and extract mesonic decay constants or quark masses is quite old, but has received a boost in recent years with the advent of three- and four-loop perturbative calculations, in particular from Chetyrkin and collaborators, which have also been used in conjunction with lattice results, e.g. by the HPQCD collaboration.
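
Schematically, one widely used variant rests on moments of the vector current-current correlator: with R(s) the usual R-ratio and Π(q2) the vacuum polarisation (normalisation conventions vary), quark-hadron duality gives

    \mathcal{M}_n = \int_0^\infty \frac{ds}{s^{n+1}}\, R(s) \propto \left.\left(\frac{d}{dq^2}\right)^{\!n} \Pi(q^2)\right|_{q^2=0}

where the right-hand side is computable in perturbation theory (or measurable on the lattice) and depends on the heavy-quark mass and αs, which can thus be extracted.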

A review of hadron spectroscopy at B factories (including LHCb) by Roberto Mussa followed. The charmonium and bottomonium spectra are now measured in great detail, with recent additions being 1D and 3P states. More states are also being discovered in the heavy-light sector (where the Bc(2S) has recently been discovered at ATLAS) and in the heavy-quark baryon sector (where the most recent discovery was the Ξb), and many more transitions are being discovered and studied.

The next speaker was Raphaël Dupré, who spoke about colour propagation and neutralisation in strongly interacting systems. The idea here appears to be that in hadronisation processes, quarks first lose energy by radiating gluons and thus turn into colourless pre-hadrons, which then bind into hadrons on a longer timescale; there seems to be experimental evidence supporting this energy-loss model.

After the coffee break, Javier Castillo reviewed quarkonium suppression and regeneration in heavy-ion collisions. Quarkonia are generally considered important probes of the quark-gluon plasma, because the production of heavy quark-antiquark pairs is a perturbative process that happens at high energies early in the collision, while their binding is non-perturbative and is expected to be suppressed by Debye screening in the coloured plasma. As a consequence, more tightly bound quarkonia, like the Y(1S), can survive at higher temperatures, while the less tightly bound charmonia or Y(3S) states will "melt" already at lower temperatures. However, quarkonia can also be regenerated by thermalised heavy quarks rejoining into quarkonia at the phase boundary. Experimental data support the screening picture, with the J/ψ being more suppressed at the LHC than at STAR (because of the higher temperature), the Y(2S) more suppressed than the Y(1S), and transport models with a negligible regeneration component describing the data well. The regeneration component increases at low pT, and the elliptic flow of the charm quarks is inherited by the regenerated J/ψ mesons. Some less well-understood effects of the nuclear environment, known as Cold Nuclear Matter (CNM) effects, are beginning to be seen in the data.
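
Schematically, the screened potential between a heavy quark and antiquark in the plasma behaves like

    V(r,T) \sim -\frac{\alpha_{\rm eff}}{r}\, e^{-m_D(T)\, r}

with a Debye mass m_D(T) that grows with temperature, so a given quarkonium state is expected to melt once the screening length 1/m_D falls below its radius.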

Next was Zoltan Fodor with a talk about lattice QCD results at zero and finite temperature from the BMW collaboration. By simulating QCD+QED with 1+1+1+1 flavours of dynamical quarks, BMW have been able to determine the isospin splittings of the nucleon and other baryonic systems. This work, which appears set to become a cover story in "Science", had to overcome a number of serious obstacles introduced by the massless photon, in particular long-range autocorrelations (which could be cured by a Fourier-accelerated HMC variant) and power-law finite-volume effects (which had to be fitted using results obtained at a range of volumes). In the finite-temperature regime, the crossover temperature is now generally agreed to be around 150-160 MeV, but the position and even the existence of the critical endpoint is still contentious (and any existing results are not yet continuum-extrapolated in any case).

After the lunch break, Yiota Foka gave an overview of heavy-ion results from RHIC and the LHC. The elliptic flow is still found to be in agreement with perfect hydrodynamics, but people are now also studying higher harmonics, as well as the interplay between jets and flow, which provide important constraints on the physics of the quark-gluon plasma. At the LHC, it has been found that it is the mass, and not the valence quark content, that drives the flow behaviour of hadrons, as the φ meson has the same flow behaviour as the proton.

The next speaker was Carl Gagliardi, who reviewed results in nucleon structure from high-energy polarised proton-proton collisions. Proton-proton scattering is complementary to DIS in that it gives access to the gluonic degrees of freedom which are invisible to electrons, and RHIC has a programme of polarised proton collisions to explore the spin structure of the nucleon. Without the RHIC data, the gluon polarisation ΔG is almost unconstrained, but with the RHIC data, it is seen to be clearly positive and contribute about 0.2 to the proton spin. Using W production, it is possible to separate polarised quark and antiquark distributions, and there is more to come in the near future.

The last plenary speaker of the day was Craig Roberts, who reviewed the pion and nucleon structure from the point of view of the Dyson-Schwinger equations approach. In this approach, the pion is closely linked to the quark mass function, which comes out of a quark gap equation and describes how the running quark mass at high energies turns into a much larger constituent quark mass at low energies. Landau-gauge gluons also become massive at low energies, and confinement is explained as the splitting of poles into pairs of conjugate complex poles giving an exponentially damped behaviour of the position space propagator. While this approach seems to be able to readily explain every single known experimental result, I do not understand how the systematic errors from the truncation of the infinite tower of DSEs are supposed to be controlled or quantified.

After the coffee break, there were parallel sessions. An interesting parallel talk was given by Johan Bijnens, who has determined the leading logarithms for the nucleon mass (and some other systems) to rather high orders; for effective theories this can be done using only one-loop integrals, thanks to a consistency argument due to Weinberg.

Wednesday, March 04, 2015

QNP 2015, Day Two

Hello again from Valparaíso. Today's first speaker was Johan Bijnens with a review of recent results from chiral perturbation theory in the mesonic sector, including recent results for charged pion polarisabilities and for finite-volume corrections to lattice measurements. To allow others to perform their own calculations for their own specific needs (which might include technicolor-like theories, which will generally have different patterns of chiral symmetry breaking, but otherwise work just the same way), Bijnens & Co. have recently published CHIRON, a general two-loop mesonic χPT package. The leading logarithms have been determined to high orders, and it has been found that the speed of convergence depends both on the observable and on whether the leading-order or physical pion decay constant is used.

Next was Boris Grube, who presented some recent results from light-meson spectroscopy. The light mesons are generally expected to be some kind of superpositions of quark-model states, hybrids, glueballs, tetraquark and molecular states, as may be compatible with their quantum numbers in each case. The most complex sector is the 0++ sector of the f0 mesons, in which the lightest glueball state should lie. The γγ width of the f0(1500) appears to be compatible with zero, which would agree with expectations for a glueball, whereas the f0(1710) has a photonic width more in agreement with being an s-sbar state; on the other hand, in J/ψ -> γ (ηη), which as a gluon-rich process should couple strongly to glueball resonances, little or no f0(1500) is seen, which would instead support a glueball nature for the f0(1710). New data to come from GlueX, and later from PANDA, should help to clarify things.

The next speaker was Paul Sorensen with a talk on the search for the critical point in the QCD phase diagram. The quark-gluon plasma at RHIC is not only a man-made system that is hundreds of thousands of times hotter than the centre of the Sun, it is also the most perfect fluid known, as it comes close to saturating the conjectured viscosity bound η/s ≥ 1/(4π). Studying it experimentally is quite difficult, however, since one must extrapolate back to a small initial fireball, or "little bang", from correlations between thousands of particle tracks in a detector -- not entirely dissimilar from the situation in cosmology, where the properties of the hot big bang (and previous stages) are inferred from angular correlations in the cosmic microwave background. Beam energy scans find indications that the phase transition becomes first-order at higher densities, which would indicate the existence of a critical endpoint, but more statistics and more intermediate energies are needed.

After the coffee break, François-Xavier Girod spoke about Generalised Parton Distributions (GPDs) and deep exclusive processes. GPDs, which reduce to form factors and to parton distributions upon integrating out the unneeded variables in each case, provide a three-dimensional image of the nucleon in the longitudinal momentum fraction and the transverse impact parameter, and their moments are related to matrix elements of the energy-momentum tensor. Experimentally, they are probed using deeply virtual Compton scattering (DVCS); the 12 GeV upgrade at Jefferson Lab will increase the coverage in both Bjorken x and Q2, and the planned electron-ion collider is expected to allow probing the sea and gluon GPDs as well.
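
In standard notation, the reductions mentioned above read (for the GPD H of quark flavour q)

    H^q(x, \xi = 0, t = 0) = q(x), \qquad \int_{-1}^{1} dx\, H^q(x, \xi, t) = F_1^q(t)

so the ordinary PDF and the Dirac form factor both arise as limits of the same underlying object.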

After the lunch break, there were parallel sessions. I chaired the parallel session on lattice and other non-perturbative methods, with presentations of lattice results by Eigo Shintani and Tereza Mendes, as well as a number of AdS/QCD-related results by various others.

Tuesday, March 03, 2015

QNP 2015, Day One

Hello from Valparaíso, where I continue this year's hectic conference circuit at the 7th International Conference on Quarks and Nuclear Physics (QNP 2015). Except for some minor inconveniences and misunderstandings, the long trip to Valparaíso (via Madrid and Santiago de Chile) went quite smoothly, and so far, I have found Chile a country of bright sunlight and extraordinarily helpful and friendly people.

The first speaker of the conference was Emanuele Nocera, who reviewed nucleon and nuclear parton distributions. The study of parton distributions becomes necessary because hadrons are really composed not simply of valence quarks, as the quark model would have it, but of an indefinite number of (sea) quarks, antiquarks and gluons, any of which can contribute to the overall momentum and spin of the hadron. In an operator product expansion framework, hadronic scattering amplitudes can then be factorised into Wilson coefficients containing short-distance (perturbative) physics and parton distribution functions containing long-distance (non-perturbative) physics. The evolution of the parton distribution functions (PDFs) with the momentum scale is given by the DGLAP equations containing the perturbatively accessible splitting functions (see the schematic form below). The PDFs are subject to a number of theoretical constraints, of which the sum rules for the total hadronic momentum and the valence quark content are the most prominent. For nuclei, one can assume that a factorisation similar to that for hadrons still holds, and that the nuclear PDFs are linear combinations of nucleon PDFs modified by multiplication with a binding factor; however, nuclei exhibit correlations between nucleons, which are not well described in such an approach. Combining all available data from different sources, global fits to PDFs can be performed using either a standard χ2 fit with a suitable model or a neural network description. There are far more and better data on nucleon than on nuclear PDFs, and for nucleons the amount and quality of the data also differ between unpolarised and polarised PDFs, the latter being needed to elucidate the "proton spin puzzle".
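
For the non-singlet case, the DGLAP evolution takes the schematic leading-order form

    \frac{\partial q(x, Q^2)}{\partial \ln Q^2} = \frac{\alpha_s(Q^2)}{2\pi} \int_x^1 \frac{dz}{z}\, P_{qq}(z)\, q\!\left(\frac{x}{z}, Q^2\right)

with the splitting function P_qq computable in perturbation theory; in the singlet sector, quark and gluon distributions mix in an analogous matrix equation.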

Next was the first lattice talk of the meeting, given by Huey-Wen Lin, who gave a review of the progress in lattice studies of nucleon structure. I think Huey-Wen gave a very nice example by comparing the computational and algorithmic progress with that in videogames (I'm not an expert there, but I think the examples shown were screenshots of Nethack versus some modern first-person shooter), and went on to explain the importance of controlling all systematic errors, in particular excited-state effects, before reviewing recent results on the tensor, scalar and axial charges and the electromagnetic form factors of the nucleon. As an outlook towards the current frontier, she presented the inclusion of disconnected diagrams and a new idea of obtaining PDFs from the lattice more directly rather than through their moments.

The next speaker was Robert D. McKeown with a review of JLab's Nuclear Science Programme. The CEBAF accelerator has been upgraded to 12 GeV, and a number of experiments (GlueX to search for gluonic excitations, MOLLER to study parity violation in Møller scattering, and SoLID to study SIDIS and PVDIS) are ready to be launched. A number of the planned experiments will be active in areas that I know are also under investigation by experimental colleagues in Mainz, such as a search for the "dark photon" and a study of the running of the Weinberg angle. Longer-term plans at JLab include the design of an electron-ion collider.

After a rather nice lunch, Tomofumi Nagae spoke about the hadron physics programme at J-PARC. In spite of major setbacks from the big earthquake and a later radiation accident, progress is being made. A search for the Θ+ pentaquark did not find a signal (which I personally do not find surprising, since the whole pentaquark episode is probably of more long-term interest to historians and sociologists of science than to particle physicists), but could not completely exclude all of the discovery claims.

This was followed by a talk by Jonathan Miller of the MINERνA collaboration, presenting their programme of probing nuclei with neutrinos. Major complications include the limited knowledge of the incoming neutrino flux and the fact that final-state interactions on the nuclear side may lead to one process mimicking another, making the modelling in event generators a key ingredient in understanding the data.

Next was a talk about short-range correlations in nuclei by Or Hen. Nucleons subject to short-range correlations must have high relative momenta, but a low centre-of-mass momentum. The experimental studies are based on kicking a proton out of a nucleus with an electron, such that both the momentum transfer (from the incoming and outgoing electron) and the final momentum of the proton are known, and looking for a nucleon coming out with a momentum close to minus the difference between those two (which must have been the initial momentum of the knocked-out proton). The astonishing result is that at high momenta, neutron-proton pairs dominate (meaning that protons, being the minority, have a much larger chance of having high momenta) and are linked by a tensor force. Similar results are known from other two-component Fermi systems, such as ultracold atomic gases (which are of course many, many orders of magnitude less dense than nuclei).

After the coffee break, Heinz Clement spoke about dibaryons, specifically about the recently discovered d*(2380) resonance, which, taking all experimental results into account, may be interpreted as a ΔΔ bound state.

The last talk of the day was by André Walker-Loud, who reviewed the study of nucleon-nucleon interactions and nuclear structure on the lattice, starting with a very nice review of the motivations behind such studies, namely that big-bang nucleosynthesis depends very strongly on the deuterium binding energy and the proton-neutron mass difference, and that this fine-tuning problem needs to be understood from first principles. Besides, currently the best chance for discovering BSM physics seems once more to lie with low-energy high-precision experiments, and dark matter searches require good knowledge of nuclear structure to control their systematics. Scattering phase shifts are being studied through the Lüscher formula (a schematic form of which is given below). Current state-of-the-art studies of bound multi-hadron systems concern dibaryons, in particular the question of the existence of the H-dibaryon at the physical pion mass (note that the dineutron, certainly unbound in the real world, becomes bound at heavy enough pion masses); three- and four-nucleon systems are beginning to become treatable, although the signal-to-noise problem gets worse as more baryons are added to a correlation function, and the number of contractions grows rapidly. Going beyond masses and binding energies, the new California Lattice Collaboration (CalLat) has preliminary results for hadronic parity violation in the two-nucleon system, albeit at a pion mass of 800 MeV.
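
In its simplest form (s-wave, elastic, below inelastic threshold), the Lüscher formula relates the two-particle energy levels in a spatial box of side L to the scattering phase shift,

    p \cot \delta_0(p) = \frac{2}{\sqrt{\pi}\, L}\, Z_{00}\!\left(1; \left(\frac{pL}{2\pi}\right)^2\right)

where p is defined from the measured energy via E = 2\sqrt{m^2 + p^2} and Z_{00} is a known generalised zeta function; a genuine bound state like the deuteron or a putative H-dibaryon reveals itself as an energy level below threshold that persists in the infinite-volume limit.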

Friday, February 27, 2015

Back from Mumbai

On Saturday, my last day in Mumbai, a group of colleagues rented a car with a driver to take a trip to Sanjay Gandhi National Park and visit the Kanheri caves, a Buddhist site consisting of a large number of rather simple monastic cells and some worship and assembly halls with ornate reliefs and inscriptions, all carved out of solid rock (some of the cell entrances seem to have been restored using steel-reinforced concrete, though).

On the way back, we stopped at Mani Bhavan, where Mahatma Gandhi lived from 1917 to 1934, and which is now a museum dedicated to his life and legacy.

In the night, I flew back to Frankfurt, where the temperature was much lower than in Mumbai; in fact, on Monday there was snow.

Friday, February 20, 2015

Perspectives and Challenges in Lattice Gauge Theory, Day Five

Today's programme started with a talk by Santanu Mondal on baryons in the sextet gauge model, which is a technicolor-style SU(3) gauge theory with a doublet of technifermions in the sextet (two-index symmetric) representation, and a minimal candidate for a technicolor-like model with an IR almost-fixed point. Using staggered fermions, he found that when setting the scale by fixing the technipion's decay constant to the value derived from identifying the Higgs vacuum expectation value with the technicondensate, the baryons had masses in excess of 3 TeV -- heavy enough to not yet have been discovered at the LHC, but within reach of the next run. However, the anomaly cancellation condition arising when embedding the theory into the electroweak Standard Model requires charge assignments under which the lightest technibaryon (which would be a stable particle) carries a fractional electric charge of 1/2. While the cosmological relic density can be made small enough to evade detection, technibaryons produced by cosmic rays in the Earth's atmosphere should have been able to accumulate (though there currently appear to be no specific experimental exclusions for charge-1/2 particles).

Next was Nilmani Mathur speaking about mixed-action simulations using overlap valence quarks on the MILC HISQ ensembles (which include the radiative corrections from the quarks to the lattice gluon action). Tuning the charm quark mass via the kinetic rather than the rest mass of charmonium, the right charmonium hyperfine splitting is found, as well as generally correct charmonium spectra. Heavy-quark baryons (up to and including the Ωccc) have also been simulated, with results in good agreement with experimental ones where the latter exist. The mixed-action effects appear to be small in mixed-action χPT, and only half as large as those for domain-wall valence fermions on an asqtad sea.

In a brief note, Gunnar Bali encouraged the participants of the workshop to seek out opportunities for Indo-German research collaboration, of which there are still only a limited number of instances.

After the tea break, there were two more theoretical talks, both of them set in the framework of Hamiltonian lattice gauge theory: Indrakshi Raychowdhury presented a loop formulation of SU(2) lattice gauge theory based on the prepotential formalism, in which both the gauge links and their conjugate electric fields are constructed, via the Schwinger construction, from harmonic oscillator variables living on the sites. By some ingenious rearrangements in terms of "fusion variables", a representation of the perturbative series for Hamiltonian lattice gauge theory purely in terms of integer-valued quantum numbers in a geometric-combinatorial construction was derived.
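
In its simplest form, the Schwinger construction expresses SU(2) generators in terms of a doublet of harmonic oscillators,

    J^a = a^\dagger_\alpha \left(\frac{\sigma^a}{2}\right)_{\alpha\beta} a_\beta

so that states with n_1 + n_2 = 2j oscillator quanta carry spin j; the prepotential formalism uses such oscillator doublets on the lattice sites to build both the link variables and the electric fields.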

Lastly, Sreeraj T.P. used an analogy between the Gauss constraint in Hamiltonian lattice gauge theory and the condition of equal "angular impulses" in the SU(2) x SU(2) description of the SO(4) symmetry of the Coulomb potential to derive a description of the Hilbert space of SU(2) lattice gauge theory in terms of hydrogen-atom-like (n,l,m) variables located on the plaquettes, subject only to the global constraint of vanishing total angular momentum; from this, a variational ansatz for the ground state can be constructed.

The workshop closed with some well-deserved applause for the organisers and all of the supporting technical and administrative staff, who have ensured that this workshop ran very smoothly indeed. Another excellent lunch (I understand that our lunches have been a kind of culinary journey through India, starting out in the north on Monday and ending in Kerala today) concluded the very interesting workshop.

I will keep the small subset of my readers whom it may interest updated about my impressions from an excursion planned for tomorrow and my trip back.

Thursday, February 19, 2015

Perspectives and Challenges in Lattice Gauge Theory, Day Four

Today was dedicated to topics and issues related to finite temperature and density. The first speaker of the morning was Prasad Hegde, who talked about the QCD phase diagram. While the general shape of the Columbia plot seems to be fairly well-established, there is now a lot of controversy over the details. For example, while the two-flavour chiral limit seems to be well described by either the O(4) or O(2) universality class, it isn't currently possible to exclude that it might be Z(2); and although the three-flavour transition appears to be known to be Z(2), simulations with staggered and Wilson quarks give disagreeing results for its features. Another topic that gets a lot of attention is the question of U(1)A restoration: of course, U(1)A is broken by the axial anomaly, which arises from the path integral measure and is present at all temperatures, so it cannot be expected to be restored in the same sense as chiral symmetry; but it might be that as the temperature gets larger, the influence of the anomaly on the Dirac eigenvalue spectrum gets outvoted by the temporal boundary conditions, so that the symmetry violation might disappear from the correlation functions of interest. However, numerical studies using domain-wall fermions suggest that this is not the case. Finally, the equation of state can be obtained from stout or HISQ smearing with very similar results; it appears well-described by a hadron resonance gas at low T, and matches reasonably well to perturbation theory at high T.

The next speaker was Saumen Datta speaking on studies of the QCD plasma using lattice correlators. While the short time extent of finite-temperature lattices makes it hard to say much about the spectrum without the use of techniques such as the Maximum Entropy Method, correlators in the spatial directions can be readily used to obtain screening masses. Studies of the spectral function of bottomonium in the Fermilab formalism suggest that the Y(1S) survives up to at least twice the critical temperature.

Sourendu Gupta spoke next about the equation of state in dense QCD. Using the Taylor expansion method (the Taylor expansion was apparently first invented in the 14th-15th century by the Indian mathematician Madhava) together with Padé approximants to reconstruct the function from the truncated series, it is found that the statistical errors on the reconstruction blow up as one nears the suspected critical point. This can be understood as a specific instance of the "no-free-lunch theorem": a direct simulation (were it possible) would suffer from critical slowing down as the critical point is approached, which would likewise lead to large statistical errors from a fixed number of configurations.
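
As a toy illustration of the reconstruction step, here is a short Python sketch (the function and orders are purely illustrative, not the actual lattice data): a Taylor series with a singularity at x = 1 stands in for the pressure series, and the zeros of the Padé denominator estimate the location of the singularity:

    import numpy as np
    from scipy.interpolate import pade

    # Toy "equation of state": f(x) = -log(1 - x) has a singularity at x = 1,
    # standing in for a critical point; its Taylor coefficients are 1/n.
    N = 8
    an = np.array([0.0] + [1.0 / n for n in range(1, N + 1)])

    p, q = pade(an, 4)              # [4/4] Pade approximant from the series

    x = 0.9                         # evaluate close to the singularity
    print("truncated series:", np.polyval(an[::-1], x))
    print("Pade approximant:", p(x) / q(x))
    print("exact value:     ", -np.log(1 - x))
    print("singularity estimates:", q.roots)

Near the singularity the Padé approximant tracks the exact function far better than the truncated series, but its accuracy degrades quickly once the input coefficients carry statistical errors, mirroring the problem described above.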

The last talk before lunch was Bastian Brandt with an investigation of an alternative formulation of pure gauge theory using auxiliary bosonic fields, in an attempt to render the QCD action amenable to a dual description that might make it possible to avoid the sign problem at finite baryon chemical potential. The alternative formulation appears to describe exactly the same physics as the standard Wilson gauge action, at least for SU(2) in 3D, and in 2D and/or in certain limits its continuum limit is in fact known to be Yang-Mills theory. However, when fermions are introduced, the dual formulation still suffers from a sign problem; the hope is that any trick that might avoid this sign problem would then also avoid the finite-μ one.

After lunch, there were two non-lattice talks. The first one was given by Gautam Mandal, who spoke about thermalisation in integrable models and conformal field theories. In CFTs, it can be shown that for certain initial states, the expectation value of an operator equilibrates to a certain "thermal" expectation value, and a generalisation to integrable models, where the "thermal" density operator includes chemical potentials for all (infinitely many) conserved charges, can also be given.

The last talk of the day was a very lively presentation of the fluid-gravity correspondence by Shiraz Minwalla, who described how gravity in anti-de Sitter space asymptotically goes over to Navier-Stokes hydrodynamics in some sense.

In the evening, the conference banquet took place on the roof terrace of a very nice restaurant serving very good European-inspired cuisine and Indian red wine (also rather nice -- apparently the art of winemaking has recently been adapted to the Indian climate, e.g. the growing season is during the cool season, and this seems to work quite well).

Wednesday, February 18, 2015

Perspectives and Challenges in Lattice Gauge Theory, Day Three

Today's first talk was given by Rainer Sommer, who presented two effective field theories for heavy quarks. The first one was non-perturbatively matched HQET, which has been the subject of a long-running effort by the ALPHA collaboration. This programme is now reaping its first dividends in the form of very reliable fully non-perturbative results for B physics observables. Currently, the form factors for B->πlν decays, which are very important for determining the CKM matrix element Vub (currently subject to some significant tension between inclusive and exclusive determinations) are in the final stages of analysis. The other effective theory was QCD with Nf<6 flavours -- which is of course technically an effective theory where the heavy quarks have been integrated out! Rainer presented a new factorisation formula that relates the mass of a light hadron in the theory with a heavy quark to that of the same hadron in a theory in which the heavy quark is massless by a factor dependent on the hadron and a universal perturbative factor. The factorisation formula has been tested for gluonic observables in the pure gauge theory matched to the two-flavour theory.

After tea, we had a session focussed on algorithms and machines. The first speaker was Andreas Frommer, speaking about multigrid solvers for the Dirac equation in lattice QCD. A multigrid solver consists of a smoother and a coarse-grid correction. For the smoother for the Dirac equation, the Schwarz Alternating Procedure (SAP) is a natural choice, whereas for the coarse-grid correction, aggregate-based interpolation (essentially the same idea as Lüscher-style inexact deflation) can be used. The resulting multigrid algorithm is very similar to the domain-decomposed algorithm used in the DD-HMC and openQCD codes, but generalises to more than two levels, which may lead to better performance. Applications to the overlap operator were also presented.
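
To illustrate the smoother-plus-coarse-grid-correction structure in its most stripped-down form, here is a Python sketch for a toy 1D Poisson problem, with a weighted-Jacobi smoother standing in for SAP and plain linear interpolation standing in for aggregate-based interpolation (so this shows only the structure, not the actual Dirac-equation solver):

    import numpy as np

    def jacobi(A, x, b, nu, omega=2.0 / 3.0):
        # weighted-Jacobi smoother: damps the high-frequency error modes
        D = np.diag(A)
        for _ in range(nu):
            x = x + omega * (b - A @ x) / D
        return x

    def two_grid(A, b, x, P):
        # one cycle: pre-smooth, coarse-grid correction, post-smooth
        x = jacobi(A, x, b, 3)
        r = b - A @ x                              # fine-grid residual
        A_c = P.T @ A @ P                          # Galerkin coarse operator
        x = x + P @ np.linalg.solve(A_c, P.T @ r)  # exact coarse solve
        return jacobi(A, x, b, 3)

    n = 63                                         # fine grid (Dirichlet b.c.)
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    n_c = (n - 1) // 2                             # coarse grid size
    P = np.zeros((n, n_c))                         # linear interpolation
    for j in range(n_c):
        P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5

    rng = np.random.default_rng(1)
    b, x = rng.random(n), np.zeros(n)
    for k in range(8):
        x = two_grid(A, b, x, P)
        print(k, np.linalg.norm(b - A @ x))        # residual per cycle

In a true multigrid (or the multi-level generalisation mentioned above), the exact coarse solve is replaced by a recursive application of the same cycle on ever coarser levels.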

Next, Stephan Solbrig presented the QPACE2 project, which aims to build a supercomputer based on Intel Knights Corner (Xeon Phi) cards as processors; each node consists of four Xeon Phis and a weak host CPU (used only for booting), linked to each other and to an InfiniBand card via a PCIe switch. The whole system uses hot-water cooling, building on experience gathered in the iDataCool project. The 512-bit-wide registers of the Xeon Phi necessitate several programming tricks, such as site fusing, to make optimal use of the computing resources; the resulting code seems to scale almost perfectly as long as there are sufficiently many domains to keep all nodes busy. An interesting side note was that apparently there are extremophile bacteria that thrive in the copper pipes of water-cooled computer clusters.

Pushan Majumdar rounded off the session with a talk about QCD on GPUs. The special programming model of GPUs (small amount of memory per core, restrictions on branching, CPU/GPU data transfer as a bottleneck) makes programming them challenging. The OpenACC compiler standard, which aims to offload the burden of dealing with GPU particulars onto the compiler vendor, may offer a possibility to easily port OpenMP-based code written for CPUs to GPUs, and Pushan showed some worked examples of Fortran 90 OpenMP code adapted for OpenACC.

After lunch, I had to retire to my room for a little while (let me hasten to add that the truly excellent lunch provided by the extremely hospitable TIFR is definitely absolutely blameless in this), and thus missed the afternoon's first two talks, catching only the end of Jyotirmoy Maiti's talk about exploring the spectrum of the pure SU(3) gauge theory using the Wilson flow.

Gunnar Bali closed the day's proceedings with a very nice colloquium talk for a larger scientific audience, summarising the Standard Model and lattice QCD in an accessible manner for non-experts before proceeding to present recent results on the sea quark content and spin structure of the proton.

Tuesday, February 17, 2015

Perspectives and Challenges in Lattice Gauge Theory, Day Two

Today's first session started with a talk by Wolfgang Söldner, who reviewed the new CLS simulations using 2+1 flavours of dynamical fermions with open boundary conditions in the time direction to avoid the freezing of topology at small lattice spacing. Besides the new kind of boundary conditions, these simulations use a number of novel tricks, such as twisted mass reweighting, to make the simulations more stable at light pion masses. First studies of the topology and of the scale setting look promising, and there will likely be some interesting first physics results at the lattice conference in Kobe.

After the tea break, Asit Kumar De talked about lattice gauge theory with equivariant gauge fixing. This is an attempt to evade the Neuberger 0/0 problem with BRST invariance on a lattice by leaving a subgroup of the gauge group unfixed. As a result, one gets four-ghost interactions in the gauge-fixed action (this seems to be a general feature of theories trying to extend BRST symmetry; the Curci-Ferrari model for massive gauge fields also has such an interaction).

This was followed by Mugdha Sarkar speaking about simulations of the gauge-fixed compact U(1) gauge theory. Apparently, the added parameters of the gauge-fixing part appear to allow for changing the nature of the phase transition between strong and weak coupling from first to second order, although I didn't quite understand how that is compatible with the idea of having all gauge-invariant quantities be unaffected by the gauge fixing.

After lunch, we had an excursion to the island of Elephanta, where there are some great temples carved out of the rock. Today was a festival of Shiva, so admission was free (otherwise the price structure is quite interesting: र10 for Indians, र250 for foreigners), and there were many people on the island and in the caves. The site is certainly well worth the visit, although many of the statues have been damaged quite severely in the past.

Monday, February 16, 2015

Perspectives and Challenges in Lattice Gauge Theory, Day One

Hello from Mumbai, where I'm attending the workshop "Perspectives and Challenges in Lattice Gauge Theory" at the Tata Institute for Fundamental Research. I arrived on Sunday at an early hour, and had some opportunity to see some of the sights of Mumbai while trying to get acclimatized and jetlag-free.

Today was the first day of the workshop, which started with a talk by Gergely Endrődi on the magnetic response of isospin-asymmetric QCD matter. This is relevant both for heavy-ion collisions and for the astrophysics of neutron stars, where in both cases strong magnetic fields interact with nuclear matter that has more neutrons than protons. From analytical calculations it is known that free quarks would form a paramagnetic state of matter, whereas pions would yield diamagnetism. As QCD matter at low energies should be mostly a hadron gas, and at high temperatures a quark-gluon plasma, the expectation is that the behaviour of QCD at zero chemical potential changes from diamagnetic to paramagnetic as the temperature increases. At zero temperature and non-zero isospin chemical potential, on the other hand, the magnetic susceptibility vanishes at small chemical potential (by the "Silver Blaze" effect), before suddenly going negative due to pion condensation when the chemical potential exceeds half the pion mass, and going positive again as the chemical potential is increased further. Lattice simulations confirm this overall picture, although the susceptibility remains finite at μI = mπ/2, since the pions already start to melt rather than condense into a superconductor.

After the coffee break, it was my turn to talk about recent work we have done at Mainz regarding the importance of excited-state effects on nucleon form factors. Briefly summarised, the splitting to the first excited state (nucleon+pion in a P-wave, or nucleon+2 pions in an S-wave) becomes very small in the chiral regime, while the errors on the nucleon two- and three-point functions grow exponentially as the source-sink separation is increased, making it very hard to find a Euclidean time region with both a clean ground-state signal and reasonable statistical precision. Treating the excited states using different methods (the summation method and explicit two-state fits, sketched below) yields indications that the current discrepancy between the nucleon charge radius obtained from lattice simulations and experiment may be due mostly to excited-state effects.
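
Schematically, with Δ the gap to the first excited state and g the desired ground-state matrix element, the ratio R(t, t_s) of three- to two-point functions behaves like

    R(t, t_s) = g + b_1 e^{-\Delta t} + b_2 e^{-\Delta (t_s - t)} + b_3 e^{-\Delta t_s}

so plateau fits at fixed source-sink separation t_s carry corrections of order e^{-\Delta t_s/2} at the midpoint, whereas summing R over the insertion time t gives a quantity whose slope in t_s approaches g with corrections of order e^{-\Delta t_s}; since Δ shrinks towards the chiral regime while the noise grows, neither method comes for free.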

This was followed by Andreas Schäfer speaking about much more ambitious hadron structure observables, namely Transverse Momentum Distributions (TMDs), Parton Distribution Functions (PDFs) and Generalised Parton Distributions (GPDs). Knowledge of these is important to clarify systematics for some of the LHC measurements, so lattice results could certainly have a huge impact here, but the necessary calculations appear quite involved.

After the lunch break, Stefan Dürr reviewed some of the newer inhabitants of the fermion zoo, namely firstly the Brillouin fermions obtained by replacing the standard discretisation of the Laplacian in the Wilson action with its Brillouin discretisation, and the symmetric derivative with its isotropic alternative, and secondly the staggered Wilson fermions of Adams ("Adams fermions"). In particular for heavier quark masses, the Brillouin fermions seem to do much better than standard Wilson fermions, including giving a much more continuum-like dispersion relation.

After a more technical talk by Jinshu Goswami on simulating the Gross-Neveu model with Boriçi-Creutz fermions, Kalman Szabo gave a colloquium for a more general audience explaining the origin of mass from QCD, electromagnetism and the Higgs effect (listed roughly in order of importance for ordinary matter), and how the proton-neutron mass difference is determined on the lattice; this mass difference is after all of great anthropic significance, since an even slightly smaller value would leave hydrogen atoms unstable under inverse β-decay, whereas a somewhat larger value would create too much of a bottleneck in the creation of heavier elements. The lattice results are certainly impressive both in terms of the theoretical and computational effort needed to obtain them and in the accuracy with which they reproduce the experimentally-known situation.

Monday, January 19, 2015

Upcoming conference/workshop deadlines

This is just a short reminder of some upcoming deadlines for conferences/workshops in the organization of which I am in some way involved.

Abstract submission for QNP 2015 closes on 6th February 2015, and registration closes on 27th February 2015. Visit this link to submit an abstract, and this link to register.

Applications for the Scientific Programme "Fundamental Parameters from Lattice QCD" at MITP close on 31st March 2015. Visit this link to apply.