Welcome back to our blog coverage of the Lattice 2017 conference in Granada.

Today's first plenary session started with an experimental talk by Arantza Oyanguren of the LHCb collaboration on B decay anomalies at LHCb. LHCb have amassed a huge number of b-bbar pairs, which allow them to search for and study in some detail even the rarest of decay modes, and they are of course still collecting more integrated luminosity. Readers of this blog will likely recall the B_{s} → μ^{+}μ^{-} branching ratio result from LHCb, which agreed with the Standard Model prediction. In the meantime, there are many similar results for branching ratios that do not agree with Standard Model predictions at the 2-3σ level, e.g. ratios of branching fractions like Br(B^{+}→K^{+}μ^{+}μ^{-})/Br(B^{+}→K^{+}e^{+}e^{-}), in which lepton flavour universality appears to be violated. Global fits to the data in these channels appear to favour the new-physics hypothesis, but one should be cautious because of the "look-elsewhere" effect: when studying a very large number of channels, some will show an apparently significant deviation simply by statistical chance. On the other hand, it is very interesting that all the evidence indicating potential new physics (including the anomalous magnetic moment of the muon and the discrepancy between the muonic and electronic determinations of the proton electric charge radius) involves differences between processes involving muons and analogous processes involving electrons, an observation I'm sure model-builders made a long time ago.
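To get a feeling for how much caution the look-elsewhere effect warrants, here is a toy Monte Carlo (with made-up numbers: 100 independent channels, a 2.5σ threshold — these are illustrative, not the actual counts from the global fits) estimating how often at least one channel fluctuates to an apparently significant deviation even when the Standard Model is exactly true:

```python
import random

random.seed(1)

N_CHANNELS = 100   # hypothetical number of independent measurements
THRESHOLD = 2.5    # deviation (in sigma) considered "interesting"
N_TRIALS = 10_000

def max_deviation(n_channels):
    # Under the null (Standard Model) hypothesis, each channel's pull
    # is a standard normal random variable.
    return max(abs(random.gauss(0.0, 1.0)) for _ in range(n_channels))

hits = sum(1 for _ in range(N_TRIALS) if max_deviation(N_CHANNELS) >= THRESHOLD)
print(f"P(at least one channel >= {THRESHOLD} sigma) ~ {hits / N_TRIALS:.2f}")
```

With these inputs the probability comes out around 70%, i.e. a lone 2.5σ deviation somewhere among a hundred channels is entirely unremarkable on its own; combining the channels in a global fit is precisely an attempt to account for this effect.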

This was followed by a talk on flavour physics anomalies by Damir Bečirević. Expanding on the theoretical interpretation of the anomalies discussed in the previous talk, he explained how the data seem to indicate a violation of lepton flavour universality at a level where the new-physics contribution to the Wilson coefficient C_{9} in the effective Hamiltonian is around zero for electrons, and around -1 for muons. Experimental data seem to favour the situation where C_{10}=-C_{9}, which can be accommodated in certain models with a Z' boson coupling preferentially to muons, or in certain special leptoquark models with corrections arising at the loop level only. Since I have little (or rather no) expertise in phenomenological model-building, I have no idea how likely these explanations are.
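For orientation (my notation here follows one common convention for the b → sℓ^{+}ℓ^{-} effective Hamiltonian; the speaker's normalization may differ), the coefficients in question multiply the semileptonic operators

```latex
\mathcal{H}_{\mathrm{eff}} \supset -\frac{4 G_F}{\sqrt{2}}\, V_{tb} V_{ts}^{*}\, \frac{e^{2}}{16\pi^{2}}
 \left[ C_9 \left(\bar{s}\gamma_{\mu} P_L b\right)\left(\bar{\ell}\gamma^{\mu}\ell\right)
      + C_{10} \left(\bar{s}\gamma_{\mu} P_L b\right)\left(\bar{\ell}\gamma^{\mu}\gamma_5\ell\right) \right],
```

so the statement above is that the muonic C_{9} receives a new-physics shift of order -1 relative to its Standard Model value, while the electronic one does not.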

The next speaker was Xu Feng, who presented recent progress in kaon physics simulations on the lattice. The "standard" kaon quantities, such as the kaon decay constant or the form factor f_{+}(0), are by now very well-determined from the lattice, with overall errors at the sub-percent level, but beyond that there are many important quantities, such as the CP-violating amplitudes in K → ππ decays, that are still poorly known and very challenging. RBC/UKQCD have been leading the attack on many of these observables, and have presented a possible solution to the ΔI=1/2 rule, which consists in non-perturbative effects making the amplitude A_{0} much larger relative to A_{2} than what would be expected from naive colour counting. Making further progress on long-distance contributions to the K_{L}-K_{S} mass difference or ε_{K} will require working at the physical pion mass and treating the charm quark with good control of discretization effects. For some processes, such as K_{L}→π^{0}ℓ^{+}ℓ^{-}, even a determination of the sign of the relevant coefficient would already be desirable.

After the coffee break, Luigi Del Debbio talked about parton distributions in the LHC era. The LHC data reduce the error on the NNLO PDFs by around a factor of two in the intermediate-x region. Conversely, the theory errors coming from the PDFs are a significant part of the total error from the LHC on Higgs physics and BSM searches. In particular the small-x and large-x regions remain quite uncertain. On the lattice, PDFs can be determined via quasi-PDFs, in which the Wilson line inside the non-local bilinear is along a spatial direction rather than in a light-like direction. However, there are still theoretical issues to be settled in order to ensure that the renormalization and the matching to the continuum really lead to the determination of continuum PDFs in the end.
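For concreteness (in the conventions of Ji's original large-momentum proposal; sign and normalization conventions vary in the literature), the quasi-PDF is defined from an equal-time correlator with a spatial Wilson line,

```latex
\tilde{q}(x, P_z) = \int \frac{\mathrm{d}z}{4\pi}\, e^{\mathrm{i} x P_z z}\,
 \left\langle P \right| \bar{\psi}(z)\, \gamma^{z}\, W(z,0)\, \psi(0) \left| P \right\rangle ,
\qquad
W(z,0) = \mathcal{P} \exp\left( -\mathrm{i} g \int_{0}^{z} \mathrm{d}z'\, A_{z}(z') \right),
```

which is computable on a Euclidean lattice at finite nucleon momentum P_z and should reduce to the light-cone PDF after renormalization and perturbative matching in the large-P_z limit.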

Next was a talk about chiral perturbation theory results on the multi-hadron state contamination of nucleon observables by Oliver Bär. It is well known that until very recently, lattice calculations of the nucleon axial charge underestimated its value relative to experiment, and this has been widely attributed to excited-state effects. Now, Oliver has calculated the corrections from nucleon-pion states to the extraction of the axial charge in chiral perturbation theory, and has found that they actually should lead to an overestimation of the axial charge from the plateau method, at least for source-sink separations above 2 fm, where ChPT is applicable. Similarly, other nucleon charges should be overestimated by 5-10%. Of course, nobody is currently measuring in that distance regime, and so it is quite possible that higher-order corrections or effects not captured by ChPT overcompensate this and lead to an underestimation, which would however mean that there is some intermediate source-sink separation for which one gets the experimental result by accident, as it were.
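As a cartoon of why the sign matters, here is a minimal two-state toy model (all parameters are invented for illustration, not taken from the talk or from any lattice data set) showing how a positive-sign Nπ-like contamination pushes the plateau estimate above the true value:

```python
import math

# Toy two-state model of the plateau method for the nucleon axial charge.
G_A_TRUE = 1.27   # hypothetical ground-state value
C_NPI = 0.05      # coupling of an N-pi-like excited state (made up)
DE = 0.5          # energy gap in lattice units (made up)

def effective_ga(tau, T):
    """Plateau-method estimator: 3pt/2pt ratio at insertion time tau
    for source-sink separation T. A positive-sign contamination shifts
    the plateau *above* the true value."""
    contamination = C_NPI * (math.exp(-DE * tau) + math.exp(-DE * (T - tau)))
    return G_A_TRUE * (1.0 + contamination)

T = 20                              # source-sink separation
plateau = effective_ga(T // 2, T)   # estimate at the midpoint of the plateau
print(f"plateau estimate {plateau:.5f} vs true value {G_A_TRUE}")
```

Flipping the sign of the coupling would instead produce the underestimation that the lattice results have historically shown, which is why the sign of the ChPT prediction is the interesting part.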

The final plenary speaker of the morning was Chia-Cheng Chang, who discussed progress towards a precise lattice determination of the nucleon axial charge, presenting the results of the CalLat collaboration from using what they refer to as the Feynman-Hellmann method, a novel way of implementing what is essentially the summation method through ideas based on the Feynman-Hellmann theorem (but which doesn't involve simulating with a modified action, as a straightforward application of the Feynman-Hellmann theorem would demand).
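The summation-method idea underlying this approach can be sketched in a toy two-state model (again with invented parameters, purely for illustration): summing the 3pt/2pt ratio over insertion times gives a quantity whose slope in the source-sink separation approaches the true charge, with excited-state effects entering only through exponentially suppressed end-point terms:

```python
import math

# Toy illustration of the summation method. All numbers are invented.
G_A = 1.27   # hypothetical ground-state axial charge
C = 0.05     # excited-state coupling (made up)
DE = 0.5     # energy gap in lattice units (made up)

def ratio(tau, T):
    # 3pt/2pt ratio with a single excited-state contamination term.
    return G_A * (1.0 + C * (math.exp(-DE * tau) + math.exp(-DE * (T - tau))))

def summed(T):
    # Sum the ratio over insertion times, excluding the source and sink.
    return sum(ratio(tau, T) for tau in range(1, T))

# The slope of summed(T) in T estimates g_A; here from two separations.
T1, T2 = 16, 24
slope = (summed(T2) - summed(T1)) / (T2 - T1)
print(f"summation-method slope: {slope:.5f} (true value {G_A})")
```

In this toy the slope lands much closer to the true value than the plateau estimate at comparable separations would, which is the basic reason summation-type methods are attractive for the axial charge.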

After the lunch break, there were parallel sessions, and in the evening, the poster session took place. A particularly interesting and entertaining contribution was a quiz about women's contributions to physics and computer science, the winner of which will receive a bottle of wine and a book.