Tuesday, December 21, 2010

Lattice 2011 website online

This Christmas season being rather snowy, at least here in Germany, many people will be thinking of winter sports. Thinking of winter sports, they might (possibly) be thinking of the Squaw Valley ski resort, thinking of which they might (if they happen to be lattice theorists) think of Lattice 2011. All of which is just a roundabout way of saying that the Lattice 2011 website is now online and, while still under construction, will soon contain a wealth of relevant information for participants.

Thursday, October 14, 2010

Just some links

Another ultra-lazy links-only post; I promise there'll be some new content soon, though.

Tim Gowers has observed a little physics problem in the ever-interesting subfield of bathtub hydrodynamics. Perhaps some fluids expert is reading this and can offer answers to his questions.

Tuesday, October 05, 2010

Nobel prize for a (kind of) lattice

The Nobel Prize in physics 2010 has been awarded to Andre Geim and Konstantin Novoselov of the University of Manchester "for groundbreaking experiments regarding the two-dimensional material graphene".

Graphene is a novel form of carbon, in which the carbon atoms are bound into a hexagonal lattice covering a single flat two-dimensional layer. Graphite consists of lots of pieces of graphene jumbled together into a three-dimensional whole, so graphene is actually quite common, but Geim and Novoselov were the first to systematically isolate it and elucidate its unusual properties.

Graphene has a number of unique properties, not the least of which is that it has gapless excitations which are described by a Dirac equation -- massless electrons, so to speak. It is this particular feature of the graphene lattice which has inspired the study of graphene-like structures in higher dimensions as a means of obtaining minimally doubled fermions, i.e. lattice fermions that have the minimal number (=2) of doublers prescribed by the Nielsen-Ninomiya theorem. So even if the technological promise of graphene (described e.g. at the Nobel site) were not to be realised, it has at least given theoretical particle physicists something to think about.

Wednesday, September 01, 2010

Bloggalia varia mixtaque


  • Conference blogging from the best of the best: Fields medalist Tim Gowers has been covering the International Congress of Mathematicians (ICM 2010) in Hyderabad on his blog

  • John Baez has turned from higher category theory to saving the planet and blogs about it at his new blog

  • Rob Knop has taken up blogging again at the new Scientopia website, which hosts the bloggers that left ScienceBlogs as a consequence of "PepsiGate" or for other reasons

Saturday, July 24, 2010

Blogging ICHEP 2010

I'm currently at the ICHEP 2010 conference in Paris, from where I'm blogging at the official ICHEP 2010 blog. I'll post a summary here later, but for now come over and follow me and the wonderful other bloggers at Blogging ICHEP 2010!

Saturday, June 19, 2010

Lattice 2010, Day Five

The day started with plenary sessions again. The first plenary speaker was Chris Sachrajda on the topic of phenomenology from the lattice. Referring to the talks on heavy and light quarks, spectroscopy and hadron structure for those topics, he covered a mix of phenomenologically interesting quantities, starting from those that have been measured to good accuracy on the lattice and progressing to those that still pose serious or perhaps even insurmountable problems. The accurate determination of Vus/Vud from fK/fπ and of Vus from the Kl3 form factor f+(0), where both the precision and the agreement with the Standard Model are very good, clearly fell into the first category. The determination of BK is less precise, and there is a 2σ tension in the resulting value of |εK|. Even more challenging is the decay K --> ππ, for which progress is nevertheless being made, whereas the yet greater challenge of nonleptonic B-decays cannot be tackled with presently known methods. Chris closed his talk by reminding the audience that at another lattice conference held in Italy, namely that of 1989 (i.e. when I was just a teenager), Ken Wilson had predicted that it would take 30 years until precise results could be attained from lattice QCD, and that, given that we still have nine years left, we are well on our way.

The next plenary talk was given by Jochen Heitger, who spoke about heavy flavours on the lattice. Flavour physics is an important ingredient in the search for new physics, because essentially all extensions of the Standard Model have some kind of flavour structure that could be used to find them through their contributions to flavour processes. On the lattice, "gold-plated" processes with no or one hadron in the final state and a well-controlled chiral behaviour play a crucial role because they can be treated accurately. Still, treating heavy quarks on the lattice is difficult, because one needs to maintain a multiscale hierarchy of 1/L << mπ << mQ << 1/a. A variety of methods are currently in use, and Jochen nicely summarised results from most of them, including, but not limited to, the current-current correlators used by HPQCD, ETMC's interpolation of ratios between the static limit and dynamical masses, and the Fermilab approach, paying special attention to the programme of non-perturbative HQET pursued by the ALPHA collaboration.

The second plenary session started with a talk by Mike Peardon about improved design of hadron creation operators. The method in question is the "distillation" method that has been talked about a lot for about a year now. The basic insight at its root is that we generally use smeared operators to improve the signal-to-noise ratio, and that smearing tends to wipe out contributions from high-frequency modes of the Laplacian. If one then defines a novel smearing operator by projecting on the lowest few modes of the (spatial) Laplacian, this operator can be used to re-express the large traces appearing in correlation functions as smaller traces over the space spanned by the low modes. If the smearing or "distillation" operator is D(t) = V(t)V(t)†, one defines the "perambulator" τ(t,t') = V(t)† M⁻¹(t,t') V(t'), which takes the place of the propagator, and reduced operators Φ(t) = V(t)† Γ V(t), in terms of which to write the small traces. Insertions needed for three-point functions can be treated similarly by defining a generalised perambulator. Unfortunately, this method as it stands has a serious problem in that it scales very badly with the spatial volume -- the number of low modes needed for a given accuracy scales with the volume, and so the method scales at least like the volume squared. However, this problem can be solved by using a stochastic estimator that is defined in the low-mode space, and the resulting stochastic method appears to perform much better than the usual "dilution" method.
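
To make the bookkeeping concrete, here is a minimal sketch (in Python/NumPy) of how the distillation building blocks combine into a meson two-point function. Random matrices stand in for the real objects, the spin structure is suppressed, and all names and dimensions are illustrative rather than taken from the talk.

```python
import numpy as np

# Toy dimensions: number of Laplacian eigenvectors kept (the "distillation space")
# and number of timeslices. All names below are illustrative.
n_ev, n_t = 16, 32
rng = np.random.default_rng(42)

# Stand-ins for the reduced operators Phi(t) = V(t)^dagger Gamma V(t) and the
# perambulators tau(t,0) = V(t)^dagger M^{-1}(t,0) V(0); in a real calculation
# these come from the eigenvectors of the spatial Laplacian and from solves of
# the Dirac equation against them.
Phi = rng.standard_normal((n_t, n_ev, n_ev)) + 1j * rng.standard_normal((n_t, n_ev, n_ev))
tau_fwd = rng.standard_normal((n_t, n_ev, n_ev)) + 1j * rng.standard_normal((n_t, n_ev, n_ev))
# Stand-in for the backward perambulator tau(0,t); for a gamma_5-hermitian Dirac
# operator it is related to tau(t,0)^dagger (spin structure suppressed here).
tau_bwd = np.conj(np.transpose(tau_fwd, (0, 2, 1)))

def meson_correlator(t):
    """Small-trace form of a meson two-point function,
    C(t) = Tr[ Phi(t) tau(t,0) Phi(0) tau(0,t) ].
    The trace runs only over the n_ev-dimensional distillation space
    instead of the full (volume x colour x spin) space."""
    return np.trace(Phi[t] @ tau_fwd[t] @ Phi[0] @ tau_bwd[t]).real

C = np.array([meson_correlator(t) for t in range(n_t)])
print(C[:5])
```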

The last speaker of the morning was Michele Pepe with a talk on string effects in Yang-Mills theory. The subject of the talk was the measurement of the width of the effective string and the observation of the decay of unstable k-strings in SU(2) gauge theory. By using a multilevel simulation technique proposed by Lüscher and Weisz, Pepe and collaborators have been able to perform these very challenging measurements. The results for the string width agree with theoretical expectations from the Nambu-Goto action, and the expected pattern of k-string decays (1 --> 0, 3/2 --> 1/2, and 2 --> 1 --> 0) could be nicely seen in the plots.

The plenary session was closed by the announcement that LATTICE 2011 will be held from 10 to 16 July 2011 at the Squaw Valley Resort in Lake Tahoe, California, USA.

In the afternoon there were again parallel sessions.

Friday, June 18, 2010

Lattice 2010, Day Four

Today's first plenary session was started by Kazuyuki Kanaya with a talk on finite-temperature QCD. Many groups are determining the transition temperature between the confined and deconfined phases, but since the transition is most likely a crossover in the neighbourhood of the physical point, the value of the "critical" temperature found may depend on the observable studied. There was also some disagreement between different studies using the same observables, but those discrepancies seem to have mostly gone away.

Next was Luigi Del Debbio speaking about the conformal window on the lattice. The motivation for these kinds of studies is the hope that the physics of electroweak symmetry breaking may originate not from a fundamental scalar Higgs, but from a fermionic condensate similar to the chiral condensate in QCD, arising from a gauge theory ("technicolor") living at higher energy scales, perhaps around 1 TeV. To make these kinds of models viable, the coupling needs to run very slowly, so one is motivated to look for gauge theories having an infrared fixed point. Lattice simulations can help to address the question of which combinations of Nc, the number of colours, and Nf, the number of fermion flavours, actually exhibit such behaviour. The Schrödinger functional can be used to study such questions, but while there are a number of results, no very clear picture has emerged yet.

The second plenary session of the morning was opened with a talk on finite-density QCD by Sourendu Gupta. QCD at finite density, i.e. at finite chemical potential, is plagued by a sign problem, because the fermionic determinant is no longer real in general. A number of ways around this problem have been proposed. The most straightforward is reweighting, the most ambitious a reformulation of the theory that manages to eliminate the sign problem entirely. On the latter front, there has been progress in that the 3D XY model, which also has a sign problem, has been successfully reformulated in different variables in which it no longer suffers from its sign problem; whether something similar might be possible for QCD remains to be seen. Other approaches try to exploit analyticity to evade the sign problem, either by Taylor-expanding around zero chemical potential and measuring the Taylor coefficients as susceptibilities at zero chemical potential, or by simulating at purely imaginary chemical potential (where there is no sign problem) and extrapolating to real chemical potential. In this way, various determinations of the critical point of QCD have been performed, which agree more or less with each other. All of them lie in a region through which the freeze-out curve of heavy-ion experiments is expected to pass, so the question of the location of the critical point may become accessible experimentally.
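
As a toy illustration of the analytic-continuation strategy (with entirely made-up numbers, not results from any simulation), one can fit an observable measured at imaginary chemical potential as a polynomial in μ² and evaluate the fit at real μ:

```python
import numpy as np

# Pretend we measured some observable O at purely imaginary chemical potential
# mu = i*mu_I (no sign problem there), fit it as a polynomial in mu^2, and
# evaluate the fit at real mu (mu^2 > 0). All numbers are invented.
mu_I = np.linspace(0.0, 0.8, 9)           # imaginary parts of mu/T
mu2_data = -(mu_I ** 2)                   # mu^2 is negative for imaginary mu
rng = np.random.default_rng(1)
obs = 1.0 + 0.30 * mu2_data + 0.05 * mu2_data**2 \
      + 0.002 * rng.standard_normal(len(mu2_data))

# Low-order polynomial fit in mu^2; the truncation of this expansion is where
# the systematic error of the continuation hides.
coeffs = np.polyfit(mu2_data, obs, deg=2)

mu_real = 0.5                             # target real chemical potential (in units of T)
print(np.polyval(coeffs, mu_real ** 2))   # continued estimate at real mu
```
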
The last plenary talk of the morning was Takeshi Yamazaki talking on a determination of the binding energy of helium nuclei in quenched QCD. The effort involved is considerable (there are more than 1000 different contractions for 4He, and the lattices considered have to be very large to be able to accommodate a helium nucleus and to distinguish between true bound states and attractive scattering states), even though the simulations were quenched and the valence quarks used corresponded to a pion mass of about 800 MeV. The study found that helium nuclei are indeed bound.

In the afternoon there were parallel sessions.

Thursday, June 17, 2010

Lattice 2010, Days Two and Three

Yesterday was an all-parallels day, so there are no plenary talks to summarise. In the evening there was the poster session.

The internet connection at the resort does not really have the capacity to deal with 360 computational physicists all reading their email, checking on their running computer jobs, browsing the hep-lat arXiv or writing their blogs at the same time; this may lead to late updates from me, so please be patient.

Today's first plenary session was the traditional non-lattice plenary. The first talk was by Eytan Domany, who spoke about the challenges posed to computational science by the task of understanding the human genome. A large part of his talk was an introduction to the biological concepts involved, such as DNA, chromosomes, genes, RNA, transcription, transcription factors, ribosomes, gene expression, exons, introns, "junk" DNA, regulation networks and epigenetics. These days, it is possible to analyse the expression of thousands of genes in a sample by means of a single chip, and the data obtained by performing this kind of analysis on large numbers of samples (e.g. from different kinds of cells or from different patients) can be seen as an expression matrix with rows for genes and columns for samples. The difficult task is then to use this kind of large data matrix to infer regulation networks or connections between gene expression and phenotypes. Apparently, there are physicists working in this area together with the biologists, bringing in their computational expertise.

The second plenary talk was an LHC status summary given by Slawek Tkaczyk. The history of the LHC is of course well known to readers of this blog; so far, the first data are being analysed to "rediscover" the Standard Model with the aim of discovering new physics in the not too distant future, but no evidence of e.g. the Higgs or SUSY was shown (yet?).

The second plenary session was devoted to non-QCD lattice simulations. The first talk was Renate Loll speaking on Lattice Quantum Gravity, specifically on causal dynamical triangulations. This approach to Quantum Gravity starts from the path integral for the Einstein-Hilbert action of General Relativity and regularises it by replacing continuous spacetime with a discrete triangulation. The discrete spacetime is then a simplicial complex satisfying certain additional requirements, and the Wick-rotated path integral can be treated using Monte Carlo techniques. In one phase of the (three-parameter) theory, the macroscopic structure of the resulting spacetime has been found to agree with de Sitter space. Another surprising and interesting result of this approach has been that the spectral dimension associated with the diffusion of particles on the discrete spacetime goes continuously from around 2 at short (Planckian) distances to 4 at large distances.

Next was a talk on exact lattice SUSY by Simon Catterall. Normally, a lattice regularisation completely ruins supersymmetry, but theorists have found a way to formulate certain classes of supersymmetric theories (including N=4 Super-Yang-Mills) on a special kind of lattice, giving a local, gauge-invariant action with a doubler-free fermion formulation. This may offer a chance to study quantum gravity by simulations of lattice SUSY via the AdS/CFT correspondence.

In the afternoon there were excursions. I had signed up to the only excursion for which places were still available, which was a tour of a Sardinian winery with a wine tasting. The tour was not too interesting, as everything was very technologically modern, and as somebody said, we can go and look at the LHC if we want to see modern technology. The wines tasted were very nice, though.

Monday, June 14, 2010

Lattice 2010, Day One

Hello from the Atahotel Tanka Village Resort in Villasimius, Sardinia, Italy, where I am at the Lattice 2010 conference.

The conference started this morning with a talk by Martin Lüscher about "Topology, the Wilson flow and the HMC algorithm". It is by now well known in the lattice community that Monte Carlo simulations of lattice QCD suffer from a severe problem with long autocorrelations of the topological charge of the gauge field. This problem affects the HMC algorithm and its variants that are used in lattice simulations with dynamical fermions just as much as the simple link updating schemes (Metropolis, heat bath) that can be used for pure gauge or quenched calculations. The autocorrelation time of the topological charge grows roughly like the fifth power of the inverse lattice spacing a as a is taken to zero. This is a real problem because it indicates the presence in the simulated system of modes that are updated only very slowly, and as a consequence the statistical errors of observables measured in Monte Carlo simulations may be seriously underestimated, because the contributions to the error coming from the long tails of the autocorrelation function that stem from those modes are not properly taken into account.

Martin Lüscher then introduced the Wilson flow, which is an evolution in field space generated by the Wilson plaquette action, and which can in some sense be seen as consisting of a sequence of infinitesimal stout link smearings. For the case of an abelian gauge theory, the flow equation can be solved exactly via the heat kernel, and it can be shown that it gives renormalised smooth solutions; for QCD, the same can be seen to be true numerically. Defining a transformed field V(U) by running the Wilson flow for a specified time t0, it can then be shown that the path integral over U is the same as the path integral over V(U) with an additional term in the action that comes from the Jacobian of the transformation and is proportional to g0/a times the integral of the Wilson plaquette action along the flow trajectory. As a goes to zero, the latter term will act to suppress large values of the plaquette. An old theorem of Lüscher's shows that the submanifold of field space with plaquette values less than 0.067 divides into topological sectors, and hence the probability to be "between" topological sectors decays in line with the suppression of large plaquettes by the g0/a term. This explains the problem seen, but also offers hope for a solution, since one might now try to develop algorithms that make progress by making large changes to the smooth fields V.
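
The gradient-flow idea itself is easy to illustrate in a toy setting. The following sketch (my own illustration, not Lüscher's implementation) integrates dθ/dt = -∂S_W/∂θ for a two-dimensional compact U(1) gauge field with a simple Euler stepper and shows the average plaquette being driven towards 1, i.e. the configuration becoming smooth; the real Wilson flow acts on SU(3) links and is integrated with higher-order schemes.

```python
import numpy as np

# Gradient flow for a 2D compact U(1) gauge field with the Wilson plaquette
# action S = sum_x (1 - cos P(x)): integrate d(theta)/dt = -dS/d(theta) and
# watch the average plaquette cos(P) rise towards 1 (the field smoothing out).
L, eps, n_steps = 16, 0.02, 200
rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi, np.pi, size=(2, L, L))   # theta[mu, x, y], mu = 0, 1

def plaquette_angle(th):
    # P(x) = theta_0(x) + theta_1(x + e0) - theta_0(x + e1) - theta_1(x)
    return (th[0] + np.roll(th[1], -1, axis=0)
            - np.roll(th[0], -1, axis=1) - th[1])

for step in range(n_steps):
    P = plaquette_angle(theta)
    s = np.sin(P)
    grad = np.empty_like(theta)
    grad[0] = s - np.roll(s, 1, axis=1)     # dS/d theta_0(x) = sin P(x) - sin P(x - e1)
    grad[1] = -s + np.roll(s, 1, axis=0)    # dS/d theta_1(x) = -sin P(x) + sin P(x - e0)
    theta -= eps * grad                     # forward-Euler flow step
    if step % 50 == 0:
        print(step, np.cos(plaquette_angle(theta)).mean())
```
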
This was followed by two review talks. The first was a review of the state of the art in hadron spectroscopy and light pseudoscalar decay constants by Christian Hölbling, emphasizing the reduction of systematic errors achieved by decreasing lattice spacings and pion masses and increasing simulation volumes.

The second review talk of the morning was given by Constantia Alexandrou, who reviewed hadron structure and form factor calculations from the lattice, drawing attention to the many remaining uncertainties in this important area, where in particular the axial charge gA of the nucleon is consistently measured to be significantly lower on the lattice than in nature.

The last plenary speaker of the day was Gregorio Herdoiza, who spoke about the progress being made towards 2+1+1 flavour simulations. The collaborations currently pursuing the ambitious goal of including a fully dynamical charm quark in their simulations are ETMC and MILC. MILC is using the Highly Improved Staggered Quark (HISQ) action to reduce discretisation errors, whereas ETMC is relying on a variant of twisted mass fermions with an explicit breaking of the mass degeneracy for the strange/charm doublet. In the former case, the effects of reduced lattice artifacts are clearly seen, while in the latter case the O(a²) mass splitting between the neutral and charged pion increases with the number of flavours. In either case, a significant effort is necessary to tune the strange and charm quark masses to their physical values, but the effort is definitely well spent if it leads to Nf=2+1+1 predictions from lattice QCD that include all effects of an active charm quark.

In the afternoon there were parallel talks. Two that I'd like to highlight were the talk of Bastian Knipschild from Mainz, who presented an efficient method to strongly reduce the systematic error on nucleon form factors coming from excited-state contributions, and David Adams' talk, in which he presented a generalisation of the overlap operator to staggered fermions that gives a chiral two-flavour theory.

Tuesday, May 18, 2010

Another chink in the armor of the Standard Model?

Via Resonaances: The D0 collaboration has a new paper on the arXiv in which they present their observation of a like-sign muon charge asymmetry in B meson decays.

Neutral B mesons can decay into an antimuon, a mu neutrino and other stuff (B0 --> μ+ νμ Xc) via the weak quark-level transition b̄ --> c̄ W+, and neutral anti-B mesons can accordingly decay into a muon, a mu antineutrino and other stuff. However, neutral B mesons can oscillate into their antiparticles and back, so that if a B-Bbar pair is created in a collision, and one particle of the pair decays into a muon-neutrino pair while in its original state whereas the other decays into a muon-neutrino pair after turning into the antiparticle of its original state, both of them will decay into muons, or both into antimuons -- giving a like-sign muon pair.

If CP were an exact symmetry of nature, the rates for the oscillations and decays would be equal for B and anti-B mesons, but since it is not, CP violation leads to a difference between the rates at which the initial B-Bbar pair decays into positive and negative like-sign muon pairs -- a charge asymmetry. The Standard Model predicts a very small such charge asymmetry, stemming from the complex phase in the CKM matrix.

What the D0 collaboration have done is to measure this charge asymmetry, carefully subtracting (hopefully) all sources of background, and they obtained a result that is about two orders of magnitude larger than the Standard Model prediction! Of course the experimental result has statistical and systematic errors, and thus the relevant measure of deviation from the Standard Model is only about 3σ ... still, this is another chink in the armor of the Standard Model.

What I find interesting is that all of the hints of physics beyond the Standard Model in flavour physics come from particles containing a strange (rather than an up or down) quark alongside a heavy flavour. The contribution to the charge asymmetry from B0d decays is well constrained by other experiments, so most of the D0 result would appear to be coming from the B0s system. I'm not a BSM phenomenologist, but I could imagine this to be relevant input for an understanding of possible BSM physics.

The Standard Model predictions rely on hadronic quantities such as decay constants, form factors and mixing parameters of the B meson, which must be determined nonperturbatively in lattice QCD. Better accuracy here could have real impact on the most stringent tests of the Standard Model that we have so far, and this is an area where significant progress is being made.

Monday, May 10, 2010

Bloggy stuff

This will be an unusually bloggy post for this blog, consisting as it does of two unrelated remarks, one of which is not terribly relevant to anything.

Firstly (and irrelevantly), I noticed that Google are now classifying the blogs on blogspot by some sort of content-matching algorithm -- if you click on "next blog" in the title bar, you are most likely going to see another physics blog, or at least a blog by some physicist; if that physicist happens to blog about cooking, the next blog after that might be another culinary blog, or if he happens to be based in Texas, it might be a blog about a trip to Texas, and so forth. That's a neat trick that makes the "next blog" feature actually at least somewhat useful.

Secondly, when people talk about how the Euro is losing against the Dollar, which has to mean the end of the EU or even of civilisation as we know it, I wonder how short their memory spans are. Here's a little memory aid -- click on "10y" on the chart and behold, the Euro is 42.4% higher against the Dollar than it was in early 2000, when it was actually worth less than $1 ...

Thursday, May 06, 2010

ICHEP 2010 has a blog

As my readers will know, this blog is most active during the conference season, when I blog from the annual lattice conference and possibly also from other meetings. I believe that conference blogging is both a service to those members of the physics community who for whatever reasons cannot personally attend the conference, and also to the wider public, who can get an insight into what scientists do and talk about at their meetings. It is thus a great pleasure for me to be able to announce that ICHEP 2010 will have an official conference blog, where bloggers from the high energy particle physics community will post on the conference and on current topics in high energy physics in general.

Friday, January 29, 2010

Excited states from the lattice, 1 of n

This post is intended as the first in a series about techniques for the extraction of information on excited states of hadrons from lattice QCD calculations.

As a reminder, what we measure in lattice QCD are correlation functions C(t) = <O(t)O(0)> of composite fields O(t). From Feynman's functional integral formula, these are equal to the vacuum expectation value of the corresponding products of operators. Changing from the Heisenberg to the Schrödinger picture, it is straightforward to show that (for infinite temporal extent of the lattice) these have a spectral representation C(t) = Σn |ψn|² exp(-En t), which in principle contains all information about the energies En and matrix elements ψn = <0|O|n> of all states in the theory.
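
As a quick illustration of what this representation implies for the data analysis (with invented energies and overlaps), one can build a synthetic correlator and look at the standard effective mass, which plateaus at the ground-state energy only once the excited-state contributions have died away:

```python
import numpy as np

# Build a synthetic two-point correlator from the spectral representation
# C(t) = sum_n |psi_n|^2 exp(-E_n t) with invented energies and overlaps,
# and look at the effective mass m_eff(t) = log[C(t)/C(t+1)].
E = np.array([0.5, 0.9, 1.4])          # toy energies in lattice units
psi2 = np.array([1.0, 0.6, 0.3])       # toy |<0|O|n>|^2
t = np.arange(0, 24)
C = (psi2[:, None] * np.exp(-np.outer(E, t))).sum(axis=0)

m_eff = np.log(C[:-1] / C[1:])         # approaches E_1 = 0.5 at large t
for ti, m in zip(t[:-1], m_eff):
    print(ti, round(m, 4))
```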

The problem with getting that information from the theory is twofold: Firstly, we only measure the correlator on a finite number of timeslices; the task of inferring an infinite number of En and ψn from a finite number of C(tk) is therefore infinitely ill-conditioned. Secondly, and more importantly, the measured correlation functions have associated statistical errors, and the number of timeslices on which the excited states' (n>1) contributions are larger than the error is often rather small. We are therefore faced with a difficult data analysis task.

The simplest idea for extracting information beyond the ground state would be to just perform a multi-exponential fit with a given number of exponentials on the measured correlator. This approach fails spectacularly, because multi-exponential fits are rather ill-conditioned. One finds that changing the number of fitted exponentials affects the best-fit values rather strongly, leading to a large and unknown systematic error; moreover, the fits will often tend to wander off into unphysical regions (negative energies, unreasonably large matrix elements for excited states). This instability therefore needs addressing if one wishes to use a χ²-based method for the analysis of excited-state masses.

The first such stabilisation that has been proposed and is widely used is known as Bayesian or constrained fitting. The idea here is to augment the χ² functional by prior information that one has about the spectrum of the theory (such as that energies are positive and less than the cutoff, but if one wishes also perhaps more stringent constraints coming e.g. from effective field theories or models). The justification for doing this is Bayes' theorem, which can be read as stating that the probability distribution of the parameters M given the data D is the product of the probability distribution of the data given the parameters times the probability distribution of the parameters absent any data: P(M|D) = P(D|M)/P(D) P(M); taking the logarithm of both sides and maximising over M, we then want to maximise log(P(D|M)) + log(P(M)). Now log(P(D|M)) is proportional to -χ²/2, so if P(M) were completely flat, we would simply end up minimising χ². If we take P(M) to be Gaussian instead, we end up with an augmented χ² that contains an additional term Σn (Mn-In)²/σn², which forces the parameters Mn towards their initial guesses ("priors") In and hence stabilises the fit -- in principle even with an infinite number of fit parameters.

The widths σn are arbitrary in principle; fitted values Mn that noticeably depend on σn are determined by the priors and not by the data and must be discarded. In practice the lowest few energies and matrix elements do not show a significant dependence on σn or on the number of higher states included in the fit, and may therefore be taken to have been determined by the data.
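
Here is a minimal sketch of how such an augmented χ² fit can look in practice (Python with scipy; the data, priors and widths are all invented for illustration, and the parameter bookkeeping is kept deliberately simple):

```python
import numpy as np
from scipy.optimize import least_squares

# Constrained (Bayesian) multi-exponential fit: the usual chi^2 residuals are
# augmented by prior residuals (M_n - I_n)/sigma_n. Everything below is a toy.
t = np.arange(2, 20)
E_true, A_true = np.array([0.5, 0.9]), np.array([1.0, 0.6])
rng = np.random.default_rng(3)
C_err = 1e-3 * np.exp(-0.4 * t)                            # toy errors
C_data = (A_true[:, None] * np.exp(-np.outer(E_true, t))).sum(axis=0) \
         + C_err * rng.standard_normal(len(t))

n_exp = 3                                                  # more terms than the data can pin down
prior = np.concatenate([np.full(n_exp, 0.8), np.full(n_exp, 0.7)])   # I_n for (A_n, E_n)
width = np.concatenate([np.full(n_exp, 0.8), np.full(n_exp, 0.5)])   # sigma_n

def model(p):
    A, E = p[:n_exp], p[n_exp:]
    return (A[:, None] * np.exp(-np.outer(E, t))).sum(axis=0)

def residuals(p):
    data_res = (model(p) - C_data) / C_err                 # ordinary chi^2 part
    prior_res = (p - prior) / width                        # augmentation term
    return np.concatenate([data_res, prior_res])

fit = least_squares(residuals, prior)
print("A_n:", fit.x[:n_exp].round(3))
print("E_n:", fit.x[n_exp:].round(3))
```

The prior residuals act exactly like extra data points, so an off-the-shelf least-squares minimiser can be used unchanged.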

Bayesian fitting is a very powerful tool, but not everyone is happy with it. One objection is that adding any external information, even as a constraint, compromises the status of lattice QCD as a first-principles determination of physical quantities. Another common worry is the GIGO (garbage in-garbage out) principle with regards to the priors.

A way to address the former concern that has been proposed is the Sequential Empirical Bayes Method (SEBM). Here, one first performs an unstabilised single-exponential fit at large times t, where the ground state is known to dominate. Then one performs a constrained two-exponential fit over a larger range of t using the first fit result as a prior (with its error as the width). The result of this fit is then used as the prior in another three-exponential fit over an even larger time range, and so forth. (There is some variation as to the exact procedure followed, but this is the basic idea). In this way, all priors have been determined by the data themselves.
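
A compact sketch of the sequential procedure might look as follows (again with synthetic data and invented fit windows; a real analysis would be considerably more careful about fit ranges, error estimation and fit stability):

```python
import numpy as np
from scipy.optimize import least_squares

# Sequential empirical Bayes sketch on synthetic data: an unconstrained
# one-exponential fit at large t seeds the priors for a constrained
# two-exponential fit on a wider window, and so on.
t_all = np.arange(1, 25)
E_true, A_true = np.array([0.5, 0.9, 1.4]), np.array([1.0, 0.6, 0.3])
rng = np.random.default_rng(7)
err = 1e-3 * np.exp(-0.4 * t_all)
data = (A_true[:, None] * np.exp(-np.outer(E_true, t_all))).sum(axis=0) \
       + err * rng.standard_normal(len(t_all))

def fit_step(n_exp, t_min, prior, width):
    """One step: fit n_exp exponentials for t >= t_min, constraining the
    parameters determined in the previous step by Gaussian priors."""
    mask = t_all >= t_min
    def residuals(p):
        A, E = p[0::2], p[1::2]          # parameters interleaved as (A_1, E_1, A_2, E_2, ...)
        model = (A[:, None] * np.exp(-np.outer(E, t_all[mask]))).sum(axis=0)
        res = (model - data[mask]) / err[mask]
        if prior is not None:
            res = np.concatenate([res, (p[:len(prior)] - prior) / width])
        return res
    p0 = np.tile([0.8, 0.8], n_exp)
    if prior is not None:
        p0[:len(prior)] = prior
    fit = least_squares(residuals, p0)
    cov = np.linalg.inv(fit.jac.T @ fit.jac)   # rough Gauss-Newton error estimate
    return fit.x, np.sqrt(np.diag(cov))

prior, width = None, None
for n_exp, t_min in [(1, 12), (2, 6), (3, 2)]:  # invented fit windows
    prior, width = fit_step(n_exp, t_min, prior, width)
    print(n_exp, prior.round(3))
```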

In the next post of this series we will look at a completely different approach to extracting excited-state masses and matrix elements that does not rely on χ² at all.

Friday, January 08, 2010

New book on the lattice

There was a time when the only textbooks on lattice QCD were Montvay & Münster and Creutz. Not so any more. Now the new textbook "Quantum Chromodynamics on the Lattice: An Introductory Presentation" by Christof Gattringer and Christian Lang (Lecture Notes in Physics 788, Springer) offers a thorough and accessible introduction for beginners.

Gattringer and Lang start from a derivation of the path integral in the context of quantum mechanics and, after introducing the naive discretisation of lattice fermions and the Wilson gauge action, first present the lattice formulation of pure gauge theory, including the Haar measure and gauge fixing, with Wilson and Polyakov loops and the static quark potential as the observables of interest. Numerical simulation techniques for pure gauge theory are discussed along with the most important data analysis methods. Then fermions are introduced properly, starting from the properties of Grassmann variables and a discussion of the doubling problem and the Wilson fermion action, followed by chapters on hadron spectroscopy (including some discussion of methods for extracting excited states), chiral symmetry on the lattice (leading through the Nielsen-Ninomiya theorem and the Ginsparg-Wilson relation to the overlap operator) and methods for dynamical fermions. Chapters on Symanzik improvement and the renormalisation group, on lattice fermion formulations other than Wilson and overlap, on matrix elements and renormalisation, and on finite temperature and density round off the volume.

The book is intended as an introduction, and as such it is expected that more advanced topics are treated briefly or only hinted at. Whether the total omission of lattice perturbation theory (apart from a reference to the review by Capitani) is justified probably depends on your personal point of view -- the book clearly intends to treat lattice QCD as a fully non-perturbative theory in all respects. There are some other choices leading to the omission or near-omission of various topics of interest: The Wilson action is used both for gluons and quarks, although staggered, domain wall and twisted mass fermions, as well as NRQCD/HQET, are discussed in a separate chapter. The calculation of the spectrum takes the front seat, whereas the extraction of Standard Model parameters and other issues related to renormalisation are relegated to a more marginal position.

All of these choices are, however, very suitable for a book aimed at beginning lattice theorists who will benefit from the very detailed derivations of many important relations that are given with many intermediate steps shown explicitly. Very little prior knowledge of field theory is assumed, although some knowledge of continuum QFT is very helpful, and a good understanding of general particle physics is essential. The bibliographies at the end of each chapter are up to date on recent developments and should give readers an easy way into more advanced topics and into the research literature.

In short, this book is a gentle, but thorough introduction to the field for beginners which may also serve as a useful reference for more advanced students. It definitely represents a nice addition to your QCD bookshelf.