To simulate lattice theories with dynamical fermions, one wants an exact algorithm that performs global updates, because local updates are not cheap if the action is not local (as is the case with the fermionic determinant), and which can take large steps through configuration space to avoid critical slowing down. An algorithm satisfying these demands is Hybrid Monte Carlo (HMC). HMC is based on the idea of simulating a dynamical system with Hamiltonian H = 1/2 p² + S(q), where one introduces fictitious conjugate momenta p for the original configuration variables q, and treats the action as the potential of the fictitious dynamical system. If one now generates a Markov chain with fixed-point distribution e^{-H(p,q)}, then the distribution of q ignoring p (the "marginal distribution") is the desired e^{-S(q)}.

To build such a Markov chain, one alternates two steps: Molecular Dynamics Monte Carlo (MDMC) and momentum refreshment.

MDMC is based on the fact that besides conserving the Hamiltonian, the time evolution of a Hamiltonian system preserves the phase space measure (by Liouville's theorem). So if at the end of a Hamiltonian trajectory of length τ we reverse the momentum, we get a mapping from (p,q) to (-p',q') and *vice versa*, thus obeying detailed balance: e^{-H(p,q)} P((-p',q'),(p,q)) = e^{-H(p',q')} P((p,q),(-p',q')), ensuring the correct fixed-point distribution. Of course, we can't actually integrate Hamilton's equations exactly in general; instead, we are content with numerical integration using an integrator that preserves the phase space measure exactly (more about which presently), but only approximately conserves the Hamiltonian. We make the algorithm exact nevertheless by adding a Metropolis step that accepts the new configuration with probability min(1, e^{-δH}), where δH is the change in the Hamiltonian under the numerical integration.

The Markov step of MDMC is of course totally degenerate: the transition probability is essentially a δ-distribution, since one can only get to one other configuration from any one configuration, and this relation is reciprocal. So while it does indeed satisfy detailed balance, this Markov step is hopelessly non-ergodic.

To make it ergodic without ruining detailed balance, we alternate between MDMC and momentum refreshment, where we redraw the fictitious momenta at random from a Gaussian distribution without regard to their present value or that of the configuration variables q: P((p',q),(p,q)) ∝ e^{-1/2 p'²}. Obviously, this step preserves the desired fixed-point distribution (which is after all simply Gaussian in the momenta). It is also obviously non-ergodic, since it never changes the configuration variables q. However, it does allow large changes in the Hamiltonian and breaks the degeneracy of the MDMC step.

While it is generally not possible to prove with any degree of rigour that the combination of MDMC and momentum refreshment is ergodic, intuitively and empirically this is indeed the case. What remains to be done to make this a practical algorithm is to find numerical integrators that exactly preserve the phase space measure.
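To show the full alternation in one place, here is a toy HMC sampler in Python, entirely my own sketch: a single variable with action S(q) = q²/2, so that the target distribution e^{-S} is a standard normal, using the leapfrog integrator constructed later in this post. The momentum flip can be omitted here, since p is redrawn immediately afterwards anyway.

```python
import math
import random

def leapfrog(p, q, dSdq, dt, n):
    """Leapfrog: half kick, then n drifts with full kicks in between, half kick."""
    p -= 0.5 * dt * dSdq(q)
    for i in range(n):
        q += dt * p
        if i < n - 1:
            p -= dt * dSdq(q)
    p -= 0.5 * dt * dSdq(q)
    return p, q

def hmc(S, dSdq, q0, n_samples, dt=0.2, n_steps=10, seed=0):
    """Alternate momentum refreshment and MDMC; return the chain of q values."""
    rng = random.Random(seed)
    q = q0
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)  # momentum refreshment: Gaussian, independent of q
        H_old = 0.5 * p * p + S(q)
        p_new, q_new = leapfrog(p, q, dSdq, dt, n_steps)
        dH = 0.5 * p_new * p_new + S(q_new) - H_old
        if dH <= 0.0 or rng.random() < math.exp(-dH):  # Metropolis accept/reject
            q = q_new
        samples.append(q)
    return samples
```

For this Gaussian action, the samples should reproduce the mean 0 and variance 1 of the standard normal distribution.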

This order is fulfilled by symplectic integrators. The basic idea is to consider the time evolution operator exp(τ d/dt) = exp(τ(-∂_q H ∂_p + ∂_p H ∂_q)) = exp(τh) as the exponential of a differential operator on phase space. We can then decompose the latter as h = -∂_q H ∂_p + ∂_p H ∂_q = P + Q, where P = -∂_q H ∂_p and Q = ∂_p H ∂_q. Since ∂_q H = S'(q) and ∂_p H = p, we can immediately evaluate the action of e^{τP} and e^{τQ} on the state (p,q) by applying Taylor's theorem: e^{τQ}(p,q) = (p, q+τp), and e^{τP}(p,q) = (p-τS'(q), q).
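In code, these two maps are one-liners (my own illustrative naming). Each is a shear: its Jacobian is triangular with unit diagonal, so its determinant is exactly 1, which is precisely the area preservation we need.

```python
def exp_tQ(tau, p, q):
    """e^{τQ}, the 'drift': shear along q by τp, leaving p untouched."""
    return p, q + tau * p

def exp_tP(tau, p, q, dSdq):
    """e^{τP}, the 'kick': shear along p by -τS'(q), leaving q untouched."""
    return p - tau * dSdq(q), q
```

Any composition of these maps (with any time steps) is therefore also area preserving.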

Since each of these maps is simply a shear along one direction in phase space, they are clearly area preserving; so are all their powers and mutual products. In order to combine them into a suitable integrator, we need the Baker-Campbell-Hausdorff (BCH) formula.

The BCH formula says that for two elements A, B of an associative algebra, the identity

log(e^A e^B) = A + (∫_0^1 ((x log x)/(x-1))|_{x = e^{ad A} e^{t ad B}} dt)(B)

holds, where (ad A)(B) = [A,B], and the exponential and logarithm are defined via their power series (around the identity in the case of the logarithm). Expanding the first few terms, one finds

log(e^A e^B) = A + B + 1/2 [A,B] + 1/12 [A-B,[A,B]] - 1/24 [B,[A,[A,B]]] + ...
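One can check this expansion numerically with small random matrices. The following sketch is mine (it assumes NumPy and SciPy are available) and verifies that, after subtracting the quoted terms, the residual is of higher order in the matrix norms:

```python
import numpy as np
from scipy.linalg import expm, logm

def comm(X, Y):
    """Matrix commutator [X, Y]."""
    return X @ Y - Y @ X

rng = np.random.default_rng(0)
A = 0.05 * rng.standard_normal((3, 3))
B = 0.05 * rng.standard_normal((3, 3))

# for matrices this close to the identity the log is real up to roundoff
exact = np.real(logm(expm(A) @ expm(B)))
series = (A + B + comm(A, B) / 2
          + comm(A - B, comm(A, B)) / 12
          - comm(B, comm(A, comm(A, B))) / 24)

residual = np.linalg.norm(exact - series)
```

The residual is fifth order, i.e. far smaller than the error one makes by truncating after A + B.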

Applying this to a symmetric product, one finds

log(e^{1/2 A} e^B e^{1/2 A}) = A + B - 1/24 [A+2B,[A,B]] + ...

where in both cases the dots denote fifth-order terms.
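The symmetric formula can be checked numerically in the same way (again my own sketch, assuming NumPy and SciPy); in my experiments the third-order term enters as -(1/24)[A+2B,[A,B]], and including it with that sign leaves only a fifth-order residual:

```python
import numpy as np
from scipy.linalg import expm, logm

def comm(X, Y):
    """Matrix commutator [X, Y]."""
    return X @ Y - Y @ X

rng = np.random.default_rng(1)
A = 0.05 * rng.standard_normal((3, 3))
B = 0.05 * rng.standard_normal((3, 3))

exact = np.real(logm(expm(A / 2) @ expm(B) @ expm(A / 2)))
third = -comm(A + 2 * B, comm(A, B)) / 24  # third-order term of the expansion
residual = np.linalg.norm(exact - (A + B + third))
```

Note that there is no even-order term at all: symmetric products have purely odd-order error expansions, which is a consequence of their reversibility.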

We can then use this to build symmetric products (we want symmetric products to ensure reversibility) of e^{δτ P} and e^{δτ Q} that are equal to e^{τh} up to some controlled error. The simplest example is

(e^{δτ/2 P} e^{δτ Q} e^{δτ/2 P})^{τ/δτ} = e^{τ(P+Q)} + O((δτ)²)

and more complex examples can be found that either improve on the order of the error (although doing so requires one to use negative time steps -δτ as well as positive ones) or minimize the error by splitting the force term P into pieces P_i that each get their own time step δτ_i to account for their different sizes.
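The quoted O((δτ)²) accuracy is easy to see empirically. The following sketch (mine, for the harmonic oscillator S(q) = q²/2) halves the step size and watches the maximum energy violation along the trajectory drop by roughly a factor of four:

```python
def leapfrog_max_dH(dt, tau, p0=0.0, q0=1.0):
    """Integrate the harmonic oscillator S(q) = q^2/2 with leapfrog steps
    and return the maximum |H(t) - H(0)| along the trajectory."""
    H = lambda p, q: 0.5 * p * p + 0.5 * q * q
    p, q = p0, q0
    H0 = H(p, q)
    max_dH = 0.0
    for _ in range(int(round(tau / dt))):
        # one step of e^{δτ/2 P} e^{δτ Q} e^{δτ/2 P} with S'(q) = q
        p -= 0.5 * dt * q
        q += dt * p
        p -= 0.5 * dt * q
        max_dH = max(max_dH, abs(H(p, q) - H0))
    return max_dH

err = leapfrog_max_dH(0.1, tau=10.0)
err_half = leapfrog_max_dH(0.05, tau=10.0)
ratio = err / err_half  # close to 4 for a second-order integrator
```

This is exactly the δH that enters the Metropolis test, so smaller step sizes buy higher acceptance rates at the cost of more force evaluations.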

Next time we will hear more about how to apply all of this to simulations with dynamical fermions.