Thursday, November 29, 2007

Algorithms for dynamical fermions -- preliminaries

It has been a while since we had any posts with proper content on this blog. Lest my readers become convinced that this blog has become a links-only intellectual wasteland, I hereby want to commence a new series on algorithms for dynamical fermions (blogging alongside our discussion seminar at DESY Zeuthen/Humboldt University, where we are reading this review paper; I hope that is not too lazy to lift this blog above the waste level...).

I will assume that readers are familiar with the most basic ideas of Markov Chain Monte Carlo simulations: essentially, one samples the space of states of a system by generating a chain of states using a Markov process (a random process where the transition probability to any other state depends only on the current state, not on any of the prior history of the process). If we call the desired distribution of states Q(x) (which in field theory will be a Boltzmann factor Z^-1 e^-S(x)), and the probability that the Markov process takes us to x starting from y P(x,y), we want to require that the Markov process keep Q(x) invariant, i.e. Q(x) = Σ_y P(x,y) Q(y). A sufficient, but not necessary, condition for this is that the Markov process satisfy the condition of detailed balance: P(y,x)Q(x) = P(x,y)Q(y).
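As a toy illustration (my own, not from the review), one can check both conditions numerically for a hypothetical two-state system with a hand-built transition matrix:

```python
import numpy as np

# Target distribution Q and transition matrix P for a made-up two-state
# system; P[x, y] is the probability of moving to x starting from y,
# so each column of P sums to one.
Q = np.array([0.25, 0.75])
P = np.array([[2/3, 1/9],
              [1/3, 8/9]])

# Detailed balance: P(y,x) Q(x) = P(x,y) Q(y) for all pairs of states.
for x in range(2):
    for y in range(2):
        assert np.isclose(P[y, x] * Q[x], P[x, y] * Q[y])

# It follows that Q is invariant: Q(x) = sum_y P(x, y) Q(y).
assert np.allclose(P @ Q, Q)
```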

The simplest algorithm that satisfies detailed balance is the Metropolis algorithm: choose a candidate x at random and accept it with probability min(1, Q(x)/Q(y)), or else keep the previous state y as the next state.
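A minimal sketch of a Metropolis update, assuming for illustration a symmetric random-walk proposal and a simple quadratic action S(x) = x^2/2 (so that Q(x) ~ e^-S(x) is a unit Gaussian and we can check the result):

```python
import numpy as np

rng = np.random.default_rng(0)

def S(x):
    """Illustrative action; S(x) = x^2/2 gives a Gaussian Q(x) ~ exp(-S(x))."""
    return 0.5 * x**2

def metropolis_step(y, step=1.0):
    """One Metropolis update: propose a candidate x near y (symmetric
    proposal), accept with probability min(1, Q(x)/Q(y)), else keep y."""
    x = y + rng.uniform(-step, step)
    if rng.random() < np.exp(S(y) - S(x)):   # Q(x)/Q(y) = e^{S(y) - S(x)}
        return x                             # accept the candidate
    return y                                 # reject: previous state repeats

# Generate a chain and compare the sample variance to the exact value 1.
chain = np.empty(200_000)
x = 0.0
for t in range(len(chain)):
    x = metropolis_step(x)
    chain[t] = x
print(chain.var())  # should come out close to 1
```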

Another property that we want our Markov chain to have is ergodicity, i.e. that the probability to go from any state to any other state is non-zero. While for a state space as huge as that of a lattice field theory it may be hard to design a single ergodic Markov step, we can achieve ergodicity by chaining several non-ergodic Markov steps (such as first updating site 1, then site 2, etc.) so as to obtain an overall Markov step that is ergodic. As long as each substep has the right fixed-point distribution Q(x), e.g. by satisfying detailed balance, the overall Markov step will also have Q(x) as its fixed-point distribution, in addition to being ergodic. This justifies generating updates by 'sweeping' through the lattice point by point with local updates.

Unfortunately, successive states of a Markov chain are not independent, but correlated with each other. This of course means that one does not get truly independent measurements from evaluating an operator on each of those states. To quantify how correlated successive states are, it is useful to introduce the idea of an autocorrelation time.

It is a theorem (which I won't prove here) that any ergodic Markov process has a fixed-point distribution to which it converges. If we consider P(x,y) as a matrix, this means that it has a unique eigenvalue λ_0 = 1, and all other eigenvalues λ_i (ordered so that |λ_{i+1}| ≤ |λ_i|) lie in the interior of the unit circle. If we start our process on a state u = Σ_i c_i v_i (where v_i is the eigenvector belonging to λ_i), then P^N u = Σ_i λ_i^N c_i v_i = c_0 v_0 + λ_1^N c_1 v_1 + ..., and hence the leading deviation from the fixed-point distribution decays exponentially with a characteristic time N_exp = -1/log|λ_1| called the exponential autocorrelation time.
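For a chain small enough to diagonalize explicitly, N_exp can be read off directly from the spectrum; a sketch with a made-up 3-state transition matrix:

```python
import numpy as np

# A made-up 3-state transition matrix P[x, y]; each column sums to one,
# so probability is conserved.
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

# Eigenvalue moduli in decreasing order: the fixed point corresponds to
# the unique eigenvalue lambda_0 = 1, all others lie inside the unit circle.
lam = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
print(lam)   # [1.0, 0.7, 0.7]

# Exponential autocorrelation time N_exp = -1 / log|lambda_1|.
N_exp = -1.0 / np.log(lam[1])
print(N_exp)  # about 2.8
```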

Unfortunately, we cannot readily determine the exponential autocorrelation time in any except the very simplest cases, so we have to look for a more accessible measure of autocorrelation. If we measure an observable O on each successive state x_t, we can define the autocorrelation function of O as the t-average of measurements that are d steps apart: C_O(d) = <O(x_{t+d}) O(x_t)>_t / <O(x_t)^2>_t, and the integrated autocorrelation time A_O = Σ_d C_O(d) gives us a measure of how many additional measurements we will need to iron out the effect of autocorrelations.
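A sketch of this estimate on synthetic data (conventions for the integrated autocorrelation time differ by factors of 2 and by whether d = 0 is included; here I sum over d ≥ 1). An AR(1) series x_{t+1} = a x_t + noise has C(d) = a^d exactly, so the sum should come out to a/(1-a):

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up time series with known autocorrelation: AR(1) with a = 0.8,
# for which A = sum_{d>=1} a^d = a/(1-a) = 4.
a = 0.8
n = 400_000
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = a * x[t] + rng.normal()

x -= x.mean()                                 # fluctuations about the mean
A = 0.0
for d in range(1, 100):                       # truncate once C(d) is negligible
    C = np.dot(x[d:], x[:-d]) / np.dot(x, x)  # C(d) = <x_{t+d} x_t>_t / <x_t^2>_t
    A += C
print(A)  # should be close to a/(1-a) = 4
```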

With these preliminaries out of the way, in the next post we will look at the Hybrid Monte Carlo algorithm.