I promised there were going to be some interesting posts, and I feel this is one of them. I want to talk about harnessing the power of evolution for the extraction of excited state masses from lattice QCD simulations.
OK, this sounds just outright crazy, right? Biology couldn't possibly have an impact on subnuclear physics (other than maybe by restricting the kinds of ideas our minds can conceive by the nature of our brains, which could of course well mean that the ultimate theory, if it exists, is unthinkable for a human being; but that is a rather pessimistic view, and I am talking about QCD here anyway). And indeed, biology has no impact on what is after all a much more fundamental discipline; but Darwin's great insight has applications far beyond the scope of mere biology. This insight, which I will roughly paraphrase as "starting from a set of entities which are subject to random mutations, and from which those least adapted to some external constraints are likely to be removed and displaced by new entities derived from, and similar to, those not so removed, one will after a long enough time end up with a set of entities that are close to optimally adapted to the external constraints", is the basis of the very active field of computer science known as evolutionary algorithms. And optimisation is at the core of extracting results from lattice simulations.
What people measure in lattice simulations are correlators of various lattice operators at different (euclidean) times, and these can be expanded in an eigenbasis of the Hamiltonian as

C(t) = &lt;O(t)O(0)&gt; = Σn cn exp(-En t)
(for periodic boundary conditions in the time direction the exponential becomes a cosh instead, but let's just ignore that for now), where the cn measure the overlap of the state created by the operator with the eigenstates of the Hamiltonian, and the En are the energies of those eigenstates. Of course only states that have quantum numbers compatible with those of the operator O will contribute (since otherwise cn=0).
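The cosh form arises because with a periodic time extent T each state propagates both forwards and backwards in time, so exp(-En t) gets replaced by exp(-En t) + exp(-En (T-t)) = 2 exp(-En T/2) cosh(En (T/2 - t)). A quick numerical check of this identity, with illustrative values for T and E:

```python
import numpy as np

# With time extent T, a state of energy E propagates both forward and
# backward around the periodic lattice: exp(-E*t) + exp(-E*(T-t)).
T, E = 32, 0.5   # illustrative values only
t = np.arange(T)
forward_plus_backward = np.exp(-E * t) + np.exp(-E * (T - t))
cosh_form = 2 * np.exp(-E * T / 2) * np.cosh(E * (T / 2 - t))
print(np.allclose(forward_plus_backward, cosh_form))  # True
```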
In order to extract the energies En from a measurement of the correlator &lt;O(ti)O(0)&gt;, one needs to fit the measured data with a sum of exponentials, i.e. one has to solve a non-linear least-squares fitting problem. Now, there are of course a number of algorithms (such as Levenberg-Marquardt) that are excellent at solving this kind of problem, so why look any further? Unfortunately, an algorithm such as Levenberg-Marquardt requires as input a number of things that are unknown in a typical lattice QCD data analysis situation: How many exponentials should the fitting ansatz use (obviously we can't fit all of the infinitely many states)? Which range of times should be fitted, and which should be disregarded as dominated by noise or contaminated by higher states not included in the fit? A number of Bayesian techniques designed to deal with this problem have sprung up over time (such as constrained fitting), and some of those deserve a post of their own at some point.
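For concreteness, here is a minimal sketch of such a multi-exponential fit, assuming Python with numpy and scipy; curve_fit uses a Levenberg-Marquardt-type algorithm by default, and all the numbers here are invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-state ansatz: C(t) = c0*exp(-E0*t) + c1*exp(-E1*t)
def correlator(t, c0, E0, c1, E1):
    return c0 * np.exp(-E0 * t) + c1 * np.exp(-E1 * t)

# Synthetic data with made-up "true" parameters and 1% relative noise.
rng = np.random.default_rng(0)
t = np.arange(1, 25)
clean = correlator(t, 1.0, 0.4, 0.6, 0.9)
data = clean * (1 + 0.01 * rng.standard_normal(t.size))

popt, pcov = curve_fit(correlator, t, data, p0=[1.0, 0.5, 0.5, 1.0],
                       sigma=0.01 * data, absolute_sigma=True)
print("c0, E0, c1, E1 =", popt)
```

Note that this already presupposes answers to the awkward questions above: the sketch hard-codes two exponentials, a fit range, and a starting guess not too far from the truth.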
From the evolutionary point of view, one can simply allow evolution to find the optimal values for difficult-to-optimise parameters like the fitting range and number of states to fit. Basically, one sets up an ecosystem consisting of organisms that encode a fitting function complete with the range over which it attempts to fit the data. The fitness of each organism is taken to be proportional to minus its χ2/(d.o.f.); this will tend to drive the evolution both towards increased fitting ranges and lower numbers of exponentials (to increase the number of degrees of freedom), but this tendency is counteracted by the worsening of χ2. The idea is that if one subjects these organisms to a regimen of mutation, cross-breeding and selection, evolution will ultimately lead to an equilibrium where the competing demands for small χ2 and large number of degrees of freedom balance in an optimal fashion.
Since Rob Petry here in Regina brought up this idea, I have been toying around with it for a while, and so far I am cautiously optimistic that it may lead somewhere: for the synthetic data sets that I let this method look at, it did pretty well at identifying the right number of exponentials to use when there was a clear-cut answer (such as when only finitely many states were present to start with). So the general method appears sound; it remains to be seen how well it does on actual lattice data.