Sunday, December 09, 2007

Algorithms for dynamical fermions -- Hybrid Monte Carlo

In the previous post in this series parallelling our local discussion seminar on this review, we reminded ourselves of some basic ideas of Markov Chain Monte Carlo simulations. In this post, we are going to look at the Hybrid Monte Carlo algorithm.

To simulate lattice theories with dynamical fermions, one wants an exact algorithm that performs global updates, because local updates are not cheap if the action is not local (as is the case with the fermionic determinant), and which can take large steps through configuration space to avoid critical slowing down. An algorithm satisfying these demands is Hybrid Monte Carlo (HMC). HMC is based on the idea of simulating a dynamical system with Hamiltonian H = 1/2 p^2 + S(q), where one introduces fictitious conjugate momenta p for the original configuration variables q, and treats the action as the potential of the fictitious dynamical system. If one now generates a Markov chain with fixed point distribution e^{-H(p,q)}, then the distribution of q ignoring p (the "marginal distribution") is the desired e^{-S(q)}.

To build such a Markov chain, one alternates two steps: Molecular Dynamics Monte Carlo (MDMC) and momentum refreshment.

MDMC is based on the fact that besides conserving the Hamiltonian, the time evolution of a Hamiltonian system preserves the phase space measure (by Liouville's theorem). So if at the end of a Hamiltonian trajectory of length τ we reverse the momentum, we get a mapping from (p,q) to (-p',q') and vice versa, thus obeying detailed balance: e^{-H(p,q)} P((-p',q'),(p,q)) = e^{-H(p',q')} P((p,q),(-p',q')), ensuring the correct fixed-point distribution. Of course, we can't actually integrate Hamilton's equations exactly in general; instead, we are content with numerical integration using an integrator that preserves the phase space measure exactly (more about which presently), but only approximately conserves the Hamiltonian. We make the algorithm exact nevertheless by adding a Metropolis step that accepts the new configuration with probability min(1, e^{-δH}), where δH is the change in the Hamiltonian under the numerical integration.

The Markov step of MDMC is of course totally degenerate: the transition probability is essentially a δ-distribution, since one can only get to one other configuration from any one configuration, and this relation is reciprocal. So while it does indeed satisfy detailed balance, this Markov step is hopelessly non-ergodic.

To make it ergodic without ruining detailed balance, we alternate between MDMC and momentum refreshment, where we redraw the fictitious momenta at random from a Gaussian distribution without regard to their present value or that of the configuration variables q: P((p',q),(p,q)) = e^{-1/2 p'^2}. Obviously, this step will preserve the desired fixed-point distribution (which is after all simply Gaussian in the momenta). It is also obviously non-ergodic since it never changes the configuration variables q. However, it does allow large changes in the Hamiltonian and breaks the degeneracy of the MDMC step.

While it is generally not possible to prove with any degree of rigour that the combination of MDMC and momentum refreshment is ergodic, intuitively and empirically this is indeed the case. What remains, in order to make this a practical algorithm, is to find numerical integrators that exactly preserve the phase space measure.

This requirement is fulfilled by symplectic integrators. The basic idea is to consider the time evolution operator exp(τ d/dt) = exp(τ(-∂_q H ∂_p + ∂_p H ∂_q)) = exp(τh) as the exponential of a differential operator on phase space. We can then decompose the latter as h = -∂_q H ∂_p + ∂_p H ∂_q = P + Q, where P = -∂_q H ∂_p and Q = ∂_p H ∂_q. Since ∂_q H = S'(q) and ∂_p H = p, we can immediately evaluate the action of e^{τP} and e^{τQ} on the state (p,q) by applying Taylor's theorem: e^{τQ}(p,q) = (p, q+τp), and e^{τP}(p,q) = (p-τS'(q), q).
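To make this concrete, here is a minimal sketch of the two maps in Python for a single degree of freedom, assuming a toy action S(q) = q^4/4 + q^2/2 (the action and all names are my own choices for illustration, not anything taken from the review):

def S_prime(q):
    # derivative of the assumed toy action S(q) = q**4/4 + q**2/2
    return q**3 + q

def drift(p, q, tau):
    # e^{tau Q}: shifts the coordinate along the momentum, leaves p unchanged
    return p, q + tau * p

def kick(p, q, tau):
    # e^{tau P}: shifts the momentum by the force -S'(q), leaves q unchanged
    return p - tau * S_prime(q), q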

Since each of these maps is simply a shear along one direction in phase space, they are clearly area preserving; so are all their powers and mutual products. In order to combine them into a suitable integrator, we need the Baker-Campbell-Hausdorff (BCH) formula.

The BCH formula says that for two elements A,B of an associative algebra, the identity

log(e^A e^B) = A + (∫_0^1 ((x log x)/(x-1))|_{x = e^{ad A} e^{t ad B}} dt)(B)

holds, where (ad A )(B) = [A,B], and the exponential and logarithm are defined via their power series (around the identity in the case of the logarithm). Expanding the first few terms, one finds

log(e^A e^B) = A + B + 1/2 [A,B] + 1/12 [A-B,[A,B]] - 1/24 [B,[A,[A,B]]] + ...

Applying this to a symmetric product, one finds

log(e^{A/2} e^B e^{A/2}) = A + B - 1/24 [A+2B,[A,B]] + ...

where in both cases the dots denote fifth-order terms.

We can then use this to build symmetric products (we want symmetric products to ensure reversibility) of e^{δτ P} and e^{δτ Q} that are equal to e^{τh} up to some controlled error. The simplest example is

(e^{δτ/2 P} e^{δτ Q} e^{δτ/2 P})^{τ/δτ} = e^{τ(P+Q)} + O((δτ)^2)

and more complex examples can be found that either increase the order of the error term (although doing so requires one to use negative time steps -δτ as well as positive ones) or minimize the error by splitting the force term P into pieces P_i, each of which gets its own time step δτ_i to account for their different sizes.
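To tie the pieces together, here is a minimal sketch of a complete HMC update for the same toy action as above, combining momentum refreshment, a leapfrog trajectory of the form (e^{δτ/2 P} e^{δτ Q} e^{δτ/2 P})^{τ/δτ}, and the Metropolis accept/reject step; the trajectory length, step number and toy action are arbitrary choices for illustration, not recommendations from the review:

import numpy as np

rng = np.random.default_rng(1)

def S(q):
    return 0.25 * q**4 + 0.5 * q**2      # assumed toy action

def S_prime(q):
    return q**3 + q

def leapfrog(p, q, tau, n_steps):
    # (e^{dt/2 P} e^{dt Q} e^{dt/2 P})^{n_steps} with dt = tau / n_steps
    dt = tau / n_steps
    for _ in range(n_steps):
        p -= 0.5 * dt * S_prime(q)       # half kick
        q += dt * p                      # drift
        p -= 0.5 * dt * S_prime(q)       # half kick
    return p, q

def hmc_update(q, tau=1.0, n_steps=20):
    p = rng.normal()                     # momentum refreshment from a Gaussian
    H_old = 0.5 * p**2 + S(q)
    p_new, q_new = leapfrog(p, q, tau, n_steps)
    H_new = 0.5 * p_new**2 + S(q_new)
    # Metropolis step: accept with probability min(1, e^{-dH})
    if rng.random() < np.exp(-(H_new - H_old)):
        return q_new
    return q

q, samples = 0.0, []
for _ in range(10000):
    q = hmc_update(q)
    samples.append(q)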

Next time we will hear more about how to apply all of this to simulations with dynamical fermions.

Thursday, November 29, 2007

Algorithms for dynamical fermions -- preliminaries

It has been a while since we had any posts with proper content on this blog. Lest my readers become convinced that this blog has become a links-only intellectual wasteland, I hereby want to commence a new series on algorithms for dynamical fermions (blogging alongside our discussion seminar at DESY Zeuthen/Humboldt University, where we are reading this review paper; I hope that is not too lazy an approach to lift this blog above the wasteland level...).

I will assume that readers are familiar with the most basic ideas of Markov Chain Monte Carlo simulations; essentially, one samples the space of states of a system by generating a chain of states using a Markov process (a random process where the transition probability to any other state depends only on the current state, not on any of the prior history of the process). If we call the desired distribution of states Q(x) (which in field theory will be a Boltzmann factor Z^{-1} e^{-S(x)}), and the probability that the Markov process takes us to x starting from y P(x,y), we want to require that the Markov process keep Q(x) invariant, i.e. Q(x) = Σ_y P(x,y) Q(y). A sufficient, but not necessary, condition for this is that the Markov process satisfy the condition of detailed balance: P(y,x)Q(x) = P(x,y)Q(y).

The simplest algorithm that satisfies detailed balance is the Metropolis algorithm: choose a candidate x at random and accept it with probability P(x,y) = min(1, Q(x)/Q(y)), or else keep the previous state y as the next state.
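As a toy illustration (my own example, not taken from the review), here is the Metropolis algorithm in Python, sampling a one-dimensional distribution Q(x) ∝ e^{-S(x)} with a symmetric random-walk proposal, for which the acceptance probability reduces to min(1, e^{-(S(x')-S(x))}):

import numpy as np

rng = np.random.default_rng(0)

def S(x):
    return 0.5 * x**2                    # toy action, so Q(x) is a unit Gaussian

x, chain = 0.0, []
for _ in range(100000):
    x_new = x + rng.uniform(-0.5, 0.5)   # symmetric random-walk proposal
    # accept with probability min(1, Q(x_new)/Q(x))
    if rng.random() < np.exp(-(S(x_new) - S(x))):
        x = x_new
    chain.append(x)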

Another property that we want our Markov chain to have is that it is ergodic, that is, that the probability to go to any state from any other state is non-zero. For a system with a state space as huge as that of a lattice field theory, it may be hard to design a single ergodic Markov step, but we can achieve ergodicity by chaining several different non-ergodic Markov steps (such as first updating site 1, then site 2, etc.) so as to obtain an overall Markov step that is ergodic. As long as each substep has the right fixed-point distribution Q(x), e.g. by satisfying detailed balance, the overall Markov step will also have Q(x) as its fixed-point distribution, in addition to being ergodic. This justifies generating updates by 'sweeping' through the lattice point by point with local updates.

Unfortunately, successive states of a Markov chain are not really very independent, but in fact have correlations between them. This of course means that one does not get truly independent measurements from evaluating an operator on each of those states. To quantify how correlated successive states are, it is useful to introduce the idea of an autocorrelation time.

It is a theorem (which I won't prove here) that any ergodic Markov process has a fixed-point distribution to which it converges. If we consider P(x,y) as a matrix, this means that it has a unique eigenvalue λ_0 = 1, and all other eigenvalues λ_i (|λ_{i+1}| ≤ |λ_i|) lie in the interior of the unit circle. If we start our process on a state u = Σ_i c_i v_i (where v_i is the eigenvector belonging to λ_i), then P^N u = Σ_i λ_i^N c_i v_i = c_0 v_0 + λ_1^N c_1 v_1 + ..., and hence the leading deviation from the fixed-point distribution decays exponentially with a characteristic time N_exp = -1/log|λ_1| called the exponential autocorrelation time.
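For a toy example (a two-state chain with an arbitrarily chosen transition matrix, using the convention that P[x, y] is the probability of going from y to x), this can be computed directly:

import numpy as np

P = np.array([[0.9, 0.3],
              [0.1, 0.7]])              # columns sum to one

lam = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
N_exp = -1.0 / np.log(lam[1])           # lam[0] = 1, lam[1] is the subleading eigenvalue
print(lam, N_exp)                       # here lam[1] = 0.6, so N_exp is about 2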

Unfortunately, we cannot readily determine the exponential autocorrelation time in any except the very simplest cases, so we have to look for a more accessible measure of autocorrelation. If we measure an observable O on each successive state x_t, we can define the autocorrelation function of O as the t-average of measurements that are d steps apart: C_O(d) = <O(x_{t+d}) O(x_t)>_t / <O(x_t)^2>_t, and the integrated autocorrelation time A_O = Σ_d C_O(d) gives us a measure of how many additional measurements we will need to iron out the effect of autocorrelations.
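Here is a rough sketch of how one might estimate C_O(d) and A_O from a series of measurements; it is a naive estimator with a hard cut-off d_max, it subtracts the mean (i.e. uses the connected correlator), and the window choice is ad hoc, so take it as an illustration of the definitions rather than a recipe:

import numpy as np

def autocorrelation(obs, d_max):
    # normalised autocorrelation function C_O(d) of a 1d array of measurements
    obs = np.asarray(obs) - np.mean(obs)
    var = np.mean(obs**2)
    return np.array([np.mean(obs[:len(obs) - d] * obs[d:]) / var
                     for d in range(d_max + 1)])

def integrated_autocorrelation_time(obs, d_max=100):
    # A_O = sum_d C_O(d), truncated at d_max; other conventions weight d=0 by 1/2
    return np.sum(autocorrelation(obs, d_max))

Applied to the chain from the Metropolis sketch above, integrated_autocorrelation_time(chain) gives a direct handle on how many successive measurements are effectively redundant.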

With these preliminaries out of the way, in the next post we will look at the Hybrid Monte Carlo algorithm.

Thursday, November 01, 2007

arXiv API

Via Jacques Distler: The arXiv now has an API intended to allow web application developers access to all of the arXiv data, search and linking facilities. They have a Blog and a Google group about it, as well. Anybody want to guess when we'll see a "My arXiv papers" application for Facebook?

Monday, September 24, 2007

Brief post

This is just a brief note saying that I am still alive and still blogging here, but that I have been too busy (moving across the Atlantic, settling into my new job here in Zeuthen, finding a flat in Berlin, and so forth) to write anything since my last post.

To catch up with the aftermath of the lattice meeting: the slides for the plenary and parallel talks are now online (just click on the "pdf" link next to each talk), as are some photos (including one showing me looking into the empty air above, thinking deeply about what to write here). The proceedings are also in progress (the deadline for submissions is 1st October, so expect a flurry of preprints on the lattice arXiv this week).

I'll be back with posts that have actual content at some point, but don't expect too much for the next month or so.

Saturday, August 18, 2007

Lattice 2007 -- Day Six

My apologies for the delay in posting this. A cold and various personal matters kept me from posting it earlier.

The first plenary talk today was Walter Wilcox speaking about deflation methods for fermion inverters. Deflation methods like GMRES-DR are based on Krylov subspace ideas, where the Krylov space is augmented by some (approximate) eigenvectors to remove the corresponding eigenvalues from the system, thus improving convergence.

Next was Falk Bruckmann, who spoke about exploring the QCD vacuum with lattice QCD. The nonperturbative degrees of freedom relevant for the QCD vacuum are topological objects (vortices, monopoles and instantons). Studying these on the lattice is hard, but progress is being made.

The third talk of the session, about renormalization-group flows in multi-parameter φ^4 theories, was given by Ettore Vicari. Critical phenomena can be described in terms of a few critical exponents; one way to determine these is by looking at fixed points of renormalisation group flows. Since there are only a certain number of universality classes into which those critical points can fall, one can study these by looking at φ^4 models falling into different classes (Landau-Ginzburg-Wilson models); this may even have some applications to determining the nature of the QCD phase transition.

After the coffee break, Michele Della Morte got a plenary session of his own for his talk about determining heavy quark masses. A number of determinations of heavy-quark observables were summarised, and a more detailed overview of recent progress in determining the b-quark mass using HQET was given.

After that, the organisers thanked the staff who had made the conference possible, and they received a round of well-deserved applause. The organisers got some equally well-deserved applause of their own, and all participants were invited to attend Lattice 2008 in Williamsburg, VA, which will be held July 14-19, 2008. Looking forward beyond next year, Lattice 2009 was announced to take place in Beijing, and so the meeting adjourned.

Finally I had some time to look around the city properly, and so I visited the Johannes Kepler-Gedächtnishaus (Kepler's dying place, and today a museum about his life) with some colleagues. After that, highlights on our tour round the city were the Romanesque Schottenkirche (the church of a monastery built in the 11th century by Iro-Scottish monks) and St. Emmeram (the church of a former monastery that now serves as the palace of the Princess of Thurn and Taxis). I will do some more sightseeing tomorrow morning, but since I don't think it will interest my readers too much, this closes my coverage of Lattice 2007.

Friday, August 03, 2007

Lattice 2007 -- Day Five

The opulent banquet, late hours and probable overconsumption of Bavarian beer afterwards led to a notable decrease in the occupation number of the seats at the first plenary session today. The first plenary talk was Jo Dudek speaking about radiative charmonium physics. Experimentally these are part of the research program at CLEO, but until now have been studied mostly in potential models. Radiative decays have now been studied on the lattice by analysing three-point functions, but two-photon decays require some new theoretical developments based on combining QED perturbation theory and the LSZ reduction formula with lattice simulations.

The second speaker was Johan Bijnens talking about quark mass dependence from continuum Chiral Perturbation Theory at NNLO. After a quick overview of Chiral Perturbation Theory ideas and methods, he presented the results that have been obtained in NNLO light meson χPT during the past few years.

Next was Silvia Necco who spoke about the determination of low-energy constants from lattice simulations in both the p- and ε-regimes. The ε-regime is particularly useful because the influence of higher-order LECs is small there, so that the leading-order LECs Σ and F can be determined accurately.

After the coffee break, Philip Hägler talked about hadron structure from lattice QCD, giving a review of recent determinations of hadron electric polarisabilities and form factors, the nucleon spin fractions and other hadron structure observables.

The next talk was by Sinya Aoki, who spoke about the determination of hadronic interactions from QCD. ππ scattering can be studied on the lattice using Lüscher's finite-volume method, and this has been used to obtain results for the ρ meson decay width as well. Baryon-baryon potentials can be computed by computing the energy of a Qqq-qqQ system as a function of QQ separation, where Q denotes static quarks, and similarly for mesons. A different approach defines a potential from a measured wavefunction and its energy via an auxiliary Schrödinger equation.

The last plenary speaker for today was Gert Aarts with a talk about transport and spectral functions in high-temperature QCD. A prominent topic in this field is the fate of charmonium states in the quark-gluon plasma state. Another is the hydrodynamics of the QGP, which has been observed experimentally to be a nearly ideal fluid. Key to solving these problems is the analysis of spectral functions, which can be obtained from lattice correlators by means of a maximum entropy method.

In the afternoon there were parallel sessions again. The most remarkable talk was a summary by Terry Tomboulis of a proposed proof, using a renormalisation group blocking technique, that SU(N) gauge theory is confining at all values of the coupling. I am sure this proof will be closely scrutinised by the experts, and if it holds up, that would be a major breakthrough.

Thursday, August 02, 2007

Lattice 2007 -- Day Four

The first plenary session today started with a talk about Kaon physics on the lattice by Andreas Juettner. The leptonic decays of kaons are important in order to determine the CKM matrix element V_us. A large number of determinations of |V_us| from K_l2 and K_l3 decays have been performed in the last couple of years, which are mutually compatible for the most part. An important feature of kaon physics is CP violation in neutral kaon decays. Determinations of B_K have been done in a number of different formulations, which show a number of minor discrepancies due to different error estimates, although they all seem to be compatible with the best global fit.

Next was a survey of large-N continuum phase transitions by Rajamani Narayanan. Large-N QCD in the 't Hooft limit (g^2 N fixed, g → 0, N → ∞) has been studied analytically in two dimensions, where it can be reduced to an Eguchi-Kawai model, and numerically in three and four dimensions. It exhibits a variety of phase transitions in coupling, box size and temperature, too many in fact for me to properly follow the talk.

After the coffee break, a presentation on the BlueGene/P architecture and future developments was given by Alan Gara of IBM. The limits of the growth of supercomputer performance still seem to be far away, and Exaflop performance allowing dynamical simulations of 128^3×256 lattices was predicted for 2023.

A talk on QCD thermodynamics by Frithjof Karsch followed. The question he addressed was whether there was evidence for different temperatures for chiral symmetry restoration and deconfinement, or whether these two transitions coincided. On the relatively coarse lattices that are available, improved actions are needed to approach the continuum limit. In spite of progress in the analysis of the various sources of systematic error, there appears to be a discrepancy in the answer to this question obtained by different groups.

A second QCD thermodynamics talk was given by Zoltan Fodor, who also addressed the nature of the QCD phase transition, outlining the evidence that the transition is in fact a crossover at zero chemical potential. Since a crossover does not have a unique transition temperature, the different transition temperatures found using chiral and deconfinement observables could be physical.

In the lunch break I was picked up by the police again in order to look at the suspect they had arrested in the meantime. It was the guy who had robbed me, and he apparently confessed even before I arrived to identify him. He "apologized" on seeing me, but at the same time tried to excuse the robbery with my refusal to hand over cash when asked "nicely" -- I suppose you can't afford to have too much of a conscience if your preferred lifestyle involves injecting yourself with illegal and poisonous substances on a regular basis. I must admit I feel a certain amount of pity for these guys, criminals though they are.

I also want to take this opportunity to sing the highest possible praises of the Regensburg police, who were incredibly polite and helpful and solved this case so quickly. Let me also add that apparently this kind of thing is very rare around here, so as not to give people a wrong impression of what is really a very lovely place.

There were two parallel sessions in the afternoon. Of note was the talk by Rob Petry, a graduate student at Regina, about work we had done on using evolutionary fitting methods to extract mass spectra from lattice correlators, which met with a lot of interest from the audience.

In the evening the conference banquet took place at "Leerer Beutel", apparently a former medieval storehouse that has been converted to an art gallery-and-restaurant. The banquet was a huge buffet dinner, with great German and Italian dishes, the surroundings were very nice, as was talking to people in a more relaxed environment.

Lattice 2007 -- Day Three

Today was the traditional excursion day, so there were no plenaries in the morning. Instead there were parallel sessions, including the one with my talk (which went fine). A number of other lattice perturbation theory talks took place in the same session, and it was nice to see the methods from our paper get picked up by other groups.

At lunchtime, the police came to see me in order to have me pick the likely suspect in my robbery out of a photo array.

In the afternoon there were excursions. The one I was on went to Weltenburg Abbey, one of the oldest Benedictine abbeys north of the Alps, famous both for the beer from its 950-year-old brewery, and for its beautiful baroque church, the latter a work of painter-architect Cosmas Damian Asam, his brother, sculptor Egid Quirin Asam, and his son, painter Franz Asam, members of the famous Asam clan of baroque churchbuilders in Germany. Particularly remarkable is the life-size statue of St. George on his horse, complete with dragon and rescued princess. We went to the abbey by boat through the Danube gorge, a rock formation where the Danube broke through a layer of sedimentary rock millions of years ago, drastically altering its course and leaving us both a testament to the earth-shaping power of water and a very scenic piece of valley. At the abbey, we had a guided tour of the church with a very nice and very well-informed guide who was apparently an art historian (a rather pleasant break from the common pattern of tour guides who could learn from some of the tourists they supposedly guide). After a pleasant snack and beer in the abbey's beer garden, we went back the same way we came.

Tuesday, July 31, 2007

Lattice 2007 -- Day Two

The second day started with the annual experimental talk, which was given by Diego Bettoni, who spoke about FAIR (Facility for Antiproton and Ion Research). After an overview of the accelerator facilities involved, he spoke about charmonium spectroscopy. The advantage of studying charmonium systems in pbar-p annihilation reactions is that states of all quantum numbers can be produced directly, as opposed to e^+ e^- annihilation, which gives only 1^-- states directly and all others via radiative decays only. Studies of the χ_c and η_c states were presented. Planned studies are searches for exotic charmonium hybrids and for glueballs, measurements of the in-nuclear-medium mass shift of the D meson, studies of double hypernuclei (nuclei with two nucleons replaced by hyperons), measurements of the proton form factor in the timelike region, and reversed deeply virtual Compton scattering, all at PANDA, and studies of nucleon structure with polarised antiprotons at PAX. As always, the experimental talk was somewhat sobering, as it pointed out the huge gaps in one's (or at least my) knowledge of experimental physics.

Next was Craig McNeile speaking about hadron spectroscopy. Topics were the η and η' mesons, the 0^++ spectrum, the controversial κ meson, distinguishing qqbar mesons from tetraquarks and molecules, the glueball spectrum and the search for glueballs within the meson spectrum, the changes and mixing in the 0^++ spectrum from unquenching, the f_0(600)/σ meson, and comparisons between different unquenched studies, including the different values obtained for r_0.

After the coffee break, we got to the "staggered wars" plenary. Mike Creutz opened with a talk on "why rooting fails". The crux of his argument as I understood it was that rooting averages over the four tastes, which have pairwise opposite chiralities, leading to a theory that is not a theory of a single chiral fermion. The postulated manifestation of this was an incorrect singular behaviour of the 't Hooft vertex in the rooted theory, which could lead to the wrong physics in singlet channels, particularly the mass of the η'.

The opposite point of view was presented by Andreas Kronfeld. He argued that the group structure of staggered symmetries is much more complex than usually considered, and that the "phantom" Goldstone bosons coming from the tastes removed by rooting cancel in physical correlation functions. He then proceeded to counter the points raised in Creutz's criticism of rooted staggered quarks, arguing that rooting turns the quark mass m into its absolute value |m|, that the staggered taste-singlet chirality is not the same as naive chirality, and does in fact track the topological index correctly if the chiral and continuum limits are taken in the right order.

The final plenary talk was an ILDG status report delivered by Carleton DeTar. The ILDG (International Lattice Data Grid) is the union of national grid applications from Europe, the UK, Japan, Australia and the US, which is intended to allow sharing of lattice configurations, and eventually propagators, between collaborations. They have developed portable data formats (a markup language called QCDml and a binary format for lattice configurations), as well as the grid software. While the permissions policies of the various collaborations are still an issue in some cases, the general tendency seems to be that it is now easier to download unquenched configurations than to generate quenched configurations, which will put the last nail into the coffin of the (already quite dead) quenched approximation over the next couple of years.

After the lunch break, there were parallel sessions. Some remarkable talks were about non-QCD physics on the lattice: Julius Kuti talked about getting Higgs physics from the lattice by using a lattice theory as the UV completion of the Standard Model, Simon Catterall talked about exploring gauge-gravity duality through simulations of N=4 super-Yang-Mills quantum mechanics as the dual of a type IIa string theory with D0 branes, and Jun Nishimura talked about non-lattice Monte Carlo simulations of SYM quantum mechanics as the dimensional reduction of a theory that might be M-theory.

The poster session was interesting, if a tad chaotic, for which I blame the Bavarian beer. I didn't get to see all the posters, since I spent too much time talking to people I knew who had posters.

Monday, July 30, 2007

Lattice 2007 -- Day One

Hello again from Regensburg. The conference opened at 9 with a brief address by a representative of the university, who said the usual things about how wonderful it is to have us here and so on.

A few brief announcements from the organisers followed, and then the first plenary session started with a talk by Peter Boyle speaking for the RBC and UKQCD collaborations about simulations with dynamical domain wall fermions. There was a lot of comparison between domain wall and overlap fermions with their respective topological and chiral properties. Preliminary results for the SU(3) and SU(2) chiral perturbation theory low-energy constants were presented, as were preliminary predictions for pseudoscalar decay constants, light quark masses, B_K and the K_l3 form factor. Nucleon form factors and structure were also mentioned, but I'm afraid a lot of it went too fast for me to follow, so you will have to wait for the proceedings.

Next was a talk about exploring the chiral regime with dynamical overlap fermions by Hideo Matsufuru speaking for the JLQCD collaboration. He started by discussing the properties of the overlap operator and the methods used to deal with the sign function discontinuity. The method they decided to use was including a topology fixing term. The results presented were for N_f=2 (an N_f=2+1 run is in progress), and included studies of the ε-regime, physics at fixed topology and its relation to θ=0 physics, the topological susceptibility and chiral extrapolations at NNLO.

After the coffee break, the theme of actions for light quarks continued with Carsten Urbach on behalf of the European Twisted Mass (ETMC) collaboration speaking about twisted mass QCD at maximal twist. After a brief overview of the general features of tmQCD at maximal twist, such as automatic O(a) improvement, he explained how to tune to maximal twist and presented some results on the behaviour and performance of simulation algorithms. Finally, there were some N_f=2 results for the pseudoscalar mass and decay constant including finite-size effects and comparisons with chiral perturbation theory. Other preliminary new results included a measurement of the pion mass splitting (which is difficult to measure because of disconnected contributions for the neutral pion), a study of the ε-regime, and many others.

The plenary session concluded with a talk by Yoshinobu Kuramashi of the CP-PACS collaboration about using clover quarks and the Iwasaki gauge action to approach the physical point in N_f=2+1 simulations using Lüscher's domain-decomposed HMC algorithm.

I had to see the police again during the lunch break in order to go through photo arrays of potential suspects (without much success; I couldn't identify the robbers in the database, but there was a recent arrest which included a person I think was one of them; if he has extremely bad teeth, the police think it will be a sufficient ID to charge him, but that means I'll have to go to the police yet again to identify him in person as having the right kind of bad teeth; the economic damage from this robbery in terms of my time and the cops' time probably already greatly exceeds the 100 Euro taken in value...). This meant that I also missed the first parallel session.

From the second parallel session of the afternoon, I found Ulli Wolff's talk about cluster simulations of two-dimensional fermions very interesting. Basically, the partition function for theories of 2d fermions can be reformulated as the partition function for a theory of non-intersecting loops, which can be reformulated as a theory of Ising spins, which then can be simulated efficiently using cluster algorithms. Of course, 2d fermions are very special, so this is unlikely to carry over to 4d QCD.

Lattice 2007 -- Day Zero

Hello from Regensburg, where the Lattice 2007 conference started with an evening reception in the old town. Things got off to a nice start, and Regensburg is a very beautiful town. Unfortunately, a certain dampener was put on my enthusiasm for it when it became the scene of my being robbed of 100 Euros at knifepoint on one of the high streets by a couple of thugs. While physically unharmed, I was understandably rather shaken, and being questioned about the event by police until well past midnight didn't really enhance the experience.

Friday, July 20, 2007

Quick catching-up post

I have been somewhat too busy to blog recently, because I had to work both on this paper, the ideas behind which I discussed in this post earlier on, and on my talk (about this work, also discussed here) for the Lattice meeting in Regensburg.

This is therefore mostly just a quick catching-up post with a few things I thought remarkable enough to note:

  • Using a massive brute-force computation, computer scientists at the University of Alberta have solved the game of checkers. More on the story here and here, or straight from the source.

  • The most recent issue of Physics World is devoted entirely to questions of energy, a topic which I have blogged about before both here and elsewhere. I am especially delighted at the "lateral thought" column pointing out the problem of electronic devices on stand-by, which account for about 20% of the average household's energy consumption, a problem I am fond of pointing out to anyone listening. Go unplug your TV and stereo overnight!

  • I intend to cover the Lattice meeting in Regensburg in much the same manner as last year's meeting.

Friday, June 29, 2007

Lattice 2007 abstracts online

The list of abstracts for the Lattice 2007 conference is now online. The JavaScript popup feature -- hover your mouse over a title and see the abstract as a tooltip -- is very nice.

Tuesday, June 12, 2007

Vexillology and the Moon Hare

A friend of mine recently asked me a question regarding the moon, and I thought it might be good to share the answer with my readers.

The question was whether the fact that Europeans tend to see the face of a man in the moon (with the Mare Imbrium and Mare Serenitatis forming the eyes, and the Mare Nubium and/or Mare Humorum forming the Mouth), whereas Asians tend to see a hare or rabbit (with the Mare Foecunditatis and Mare Nectaris forming the ears), had any astronomical basis.

Now, longitude is of course a largely arbitrary quantity (for which reason it was historically so hard to determine that a large prize was offered for the development of a method to determine it reliably while at sea) and should not have any effect on the appearance of celestial bodies (except for the time at which they transit, some parallax and effects following from those, such as visibility of eclipses etc., but certainly not the size or orientation of a disc). Latitude, on the other hand, has a true astronomical meaning, and a moment's thought should show you that the angle that the crescent moon (which always points towards the sun, which moves across the sky at different angles to the horizon at different latitudes) forms with the horizon varies with latitude -- in fact, it is something of a cliché that this is reflected in the flags of Islamic countries at different latitudes: compare the flags of Turkey, Pakistan and Mauritania.


Since the part of the moon turned towards the sun is of course independent of the observer's latitude, it follows that from the point of view of an observer close to the equator the orientation of the moon's disk is such that the rabbit in the moon is quite clearly visible, whereas an observer in the temperate zone sees the moon at an angle at which he would likely prefer the face, unless told about the rabbit (which, at least for me, easily supersedes the face). I therefore hypothesize that the tradition of the moon rabbit spread into East Asia from South Asia, whereas the tradition of the face in the moon comes from Northern Europe. Does anyone know whether that would appear to agree with the historical record? It seems rather plausible to me.

Monday, June 11, 2007

Unquenching meets improvement

In a recent post, I explained how the fact that the vacuum in quantum field theory is anything but empty affects physical calculations by means of Feynman diagrams with loops, and specifically how one has to take account of these contributions in lattice field theory via perturbative improvement. In this post, I want to say some words about the relationship between perturbative improvement and unquenching.

To obtain accurate results from lattice QCD simulations, one must include the effects not just of virtual gluons, but also of virtual quarks. Technically, this happens by including the fermionic determinant that arises from integrating over the (Grassmann-valued) quark fields. Since the historical name for omitting this determinant is "quenching", its inclusion is called "unquenching", and since quenching gives rise to an uncontrollable systematic error, unquenched simulations are absolutely crucial for the purpose of precise predictions and subsequent experimental tests of lattice QCD.

However, the perturbative improvement calculations that have been performed so far correct only for the effects of gluon loops. This leads to a mismatch in unquenched calculations using the perturbatively improved actions: while the simulation includes all the effects of both gluon and quark loops (including the discretisation artifacts they induce), only the discretisation artifacts caused by the gluon loops are removed. Therefore the discretisation artifacts caused by the quark loops remain uncorrected. Now, for many quantities of interest these artifacts are small higher-order effects; however, increased scaling violations in unquenched simulations (when compared with quenched simulations) have been seen by some groups. It is therefore important to account for the effects of the quark loops on the perturbative improvement of the lattice actions used.

This is what a group of collaborators including myself have done recently. For details of the calculations, I refer you to our paper. The calculation involved the numerical evaluation of a number of lattice Feynman diagrams (using automated methods that we have developed for the purpose) on a lattice with twisted periodic boundary conditions at a number of different fermion masses and lattice sizes, and the extrapolation of the results to the infinite lattice and massless quark limits. The computing resources needed were quite significant, as were the controls employed to ensure the correctness of the results (which involved both repeated evaluations using independent implementations by different authors and comparison with known physical constraints, giving us great confidence in the correctness of our results). The results show that the changes in the coefficients in the actions needed for O(α_s a^2) improvement caused by unquenching are rather large for N_f=3 quark flavours, which is the case relevant to most unquenched simulations.

Tuesday, May 22, 2007

QCD with cold atoms?

Via Chad Orzel, a PhysicsWeb news story reports a recent proposal to use a rather different kind of lattice than the one usually discussed here for understanding QCD. The authors of this PRL paper propose that ultracold fermionic atoms with three possible hyperfine states trapped in an optical lattice (a periodic potential created by crossing laser beams) would behave like quarks in QCD, including forming "baryonic" states and showing the same phase transitions as QCD matter.

I don't know enough about atomic and optical physics to be able to tell whether this proposal is reasonable. If it is, it could be seen as one of the first examples of the use of an analogue quantum computer to simulate an otherwise experimentally inaccessible quantum system. However, I can see no real evidence that the atomic system would really be simulating QCD (which includes gluons and sea quarks) rather than some kind of quark model, so I remain a little sceptical regarding that claim. In any case, this proposal shows how far atomic and optical physics has come in its ability to finely control the states and interactions of atoms, so even if it isn't QCD, it's pretty cool.

Monday, May 14, 2007

Stakes

Tomaso Dorigo has a great post on the difficult decisions experimentalists have to make when deciding on their triggers, and on the human behaviour that can be observed in the meetings where they discuss and make those decisions.

To someone looking into the academic world from outside, anecdotes like the one reported by Tomaso probably sound a lot like yet another example of "academic politics is so bitter because the stakes are so low," a quotation commonly misattributed to Henry Kissinger. But what needs to be kept in mind is that the stakes are, in fact, extremely high: when a researcher devotes almost every waking minute to some research project, foregoing other (much better paying) career options and postponing, or even completely giving up on, such things as parenthood and home ownership, what is at stake in discussions about that project's future role in the greater structure of human knowledge and discovery is no less than that researcher's major purpose in life. And that is a huge stake for anyone, whether they are a lowly scientist (or historian or whatever) or a mighty CEO -- although the former are far more likely to face that kind of risk than the latter, who have their golden parachutes. So a certain amount of acrimony is really to be expected.

Thursday, May 03, 2007

SciFi'ish Sunshine Scene

Via Clifford Johnson, a link to a BBC report about the first commercially operating solar thermal power plant in Europe. Located in sunny Andalusia, it generates 11 MW of electrical power from sunshine alone by concentrating the light of the sun on the top of a 115 m tall tower by means of 600 heliostats, huge mirrors that track the Sun in the sky. The concentrated light is so intense that when scattered off the water vapour and dust in the air it creates the scifi-like special effect visible in the picture at the BBC link. If this kind of power plant is shown to work well commercially, the resulting increase in energy production could be a huge boost to the economy of developing countries in the subtropics, and the Gulf sheikhs will have nothing to worry about when the oil runs out -- they have plenty of sunshine in the Arabian desert after all.

Saturday, April 28, 2007

Carl Friedrich von Weizsäcker 1912-2007

The German physicist, philosopher and peace researcher Carl Friedrich von Weizsäcker died on 28th April at the age of 94. The brother of former German president Richard von Weizsäcker was born on 28th June 1912.

Carl Friedrich von Weizsäcker studied physics under Werner Heisenberg and Niels Bohr. Working with Hans Bethe in nuclear physics, he discovered the Bethe-Weizsäcker formula for nuclear masses and the Bethe-Weizsäcker cycle of nuclear fusion that powers the heavy stars. He also developed a model for the evolution of the solar system.

During the Second World War, von Weizsäcker was a member of the elite team of German physicists working on the unsuccessful attempt to develop a nuclear weapon for Nazi Germany; later, von Weizsäcker always maintained that the failure of that project was due to the physicists' unwillingness to develop such a devastating weapon for the Nazis, rather than a lack of ability to do so.

After the war, von Weizsäcker was a prominent opponent of plans for the nuclear armament of West Germany, signing the declaration of the Göttingen Eighteen that publicly exposed and rejected defence minister Franz-Josef Strauß's plan to arm the newly refounded German Army (Bundeswehr) with tactical nuclear weapons, and that created enough public opposition to end those plans once and for all.

Carl Friedrich von Weizsäcker's opposition to nuclear weapons and his interest in the responsibility of scientists for the use of their research led him into peace research, and to founding and directing the Max-Planck-Institute for research into living conditions in a scientific-technical world. He also worked as a philosopher, trying to unify all of physics into a coherent system of Natural Philosophy based on the idea of the quantum dynamics of primal logical alternatives (Uralternativen) underlying physical reality.

Wednesday, April 25, 2007

Earth-2

The science news story of today is that according to an ESO press release, the first extrasolar planet orbiting its host star in the "Goldilocks" zone, the zone of temperatures allowing for liquid water, and hence capable of supporting life, has been found.

The new planet, whose radius is estimated at 1.5 times the Earth's radius, orbits a red dwarf called Gliese 581 on a close orbit with an orbital period of just 13 days; this close orbit is the reason why astronomers were even able to detect such a relatively low-mass planet. Because a red dwarf like Gliese 581 is much dimmer and cooler than a yellow dwarf like our Sun, however, this still lies in its "Goldilocks" zone, with surface temperatures estimated to lie between 0 and 40 degrees Celsius, and hence Gliese 581 c (as the new planet is called) could have oceans. It should be noted, though, that the assumption of an earth-like rocky planet is so far based on models, not observations. The next step will presumably be to attempt to detect telltale spectral lines that might reveal the existence of an atmosphere or of liquid water.

And at just 20.4 lightyears distance, the good news is that once alien civilisations have been found on Gliese 581 c, we will even be able to keep up a meaningful conversation with them. Yes, that was just a joke, but this is going to be big news in the popular press, and I am sure some tabloid will report this as "Alien life discovered!" or some such nonsense.

In other news, Life on the Lattice has now been moved to the new Blogger, and this time, things seem to work for the most part.

Thursday, April 19, 2007

Some quick links

Superweak has an interesting post on blind analysis, which is the first technique that has been carried over from medicine into nuclear and particle physics (rather than the other way around, as were NMR, PET and a host of others). More on blind analysis techniques in experimental particle physics can be found in this review. Reading this, I was wondering whether any lattice groups used blinding in their data analyses; I am not aware of any that do, and the word "blind" does not appear to occur in hep-lat abstracts (except for phrases like "blindly relying on" and such). It may not be necessary, because we don't do the same kind of analyses that the experimenters do (like imposing cuts on the data), but the possibility of some degree of "experimenter's (!?) bias" may still exist in the choice of operators used, priors imposed on fits etc.

There is a new paper on the arXiv which reports on tremendous gains in simulation efficiency that the authors have observed when using a loop representation for fermions instead of the conventional fermion determinant. Unfortunately their method does not work with gauge theories (except in the strong coupling limit) because it runs into a fermion sign problem, so it won't revolutionise QCD simulations, but it is very interesting, not least because it looks a lot like some kind of duality (between a theory of self-interacting fermions and a theory of self-avoiding loops) is at work.

Wednesday, April 18, 2007

Artificial "plants" invented?

According to this press release, chemists at UCSD have realized the first step toward the creation of artificial "plants" that use solar radiation to convert CO2 into fuel. Their prototype still needs additional energy input, but they believe they will be able to optimize it so it will run on solar power alone. The device creates carbon monoxide (CO), an extremely toxic gas which is commonly used in suicides, but which also serves as an important base material for the chemical industry and can even be converted into liquid fuel via the Fischer-Tropsch process. Artificial photosynthesis sounds like a great idea, but I am not an expert on this, so maybe there are hidden caveats that the inventors are not talking about. Any additional information from experts would be most welcome.

Tuesday, April 03, 2007

New identifiers at the arXiv

The arXiv has changed its identifiers away from the familiar arch-ive/YYMMNNN (e.g. hep-lat/0605007) format to a new YYMM.NNNN (e.g. 0704.0274) format, which will be used across archives; the change was implemented on April Fool's Day. One consequence of the new identifiers is that the preprint numbers within an archive are no longer consecutive, making the "previous" and "next" functions on the abstract listings rather less useful. Existing papers will retain their old-style identifiers, though. It remains to be seen how the community likes the change.

Another change, which at least I like quite a bit, is the new presentation format for abstracts. With the more commonly required pieces of information at the top, it looks a lot neater than the old one, which had a lot of less useful things (submission history etc.) in the first few lines.

Saturday, March 31, 2007

The Quantum Vacuum, Loops and Lattice Artifacts

This post was written for a general audience, and hence is written in a rather more popular language than our usual fare at Life on the Lattice. If you are familiar with the basic ideas behind perturbative improvement, you may want to skip this post.

When we think about the vacuum in classical physics, we think of empty space unoccupied by any matter, through which particles can move unhindered and in which fields are free from any of the non-linear interaction effects which make e.g. electrodynamics in media so much more difficult.

In Quantum Field Theory, the vacuum turns out to be quite different from this inert stage on which things happen; in fact the vacuum itself is a non-linear medium, a foamy bubble bath of virtual particles popping into and out of existence at every moment, a very active participant in the strange dance of elementary particles that we call the universe.

A metaphor which may make this idea a little clearer could be to think of the vacuum as a sheet of paper on which you write with your pen. Looked at on a large scale, the paper is merely a perfectly flat surface on which the pen moves unhindered. On a smaller scale, the paper is actually a tangle of individual fibers going in all directions and against which the pen keeps hitting all the time, thus finding the necessary friction to allow efficient writing.

In the case where the paper is the vacuum, the analogue of the paper fibres are the bubbles of virtual particle pairs that are constantly being created and annihilated in the quantum vacuum, the analogue of the pen is a particle moving through the vacuum, and the analogue of friction is the modification of the particle's behavior as compared with the classical theory which happens as a result of the particle interacting with virtual particle pairs.

At first sight, this description of the vacuum may appear like wild speculation, but it has in fact very observable consequences. In Quantum Electrodynamics (QED), the famous Lamb shift is a consequence of the interactions of the electron in a hydrogen atom with virtual photons, as are the anomalous magnetic moment of the electron and the scattering of light by light in the vacuum. In fact, none of the amazingly accurate predictions of QED (the most accurate theory we have) would work without taking into account the effects of the quantum vacuum.

In lattice QCD, we care about the vacuum because it affects how the discrete lattice theory relates to its continuum limit. By discretising a continuum theory, we introduce a discretisation error: When comparing an observable Oa measured on a lattice with lattice spacing a with the same observable in the continuum O0, we find that they are related as

$O_a = O_0 + c_1 (\mu a) + c_2 (\mu a)^2 + \dots$

where μ is some energy scale that is typical of the reactions contributing to the observable O. In the classical theory (or at "tree level" as we say, because the Feynman diagrams corresponding to classical physics have no loops in them), we can then tune the lattice theory so that as many of the c_i as we want to get rid of become zero, and the discrepancy between lattice and continuum becomes small.

At the quantum level, however, we get Feynman diagrams with loops in them that describe how particles traveling through the quantum vacuum interact with virtual particles; the problem with these is that the virtual particles exist at very short distances and hence can have very large momenta by virtue of Heisenberg's uncertainty relation. At very large momenta, the deviation of the lattice theory from the continuum becomes very evident, and hence the loops on the lattice contribute terms that differ a lot from what the same loops would contribute in the continuum. And then we find that this difference reintroduces the a-dependence that we got rid of classically by tuning our theory!

This is clearly no good. What we need to do is to get rid of the a-dependence (up to some order in a) in the quantum theory, too. There are a number of ways to go about this, but the one most commonly used is called perturbative improvement. In perturbative improvement, we calculate the effect of the virtual particle loops by evaluating Feynman diagrams (a Feynman diagram isn't just a pretty picture: there is a well-defined mathematical expression corresponding to each Feynman diagram) on the lattice and extracting their contribution to the lattice artifacts c_i to some order in a. Once we have these contributions, we can then tune our theory again so that these contributions to the c_i are cancelled, and the discrepancy between lattice and continuum becomes small again.

Unfortunately, evaluating Feynman diagrams on the lattice is much harder than in the continuum in many ways, so that we need some rather advanced methods to do this, and there aren't very many people doing it. So this is an area where progress has been slow for a while. The next post will tell you how a group of collaborators including myself recently made some pretty significant progress in this field.

Monday, March 12, 2007

Fitness and Fitting

I promised there were going to be some interesting posts, and I feel this is one of them. I want to talk about harnessing the power of evolution for the extraction of excited state masses from lattice QCD simulations.

OK, this sounds just outright crazy, right? Biology couldn't possibly have an impact on subnuclear physics (other than maybe by restricting the kinds of ideas our minds can conceive by the nature of our brains, which could of course well mean that the ultimate theory, if it exists, is unthinkable for a human being, but that is a rather pessimistic view; I am also talking about QCD here). Well, biology doesn't have any impact on what is after all a much more fundamental discipline, obviously, but Darwin's great insight has applications far beyond the scope of mere biology. This insight, which I will roughly paraphrase as "starting from a set of entities which are subject to random mutations and from which those least adapted to some external constraints are likely to be removed and displaced by new entities derived from and similar to those not so removed, one will after a large enough time end up with a set of entities that are close to optimally adapted to the external constraints", is of course the basis of the very active field of computer science known as evolutionary algorithms. And optimisation is at the core of extracting results from lattice simulations.

What people measure in lattice simulations are correlators of various lattice operators at different (euclidean) times, and these can be expanded in an eigenbasis of the Hamiltonian as

$C(t)=\left\langle O(t)O(0)\right\rangle = \sum_n c_n e^{-E_n t}$

(for periodic boundary conditions in the time direction the exponential becomes a cosh instead, but let's just ignore that for now), where the c_n measure the overlap between the eigenstates of the operator and those of the Hamiltonian, and the E_n are the energies of the Hamiltonian's eigenstates. Of course only states that have quantum numbers compatible with those of the operator O will contribute (since otherwise c_n=0).

In order to extract the energies E_n from a measurement of the correlator <O(t_i)O(0)>, one needs to fit the measured data with a sum of exponentials, i.e. one has to solve a non-linear least-squares fitting problem. Now, there are of course a number of algorithms (such as Levenberg-Marquardt) that are excellent at solving this kind of problem, so why look any further? Unfortunately, there are a number of things that an algorithm such as Levenberg-Marquardt requires as input that are unknown in a typical lattice QCD data analysis situation: How many exponentials should the fitting ansatz use (obviously we can't fit all the infinitely many states)? Which range of times should be fitted (and which should be disregarded as dominated by noise or by neglected higher states)? A number of Bayesian techniques designed to deal with this problem have sprung up over time (such as constrained fitting), and some of those deserve a post of their own at some point.

From the evolutionary point of view, one can simply allow evolution to find the optimal values for difficult-to-optimise parameters like the fitting range and number of states to fit. Basically, one sets up an ecosystem consisting of organisms that encode a fitting function complete with the range over which it attempts to fit the data. The fitness of each organism is taken to be proportional to minus its χ^2/(d.o.f.); this will tend to drive the evolution both towards increased fitting ranges and lower numbers of exponentials (to increase the number of degrees of freedom), but this tendency is counteracted by the worsening of χ^2. The idea is that if one subjects these organisms to a regimen of mutation, cross-breeding and selection, evolution will ultimately lead to an equilibrium where the competing demands for small χ^2 and large number of degrees of freedom balance in an optimal fashion.
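To make this more concrete, here is a heavily simplified sketch of such a scheme in Python; the encoding of an organism as a triple (n_exp, t_min, t_max), the synthetic two-state data, the mutation moves and the population sizes are all my own toy choices for illustration, not the actual code discussed in the next paragraph:

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# synthetic correlator data with two states plus Gaussian noise (toy example)
T, sigma = 32, 1e-4
t_all = np.arange(T)
data = 1.0 * np.exp(-0.5 * t_all) + 0.4 * np.exp(-0.9 * t_all) + sigma * rng.normal(size=T)
err = np.full(T, sigma)

def model(t, *par):
    # sum of n exponentials, par = (c_1, E_1, c_2, E_2, ...)
    return sum(c * np.exp(-E * t) for c, E in zip(par[0::2], par[1::2]))

def fitness(genes):
    # genes = (n_exp, t_min, t_max); fitness = minus chi^2 per degree of freedom
    n, tmin, tmax = genes
    t, y, e = t_all[tmin:tmax], data[tmin:tmax], err[tmin:tmax]
    dof = len(t) - 2 * n
    if dof < 1:
        return -np.inf
    try:
        par, _ = curve_fit(model, t, y, p0=[1.0, 0.5] * n, sigma=e, maxfev=5000)
    except RuntimeError:
        return -np.inf
    chi2 = np.sum(((y - model(t, *par)) / e) ** 2)
    return -chi2 / dof

def mutate(genes):
    # small random changes to the number of exponentials and the fit window
    n, tmin, tmax = genes
    n = max(1, n + int(rng.integers(-1, 2)))
    tmin = int(np.clip(tmin + rng.integers(-2, 3), 0, T - 4))
    tmax = int(np.clip(tmax + rng.integers(-2, 3), tmin + 4, T))
    return (n, tmin, tmax)

# a simple "keep the fitter half, refill by mutation" evolution
population = [(int(rng.integers(1, 4)), int(rng.integers(0, 8)), T) for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    population = population[:10] + [mutate(g) for g in population[:10]]

population.sort(key=fitness, reverse=True)
print("fittest organism (n_exp, t_min, t_max):", population[0])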

After Rob Petry here in Regina brought up this idea, I have been toying around with it for a while, and so far I am cautiously optimistic that this may lead somewhere: for the synthetic data sets that I let this method look at, it did pretty well in identifying the right number of exponentials to use when there was a clear-cut answer (such as when only finitely-many were present to start with). So the general method is sound; it remains to be seen how well it does on actual lattice data.

Friday, March 02, 2007

Around the blogs

Bee at Backreaction has a post on snow, which is indeed an important topic here in Canada. Regina doesn't even get that much snow by Canadian standards, and it still is very snowy around here (although it is worse when it doesn't snow, because that is when it gets really cold).

Christine Dantas, well known for her background independence, has a new blog called Theorema Egregium, presumably in homage to Gauss.

Also new to our blogroll is Resonaances, by an anonymous particle physicist known as "Jester", who blogs from CERN and allows everybody who is interested to obtain a glimpse into CERN's seminars.

Friday, February 16, 2007

New Book on the Lattice

There is a new book about lattice QCD by Tom DeGrand and Carleton DeTar (D&D). It is still quite new, and in fact I am still waiting for my own copy to be delivered, but a senior colleague here in Regina was kind enough to lend me his copy, so you can already get my review.

D&D is a comprehensive overview of the current state of the art in lattice QCD. In the space of just 327 pages (excluding front and back matter) they manage to cover pretty much everything one needs to know about in order to be able to read the current research literature. To the best of my knowledge, this is the first lattice monograph to discuss such crucial topics as data analysis for lattice simulations, improved actions and operator matching, chiral extrapolations, and finite-volume effects.

Compared to Montvay and Münster (M&M) at 442 pages, and to Rothe at 481 pages, both of which cover much less material, D&D are necessarily rather terse. There are no detailed derivations or proofs, and no discussion of the results of lattice simulations is given anywhere. The latter omission is quite rightly justified by the authors, as to include them "would be to invite obsolescence". While the terseness of the presentation probably limits the usefulness of D&D as a graduate textbook, the authors' stated aim of bridging the gap between what a conventional (non-lattice) theorist already knows and the current research literature, which often presupposes an enormous amount of specialised knowledge, appears to have been met admirably well.

After a brief overview of continuum QCD and a quick introduction to path integrals for bosons and fermions, and to the renormalisation group, D&D turn to introducing the lattice discretisation of pure gauge theories, including topics such as gauge fixing and strong coupling expansions. A comprehensive overview of lattice fermion actions follows, covering naive, Wilson, twisted mass, staggered and exactly chiral fermions as well as heavy-quark actions (HQET, NRQCD and the Fermilab action). This is succeeded by chapters discussing simulation algorithms for both gluonic and fermionic actions, including such state-of-the-art algorithms as RHMC, BiCGStab and Lüscher's implementation of Schwarz decomposition. Data analysis methods, including correlated fitting, bootstrap methods and Bayesian (constrained) fitting, are discussed in a chapter of their own. The design of improved lattice actions (covering both Symanzik and tadpole improvement, as well as "fat link" actions) also gets a chapter of its own, as does the design of measurement operators for spectroscopic quantities. This is followed by a chapter on Lattice Perturbation Theory (which even cites this paper) and one on matching operators between the lattice and the continuum. Chiral perturbation theory, including such difficult subjects as quenched and staggered χPT, also gets a chapter, as do finite-volume effects and their applications. An overview of Standard Model observables amenable to testing via lattice simulations and a brief introduction to simulations of finite-temperature QCD (including even such specialised topics as dimensional reduction of thermal QCD and the maximum entropy method for extracting spectral functions) round off this very comprehensive book. The bibliography and the index, while also rather terse, appear useful.

In short, D&D have written a comprehensive introduction to state-of-the-art lattice QCD, which should serve both as a useful introduction to those who know a little, but want to know much more, and as a quick reference for active researchers (although a more extensive bibliography will be missed by the latter). This book definitely belongs on the bookshelf of every lattice theorist as an important contemporary counterpoint to M&M's classic.

Monday, January 29, 2007

Alchemy

Everybody knows that we could in principle make gold from lead using a particle accelerator such as the LHC -- at billions of dollars per ounce, it would just be extremely expensive gold. Everybody also knows that gold is a chemical element, and hence cannot be produced from other elements by chemical means; you really need a particle accelerator or a nuclear reactor to transmute elements (except of course for naturally radioactive elements that sort of transmute themselves).

But there are still those brave souls who will valiantly ignore the insights into the nature of things that science has gathered over the past 300 years, and go on a crucible-sade for the philosophers' stone. I am speaking of the members of the International Alchemy Guild, who will be gathering in Las Vegas for their first conference this year (I wonder if the choice of location is symbolic -- just as the roulette tables are money sinks that only con the stupid, so is alchemy?).

I'd be inclined to think this is a joke, but apparently this is a real organisation, whose membership benefits for those who "[d]emonstrate successful creation of the Vegetable or Mineral Stone in private laboratory work" (or are otherwise co-opted into this exclusive society, such as on account of having bought a mail-order degree from a "hermetic college") include a "gilded Certificate of Membership (suitable for framing)" with an accompanying "license to practice alchemy" (I wonder if that license can serve as a legal defence against fraud charges).

Tuesday, January 23, 2007

Evil, bad, diseased, or just ugly?

"Evil" is a word rarely heard in scientific discourse, at least among physicists, whose subject of study is after all morally neutral for pretty much any sensible definition of "morally". "Bad", "diseased" or "ugly" might be heard occasionally. But having all of them applied to a topic as relatively arcane as the fourth-root prescription for staggered fermions is, well, staggering. At last year's lattice meeting there was a lot of discussion as to whether this prescription was diseased or merely ugly. Now Mike Creutz has taken the discussion from the medicinally-aesthetic to the moral level by suggesting that rooting is actually evil. The arguments are much the same as before: The rooted staggered theory has a complicated non-locality structure at non-vanishing lattice spacing, and there is no complete proof (although there are strong arguments that many find very convincing) that this non-locality goes away in the continuum level. The debate will no doubt simmer on until a fully conclusive proof either way is found; the question is only, what kinds of unusual title words are we still going to see?

Monday, January 22, 2007

Still active

It has been a little quiet around here, mostly because I was staying at home in Germany (admittedly not a good excuse in these days of almost universal WiFi internet access). I wasn't completely lazy, however, and there is a pile of half-finished blog posts waiting to appear.

Meanwhile, please help me welcome Sujit Datta to the physics blogosphere. Sujit is a Master's student at the University of Pennsylvania who researches quantum transport in carbon nanotubes and is especially interested in condensed matter physics and the study of complex systems. He started his blog on January 1, 2007, and so far has been a very prolific poster of interesting articles.

Wednesday, January 03, 2007

PhysicsWeb 2.0

The most recent issue of PhysicsWorld has a feature about how Web 2.0 technologies are transforming physics communications. Physicists' blogs feature quite prominently in this issue: besides a piece about blogs and wikis, which quotes physics bloggers Sabine Hossenfelder, Christine Dantas, Luboš Motl, Gordon Watts and Dave Bacon (apologies if I missed somebody) alongside the more critical voices of Nobel Laureates Philip Anderson and Jack Steinberger, they have an article by Sean Carroll of Cosmic Variance about his motivations for blogging.

And finally, they introduce a new column "Blog life", which starts with a profile of Chad Orzel's Uncertain Principles and is going to look at a different physics blog each month.

I suppose this makes physics blogging officially (at least almost) respectable. Now, if only PhysicsWorld would allow trackbacks ...

Update: Backreaction has some thoughts on this.