Saturday, April 28, 2007

Carl Friedrich von Weizsäcker 1912-2007

The German physicist, philosopher and peace researcher Carl Friedrich von Weizsäcker died on 28th April at the age of 94. The elder brother of former German president Richard von Weizsäcker, he was born on 28th June 1912.

Carl Friedrich von Weizsäcker studied physics under Werner Heisenberg and Niels Bohr. Working in nuclear physics, he discovered the Bethe-Weizsäcker formula for nuclear masses and, independently of Hans Bethe, the Bethe-Weizsäcker cycle of nuclear fusion that powers heavier stars. He also developed a model for the evolution of the solar system.

During the Second World War, von Weizsäcker was a member of the elite team of German physicists working on the unsuccessful attempt to develop a nuclear weapon for Nazi Germany. He later always maintained that the failure of that project was due to the physicists' unwillingness to develop such a devastating weapon for the Nazis, rather than to a lack of ability to do so.

After the war, von Weizsäcker was a prominent opponent of plans for the nuclear armament of West Germany. He signed the declaration of the Göttingen Eighteen, which publicly exposed and rejected defence minister Franz-Josef Strauß's plan to arm the newly refounded German Army (Bundeswehr) with tactical nuclear weapons, and which created enough public opposition to end those plans once and for all.

Carl Friedrich von Weizsäcker's opposition to nuclear weapons, and his interest in the responsibility of scientists for the uses of their research, led him into peace research and to founding and directing the Max Planck Institute for research into the living conditions of the scientific-technical world. He also worked as a philosopher, trying to unify all of physics into a coherent system of Natural Philosophy based on the idea of a quantum dynamics of primal logical alternatives (Uralternativen) underlying physical reality.

Wednesday, April 25, 2007

Earth-2

The science news story of today is that, according to an ESO press release, the first extrasolar planet orbiting its host star in the "Goldilocks" zone (the zone of temperatures allowing for liquid water), and hence potentially capable of supporting life, has been found.

The new planet, whose radius is estimated at 1.5 times the Earth's radius, orbits a red dwarf called Gliese 581 on a close orbit with an orbital period of just 13 days; this close orbit is the reason why astronomers were able to detect such a relatively low-mass planet at all. Because a red dwarf like Gliese 581 is much dimmer and cooler than a yellow dwarf like our Sun, however, even this close orbit still lies in its "Goldilocks" zone, with surface temperatures estimated to lie between 0 and 40 degrees Celsius, and hence Gliese 581 c (as the new planet is called) could have oceans. It should be noted, though, that the assumption of an Earth-like rocky planet is so far based on models, not observations. The next step will presumably be to attempt to detect telltale spectral lines that might reveal the existence of an atmosphere or of liquid water.
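To get a rough feel for where that temperature estimate comes from, here is a minimal back-of-the-envelope sketch in Python that computes a planet's equilibrium temperature from the host star's luminosity, the orbital distance and an assumed albedo. The values used below for Gliese 581's luminosity (about 1.3% of the Sun's) and the planet's orbital distance (about 0.07 AU) are rough figures of the kind quoted in the press coverage, and the albedos are pure assumptions, so the output is an illustration of the reasoning, not a prediction.

import math

SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.846e26        # solar luminosity, W
AU = 1.496e11           # astronomical unit, m

def equilibrium_temperature(luminosity, distance, albedo):
    """Equilibrium temperature of a planet, ignoring any greenhouse effect."""
    return (luminosity * (1.0 - albedo) / (16.0 * math.pi * SIGMA * distance ** 2)) ** 0.25

# Rough (assumed) values for Gliese 581 and its planet "c"
L_star = 0.013 * L_SUN   # red dwarfs are far dimmer than the Sun
d_planet = 0.073 * AU    # a 13-day orbit sits very close to the star

for albedo in (0.3, 0.5, 0.64):   # Earth-like, icy, Venus-like reflectivity
    T = equilibrium_temperature(L_star, d_planet, albedo)
    print(f"albedo {albedo:.2f}: T_eq = {T:.0f} K = {T - 273.15:.0f} deg C")

Depending on how reflective one assumes the planet to be, this simple estimate already spans roughly the 0 to 40 degrees Celsius range quoted in the press release; a real atmosphere with a greenhouse effect would of course shift things further.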

And at a distance of just 20.4 light-years, the good news is that once alien civilisations have been found on Gliese 581 c, we will even be able to keep up a meaningful conversation with them. Yes, that was just a joke, but this is going to be big news in the popular press, and I am sure some tabloid will report it as "Alien life discovered!" or some such nonsense.

In other news, Life on the Lattice has now been moved to the new Blogger, and this time, things seem to work for the most part.

Thursday, April 19, 2007

Some quick links

Superweak has an interesting post on blind analysis, which is the first technique to have been carried over from medicine into nuclear and particle physics (rather than the other way around, as were NMR, PET and a host of others). More on blind analysis techniques in experimental particle physics can be found in this review. Reading this, I was wondering whether any lattice groups use blinding in their data analyses; I am not aware of any that do, and the word "blind" does not appear to occur in hep-lat abstracts (except in phrases like "blindly relying on" and such). It may not be necessary, because we don't do the same kind of analyses that the experimenters do (like imposing cuts on the data), but the possibility of some degree of "experimenter's (!?) bias" may still exist in the choice of operators used, priors imposed on fits, etc.
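For readers who have not come across the idea before, here is a minimal sketch of one common flavour of blinding, hidden-offset blinding, in which the analysis only ever sees the measured value shifted by a secret offset that is removed once all analysis choices are frozen. The numbers and the toy "measurement" below are made up purely for illustration.

import numpy as np

def blind(value, secret_seed, scale=1.0):
    """Shift a result by a hidden offset derived from a secret seed."""
    offset = np.random.default_rng(secret_seed).uniform(-scale, scale)
    return value + offset

def unblind(blinded_value, secret_seed, scale=1.0):
    """Remove the hidden offset once the analysis is frozen."""
    offset = np.random.default_rng(secret_seed).uniform(-scale, scale)
    return blinded_value - offset

SECRET_SEED = 20070419     # kept by one person and not looked at during the analysis

samples = np.random.default_rng(1).normal(3.2, 0.4, 500)   # toy "measurement"
result = samples.mean()

blinded = blind(result, SECRET_SEED)
# ... cuts, fits and cross-checks are settled while only `blinded` is ever inspected ...
final = unblind(blinded, SECRET_SEED)
print(f"blinded: {blinded:.3f}   unblinded: {final:.3f}")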

There is a new paper on the arXiv which reports on tremendous gains in simulation efficiency that the authors have observed when using a loop representation for fermions instead of the conventional fermion determinant. Unfortunately their method does not work with gauge theories (except in the strong coupling limit) because it runs into a fermion sign problem, so it won't revolutionise QCD simulations, but it is very interesting, not least because it looks a lot like some kind of duality (between a theory of self-interacting fermions and a theory of self-avoiding loops) is at work.
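To give a flavour of what a sign problem means in practice: when some configurations carry negative weight, an observable has to be computed by folding the sign into the measurement and dividing by the average sign, and the estimate becomes noisier and noisier as that average sign shrinks towards zero. The following toy sketch (with entirely artificial numbers that have nothing to do with the paper's actual model) just illustrates the reweighting formula.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
O = rng.normal(1.0, 0.5, n)               # toy observable measured on each configuration

# Reweighting with respect to |weight|:  <O> = <O * sign> / <sign>
for p_minus in (0.0, 0.3, 0.45, 0.49):    # fraction of negative-weight configurations
    sign = rng.choice([1.0, -1.0], size=n, p=[1.0 - p_minus, p_minus])
    avg_sign = sign.mean()
    estimate = (O * sign).mean() / avg_sign
    print(f"<sign> = {avg_sign:+.4f}   reweighted <O> = {estimate:.3f}")

The true answer is 1 in every case, but as the fraction of negative weights approaches one half, the average sign, and with it the signal, is driven towards zero.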

Wednesday, April 18, 2007

Artificial "plants" invented?

According to this press release, chemists at UCSD have realized the first step toward the creation of artificial "plants" that use solar radiation to convert CO2 into fuel. Their prototype still needs additional energy input, but they believe they will be able to optimize it so that it runs on solar power alone. The device creates carbon monoxide (CO), an extremely toxic gas commonly used in suicides, but also an important base material for the chemical industry, and one that can even be converted into liquid fuel via the Fischer-Tropsch process. Artificial photosynthesis sounds like a great idea, but I am not an expert on this, so maybe there are hidden caveats that the inventors are not talking about. Any additional information from experts would be most welcome.
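For orientation, the overall chemistry involved is, schematically, the splitting of carbon dioxide into carbon monoxide and oxygen, followed by Fischer-Tropsch synthesis of hydrocarbons from CO and hydrogen; the lines below are just the textbook stoichiometry, not the specific mechanism of the UCSD device.

2\,\mathrm{CO}_2 \rightarrow 2\,\mathrm{CO} + \mathrm{O}_2

n\,\mathrm{CO} + (2n+1)\,\mathrm{H}_2 \rightarrow \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H}_2\mathrm{O}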

Tuesday, April 03, 2007

New identifiers at the arXiv

The arXiv has changed its identifiers from the familiar arch-ive/YYMMNNN format (e.g. hep-lat/0605007) to a new YYMM.NNNN format (e.g. 0704.0274) that will be used across all archives; the change was implemented on April Fool's Day. One consequence of the new identifiers is that the preprint numbers within an archive are no longer consecutive, making the "previous" and "next" functions on the abstract listings rather less useful. Existing papers will retain their old-style identifiers, though. It remains to be seen how the community likes the change.
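For anyone whose scripts parse arXiv references, the practical upshot is a second identifier format to recognise. The two regular expressions below are my own rough guess at patterns for the old and new schemes, not anything taken from official arXiv documentation.

import re

# Old scheme: archive name (possibly with a subclass), a slash, then YYMMNNN (7 digits)
OLD_ID = re.compile(r"^[a-z\-]+(\.[A-Z]{2})?/\d{7}$")
# New scheme (from April 2007): YYMM, a dot, then a 4-digit number, shared across all archives
NEW_ID = re.compile(r"^\d{4}\.\d{4}$")

for ident in ("hep-lat/0605007", "0704.0274", "not-an-id"):
    style = ("old" if OLD_ID.match(ident)
             else "new" if NEW_ID.match(ident)
             else "unrecognised")
    print(f"{ident:>16}: {style}")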

Another change, which at least I like quite a bit, is the new presentation format for abstracts. With the more commonly required pieces of information at the top, it looks a lot neater than the old one, which had a lot of less useful things (submission history etc.) in the first few lines.

Sunday, April 01, 2007

The Quantum Vacuum, Loops and Lattice Artifacts

This post was written for a general audience, and hence uses rather more popular language than our usual fare at Life on the Lattice. If you are familiar with the basic ideas behind perturbative improvement, you may want to skip this post.

When we think about the vacuum in classical physics, we think of empty space unoccupied by any matter, through which particles can move unhindered and in which fields are free from any of the non-linear interaction effects which make e.g. electrodynamics in media so much more difficult.

In Quantum Field Theory, the vacuum turns out to be quite different from this inert stage on which things happen; in fact the vacuum itself is a non-linear medium, a foamy bubble bath of virtual particles popping into and out of existence at every moment, a very active participant in the strange dance of elementary particles that we call the universe.

A metaphor that may make this idea a little clearer is to think of the vacuum as a sheet of paper on which you write with your pen. Looked at on a large scale, the paper is merely a perfectly flat surface over which the pen moves unhindered. On a smaller scale, the paper is actually a tangle of individual fibres going in all directions, which the pen keeps hitting all the time, thus finding the friction necessary for efficient writing.

In the case where the paper is the vacuum, the analogue of the paper fibres are the bubbles of virtual particle pairs that are constantly being created and annihilated in the quantum vacuum; the analogue of the pen is a particle moving through the vacuum; and the analogue of friction is the modification of the particle's behaviour, as compared with the classical theory, that results from the particle interacting with those virtual particle pairs.

At first sight, this description of the vacuum may appear like wild speculation, but it in fact has very concrete observable consequences. In Quantum Electrodynamics (QED), the famous Lamb shift is a consequence of the interactions of the electron in a hydrogen atom with virtual photons, as are the anomalous magnetic moment of the electron and the scattering of light by light in the vacuum. In fact, none of the amazingly accurate predictions of QED (the most accurate theory we have) would work without taking into account the effects of the quantum vacuum.

In lattice QCD, we care about the vacuum because it affects how the discrete lattice theory relates to its continuum limit. By discretising a continuum theory, we introduce a discretisation error: when comparing an observable O_a measured on a lattice with lattice spacing a with the same observable O_0 in the continuum, we find that they are related as

O_a=O_0+c_1(\mu a)+c_2(\mu a)^2+\dots

where μ is some energy scale typical of the reactions contributing to the observable O. In the classical theory (or at "tree level", as we say, because the Feynman diagrams corresponding to classical physics have no loops in them), we can then tune the lattice theory so that as many of the c_i as we want to get rid of become zero, and the discrepancy between lattice and continuum becomes small.
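A standard illustration of this kind of tree-level tuning, quite independent of any particular lattice action, is the finite-difference approximation to a derivative: the naive symmetric difference has an error starting at order a^2, and adding a suitably tuned second difference cancels that term, so the error starts at order a^4 instead. A minimal numerical sketch:

import math

def naive_derivative(f, x, a):
    """Symmetric difference: error starts at order a^2."""
    return (f(x + a) - f(x - a)) / (2 * a)

def improved_derivative(f, x, a):
    """Tree-level improved difference: the a^2 term is tuned away, error starts at a^4."""
    return (8 * (f(x + a) - f(x - a)) - (f(x + 2 * a) - f(x - 2 * a))) / (12 * a)

x, exact = 1.0, math.cos(1.0)     # d/dx sin(x) = cos(x)
for a in (0.2, 0.1, 0.05):
    err_naive = abs(naive_derivative(math.sin, x, a) - exact)
    err_impr = abs(improved_derivative(math.sin, x, a) - exact)
    print(f"a = {a:4.2f}:  naive error = {err_naive:.2e}   improved error = {err_impr:.2e}")

Halving a reduces the naive error by roughly a factor of four, but the improved error by roughly a factor of sixteen; this is exactly the sense in which tuning removes the leading c_i.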

At the quantum level, however, we get Feynman diagrams with loops in them, which describe how particles travelling through the quantum vacuum interact with virtual particles. The problem with these is that the virtual particles exist at very short distances and hence, by virtue of Heisenberg's uncertainty relation, can have very large momenta. At very large momenta, the deviation of the lattice theory from the continuum becomes very evident, and hence the loops on the lattice contribute terms that differ a lot from what the same loops would contribute in the continuum. And then we find that this difference reintroduces the a-dependence that we got rid of classically by tuning our theory!

This is clearly no good. What we need to do is to get rid of the a-dependence (up to some order in a) in the quantum theory, too. There are a number of ways to go about this, but the one most commonly used is called perturbative improvement. In perturbative improvement, we calculate the effect of the virtual particle loops by evaluating Feynman diagrams (a Feynman diagram isn't just a pretty picture: there is a well-defined mathematical expression corresponding to each one) on the lattice and extracting their contribution to the lattice artifacts c_i to some order in a. Once we have these contributions, we can tune our theory again so that they are cancelled, and the discrepancy between lattice and continuum becomes small once more.
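Written schematically (the notation is generic rather than that of any specific improvement programme), each artifact coefficient has an expansion in powers of the strong coupling,

c_i = c_i^{(0)} + \alpha_s\,c_i^{(1)} + \alpha_s^2\,c_i^{(2)} + \dots

Tree-level tuning removes the c_i^{(0)}, and the lattice Feynman diagram calculations described above are what is needed to determine, and then cancel, the one-loop pieces c_i^{(1)}.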

Unfortunately, evaluating Feynman diagrams on the lattice is in many ways much harder than in the continuum, so we need some rather advanced methods to do this, and there aren't very many people doing it. So this is an area where progress has been slow for a while. The next post will tell you how a group of collaborators, including myself, recently made some pretty significant progress in this field.