Thursday, December 14, 2006

Off-Lattice Links

JoAnne at Cosmic Variance has a nice introductory post about particle detectors. More about the subject, with a description of the CLEO-c detector, can be found at Superweak.

Peter Steinberg writes about nuclear toys, specifically an A.C. Gilbert U-238 Atomic Energy Lab. Nuclear toys for nuclear boys may sound like a bizarre relic from another age, but in fact, as recently as 2005, the Boy Scouts of America instituted a Nuclear Science merit badge with the advice of the APS Nuclear Physics Division.

Monday, December 11, 2006

Christmas presents

I'm sure everyone working in theoretical physics knows the problem: Many, probably most, of the people who are most important in your life -- family, friends, loved ones -- don't have the background to understand what your research is all about. So what do you do? A strict secret-agent-style "don't talk about work when at home" policy is deeply unsatisfying; after all, you went into academic research rather than into a more lucrative field such as, say, finance, because you are driven by a consuming interest in finding out how the universe works. But any attempt at a conversation about your work tends to end in a muddle of confusion ("What do you mean by a symmetry? What on earth is a group? Why do you have to prove that, isn't it obvious (or clearly false)? What's a simulation? How can a computer solve the equations if you can't, I mean, don't you have to tell it how to solve them? ...?") or worse yet, misunderstandings ("--- studies particles that form groups and have links between them, or something like that!") when talking to people lacking the basic concepts necessary to understand contemporary research, i.e. to virtually all non-scientists. Of course there always is the pop-sci approach (leave out mathematics and every other bit of "technical" detail and just focus on the beauty and wonder of it all), but the people closest to you usually want more than that: they want to be able to ask "How was your day/week?" and get a meaningful answer that does not exclude your research, which is after all what you spend most of your time doing. But if their background is in the arts or humanities, they don't just lack the technical knowledge of, e.g. group theory -- that could be easily remedied; no, most of the time they are actually deeply unfamiliar with mathematical reasoning and the general modes and methods of scientific research.

At Christmastime, there's always the question "what can I give to ---", and what better than to give the gift of greater understanding? For introducing people with a humanities background to the kind of ideas and ways of thinking used in some of the more abstract fields of theoretical physics, I have tried titles from Oxford University Press's Very Short Introduction series of books, such as the one on Particle Physics by Frank Close, the one on Quantum Theory by John Polkinghorne, the one on Cosmology by Peter Coles, or the one on Mathematics by Fields medalist Tim Gowers. I think they do a very good job at giving a non-technical introduction to their respective subjects that goes a good way beyond the usual pop-sci stuff without trying to make experts out of their readers.

For the more down-to-earth kind of physics, the most recent issue of PhysicsWorld (the one with a lattice on the front page), contains a very positive review of Louis Bloomfield's "How Everything Works", which is said to be a general-market version of the author's textbook "How Things Work" for physics courses for non-scientists.

Thursday, December 07, 2006

Lattice QCD makes title page

The latest issue of PhysicsWorld has a feature article on Lattice QCD by Christine Davies describing the recent progress made in confronting theory with experiment through unquenched lattice simulations. Among the highlights she mentions are the correct prediction of the mass of the Bc and the fact that the determinations of the quark masses and the strong coupling constant αs from unquenched lattice QCD are now more accurate than those from all other sources combined.

The article is very well written and should be easily understandable for anyone with a background in physics, and I would think that an informed layperson should also be able to learn something from it.

Axions discovered?

There is a forthcoming paper which claims the discovery of two particles, with masses of 7 and 19 MeV respectively, decaying into electron pairs; these are tentatively identified as axions.

I am a little sceptical of this claim, based as it is on two narrow peaks being 3 standard deviations above the best fit to the background. I'm no experimentalist, but the situation appears to me to be too similar to the "discovery" of the pentaquarks, which later dissolved into statistical fluctuations as better data became available.

If this discovery turns out to be real, though, this would be huge news: the discovery of the axion would solve the strong CP problem (why is the CP-violating θ-angle in QCD so close to, or even identically, zero, when it generically should be of order one?) and might also contribute to solving the riddle of dark matter. There are a number of experiments looking for the axion, the best known being PVLAS, who claimed to have found a candidate with a mass in the meV (milli-eV) range, and the new experiments at CERN aiming to test their results.

So we should wait and see if axions have indeed been observed. If so, it would be great news for theoretical particle physics: it would be, as far as I am aware, the first discovery of a particle whose existence was conjectured based on naturalness considerations alone. If not, it would show once again that results based on low statistics should always be taken with great care.

Update: More criticism (by experimentalists) of the claimed discovery can be found on Chad Orzel's blog and on Superweak.

Physics Result of the Year, Anyone?

Chad Orzel has a poll about which physics result of 2006 deserves to be called The Physics Result Of The Year. Surprisingly, so far nobody seems interested in nominating their favourite result for this honour. Which is strange, because A) the APS has a list from which you could simply pick one, and B) nobody says you can't nominate your own results if you feel so inclined. This isn't the Nobel Prize, after all.

On the lattice QCD front, I'm going to be both bold and modest at the same time and nominate the recent progress in the debate about the validity of the fourth-root trick for staggered fermions as the result of the year. It isn't really a result, because the debate isn't completely resolved, but there are results now where there was mostly just conjecture before, so this is definitely progress.

Anyway, please go to Chad's fine blog and nominate your favourite result of the year. You wouldn't want the public to think physicists have no enthusiasm for their own work, would you?

Tuesday, December 05, 2006

Hadronic Physics from Lattice QCD

As a matter of fact, I have no idea how my small circle of readers is composed with respect to physics expertise or professional position, but I like to pretend that some of my readers are physicists with a genuine interest in, but no real experience with, lattice QCD. It is to these (imagined, and perhaps imaginary) readers that I want to issue a book recommendation, just in time for inclusion on their holiday wishlist.

The book in question is "Hadronic Physics from Lattice QCD", edited by Anthony M. Green, published by World Scientific. The aim of this book is to provide an introduction to lattice QCD for non-specialist readers such as nuclear and particle physicists, and while it cannot replace one of the various introductory textbooks (such as Montvay and Münster or Rothe) as required reading for people interested in pursuing original research in the field, I think it succeeds very well at giving the non-specialist a much better idea of the how and what, the strengths and the limitations, of lattice QCD.

The book is a collection of independent chapters by different authors, each of which focusses on a specific issue of interest that can be studied using lattice QCD.

The first chapter, by Craig McNeile, starts with a basic introduction to lattice QCD and its methods, including a discussion of systematic errors and how they can be reduced via unquenching, improved actions and chiral perturbation theory. He then proceeds to give an overview of the masses of stable mesons and baryons that can be measured accurately, as well as an introduction to the use of maximal entropy methods to determine spectral functions from lattice data, and some of the methods used to incorporate electromagnetic effects and to study unstable particles on the lattice, both of which are rather hard problems.

The second chapter, by Chris Michael, is devoted to a discussion of exotics, or states that are neither conventional mesons nor baryons: glueballs, and their mixing with scalar mesons of the same quantum numbers, hybrid mesons (mesons that contain a gluonic excitation along with a quark-antiquark pair), and hadronic molecules (states consisting of several hadrons bound by their residual strong interactions).

The third chapter, by Gunnar Bali, discusses the quark-antiquark potential, starting from the static quark potential and its relation to Wilson loops, the strong coupling expansion on the lattice, the confining string picture and perturbative calculations of the potential, and going on to discuss some aspects of quark-antiquark and nucleon-nucleon potentials for nonstationary particles.

The fourth chapter, by Rudolf Fiebig and Harald Markum, is concerned with the difficult topic of hadronic interactions in lattice QCD. After describing some of the issues that arise in a 2+1 dimensional "toy" model, they discuss the highly sophisticated techniques that are used to extract information on pion-nucleon, nucleon-nucleon and pion-pion interactions from lattice QCD. This chapter has an appendix which describes aspects of improvement of lattice actions, an important ingredient in any lattice project aiming for precise predictions.

The fifth chapter, by Anthony Green, discusses "bridges" between lattice QCD and nuclear physics, such as nuclear effective field theories and potential models that are founded upon, or at least inspired by, QCD.

All chapters have extensive bibliographies that should function as excellent starting places for readers who wish to learn more about the subject.

Monday, December 04, 2006

The arXiv is changing

Via Urbano Franca: The arXiv preprint archive is changing the way it labels papers with effect from 1st January 2007. The familiar arch-ive/YYMMNNN identifiers like hep-lat/0411026 will be gone (although they will be retained for old papers), and new identifiers of the form YYMM.NNNN will take their place. The stated reason for this is that the math archive is getting dangerously close to 1000 submissions a month, which would break the existing identifier system. The new identifiers will no longer be assigned on an archive-by-archive basis; where needed, the archive will be indicated as in 0701.1234 [hep-lat]. The new system is expected to be good for a number of years, and after that five-digit identifiers YYMM.NNNNN will be needed.

This change appears to be orthogonal to the other announced big change in the physics arXiv, although it is possible that the latter is considered redundant now. A slightly more open information policy on the part of the arXiv might be nice from time to time, but I suspect they are afraid that more openness might offer more inroads to cranks and crackpots, so I kind of understand their policy of semi-secret decision-making. Still, I think it probably couldn't hurt too much if they sent out informative emails to registered authors from time to time.

Tuesday, November 28, 2006

More on modern Fortran

From the echo on my earlier post about why we use Fortran for number crunching applications, I gather that many people still associate Fortran with the worst features of the now mostly obsolete FORTRAN 77 standard (fixed source form, implicit typing) and are mostly unaware of the great strides the development of the Fortran standard has made in the past 30 (sic!) years. So I feel that this might be a good opportunity to talk a little about the advanced features that make Fortran 95 so convenient for developing computational physics applications. [Note that in the following Fortran 95 will be referred to simply as "Fortran" for the sake of brevity.]

To start with, there are Fortran's superior array features which greatly facilitate working with vectors, matrices and tensors: Being able to write vector addition as

a = b + c

instead of some kind of for-loop is a great boon in terms of code legibility. Having the sum, product, dot_product and matmul intrinsic functions available to perform common mathematical operations on arrays using a (hopefully) efficient vendor-supplied math library also helps.

But where Fortran's array features really shine is when it comes to defining elemental functions and subroutines, which save a huge amount of coding. An elemental function or subroutine is one which is defined with scalar dummy arguments, but which fulfils certain technical conditions that allow it to be called with an array passed as the actual argument. When called in this way, an elemental function is evaluated on each element of the passed array argument, and the results are assembled into an array of the same shape. Most of the mathematical functions Fortran provides as intrinsics are elemental. So one can do something like

Pi = 4.*atan(1.)
x = (/ (i*Pi/n,i=0,n) /) ! array constructor with implied do-loop
y = sin(x) ! an elemental assignment operation

to load y(i) with sin(x(i)) for all i. And better yet, user-defined functions can also be elemental, so you only ever need to write the overloaded operators for your automatically-differentiating spinor type as scalars, and Fortran will take care of dealing with the arrays of those spinors that occur in your code.
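
To illustrate (with a deliberately trivial function invented for this post, rather than anything from a real production code), a user-defined elemental function looks like this:

module gauss_mod
  implicit none
contains
  elemental function gaussian(x, sigma) result(g)  ! scalar dummy arguments
    real, intent(in) :: x, sigma
    real :: g
    g = exp(-0.5*(x/sigma)**2)
  end function gaussian
end module gauss_mod

program use_gaussian
  use gauss_mod
  implicit none
  integer, parameter :: n = 100
  integer :: i
  real :: pi
  real, dimension(0:n) :: x, y
  pi = 4.*atan(1.)
  x = (/ (i*pi/n, i = 0, n) /)   ! the same array constructor as above
  y = gaussian(x, 0.5)           ! called with an array: evaluated element by element
  print *, maxval(y)
end program use_gaussian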

Next in usefulness and importance comes the already mentioned support for overloading operators and intrinsic functions on user-defined types. Again, this provides a lot of convenience in terms of maintaining a coding style that stays as close to standard mathematical notation as possible, and of keeping the gory details of complex type operations (such as multiplying two automatically-differentiating spinors) transparent to the programmer/user on the next higher level. The ability to have public and private procedures and variables in modules also helps with this kind of encapsulation.
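
As a purely illustrative sketch of what this looks like in practice, here is a toy module that defines a small matrix type, exports only the type and the overloaded multiplication, and makes that multiplication elemental:

module mat2_mod
  implicit none
  private                             ! nothing is visible outside unless listed below
  public :: mat2, operator(*)

  type mat2                           ! a 2x2 complex matrix standing in for a fancier spinor type
    complex, dimension(2,2) :: m
  end type mat2

  interface operator(*)               ! overload * for the new type
    module procedure mat2_mul
  end interface
contains
  elemental function mat2_mul(a, b) result(c)
    type(mat2), intent(in) :: a, b
    type(mat2) :: c
    c%m = matmul(a%m, b%m)
  end function mat2_mul
end module mat2_mod

With this in place, two conforming arrays of type(mat2) can simply be multiplied as c = a * b, and the code on the next higher level never needs to look at how the multiplication is implemented.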

And that isn't all: namelist I/O provides a cheap kind of configuration file; selected_int_kind and selected_real_kind allow testing whether the system provides sufficient range/precision at compile time; and the forall construct allows some measure of parallelization where the compiler supports it.
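
A small sketch of these three features together (the file name and the parameters are invented for the example):

program misc_demo
  implicit none
  integer, parameter :: dp = selected_real_kind(p=14, r=300)  ! compilation fails if no such kind exists
  integer :: i, j, n
  real(dp) :: beta
  real(dp), dimension(4,4) :: m
  namelist /params/ n, beta                    ! the "configuration file" is a &params ... / group
  n = 16                                       ! defaults, overridden by the namelist read below
  beta = 5.7_dp
  open(unit=10, file='run.in', status='old')   ! hypothetical input file
  read(10, nml=params)
  close(10)
  forall (i = 1:4, j = 1:4) m(i,j) = real(abs(i-j), dp)  ! a loop nest the compiler may parallelize
  print *, n, beta, sum(m)
end program misc_demo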

The next incarnation of the Fortran standard, Fortran 2003, exists only as a standard document at this time, although compiler vendors are beginning to add features from Fortran 2003 to their compilers in a piecewise fashion. Major new features include object orientation (single inheritance, polymorphism and deferred binding) and C interoperability (so you can call C system libraries from Fortran programs, and/or call your Fortran matrix-crunching code from a C application).
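
As a rough sketch of the C interoperability features (the routine and its contents are made up for illustration, and compiler support is still patchy at the time of writing):

subroutine scale_vector(n, x, y) bind(c, name='scale_vector')
  use iso_c_binding, only: c_int, c_double
  implicit none
  integer(c_int), value       :: n    ! passed by value, like a C int
  real(c_double), intent(in)  :: x(n)
  real(c_double), intent(out) :: y(n)
  y = 2.0_c_double * x                ! stand-in for the actual number crunching
end subroutine scale_vector

A C (or Python/ctypes) front end can then call this routine with a prototype along the lines of void scale_vector(int n, double *x, double *y).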

And after that, the future Fortran 2008 standard is expected to include co-arrays to support parallel and distributed computing, a bitfield type, a Fortran-specific macro preprocessor, among other things.

Advanced programmers keen to learn about Fortran 2003's new features may want to have a look at the Fortran 95/2003 book by Cohen, Metcalf and Reid. This is the successor to Metcalf and Reid's Fortran 90/95 book, which is still probably the best reference to modern Fortran, as most of the Fortran 2003 features aren't available on most platforms yet, whereas standard Fortran 95 is very portable.

For beginning programmers, or people who have never worked with any flavour of Fortran before, the book by Chapman (which I haven't read personally) may be a better choice, judging from the reviews I've seen, but the reviews also indicate that it is not useful as a reference, so you may have to get the Metcalf et al. book(s) anyway.

Lattice 2006 proceedings available

The proceedings for the Lattice 2006 conference are now available at Proceedings of Science, so if you weren't there, you can now read the written versions of the talks and form your own idea of most of what was said and discussed in Tucson, except for the bits that had to be left out due to the fairly strict (for an online publication) page limits.

Wednesday, November 22, 2006

One year on the lattice

I just noticed that I let our first anniversary as the world's first and only lattice QCD group blog slip by unnoticed last week. Yes, it has been over a year now since I joined the blogosphere and took over the role of Lead Contributor to "Life on the Lattice" from Matt!
So far, I have thoroughly enjoyed the experience, and I would not hesitate to suggest it to any other lattice field theorist (or any other kind of physicist) who might be thinking about ways to connect to a wider audience.

Tuesday, November 21, 2006

Lattice Forecast for 2056

Via Cosmic Variance and BioCurious: New Scientist has some well-known scientists forecast where science will be in 50 years.

A lot of the predictions are of the kind that people made 50 years ago for today: AIs more intelligent than people, permanent colonies on other planets, immortality drugs, contact with alien civilisations. They haven't come true in the past 50 years, and (exponential growth laws notwithstanding) I see no reason why they should come true in the next 50 years. The other kind of prediction seems much more likely to come true: detection of gravity waves, important discoveries at the LHC, significant progress in neuroscience, solutions for all of the Millennium problems, a firm understanding of dark matter and dark energy, a means to grow human organs in vitro, working quantum computers. And of course, just like nobody 50 years ago predicted the internet or the role of mobile phones in today's world, we should really expect that something completely unexpected will become the leading technology in 50 years.

What really irks me, though, is that there is no forecast from a lattice field theorist. After all, lattice QCD has made huge progress over the past decade, but apparently it isn't sexy enough for New Scientist these days. So here I am going to contribute my own 50-year forecast:

Over the next few decades, parallel computing will make huge advances, with machines that make today's TOP500 look feeble by comparison becoming readily affordable even to smaller academic institutions. As a consequence, large-scale simulations using dynamical chiral fermions will become feasible and will once and for all lay to rest any remaining scepticism regarding the reliability of lattice simulation results.

Predictions of "gold-plated" quantities will achieve accuracies of better than 0.1%, outshining the precision of the experimental results. If the limits of the Standard Model are at all accessible, discrepancies between accurate lattice predictions and experimental results in the heavy quark sector will be a very likely mode of discovering these limits, and will hint at what comes beyond. Lattice QCD simulations of nuclear structure and processes will become commonplace, providing a first-principles foundation for nuclear physics and largely replacing the nuclear models used today.

On the theoretical side, the discovery of an exact gauge dual to quantum gravity will allow the study of quantum gravity using Monte Carlo simulations of lattice gauge theory, leading to significant quantitative insights into the earliest moments of the universe and the nature of black holes.

Thursday, November 09, 2006

Lisa Randall online chat

At the request of Coco Ballantyne of Discover Magazine, here's a link to the online chat with Lisa Randall hosted by Discover Magazine today at 14:00 EST. Personally, I won't be joining the chat since I don't believe I have anything pertinent to say about warped extra dimensions (in fact, I doubt I even know enough to ask a relevant question), but some readers of this blog might be interested in chatting with a rather famous physicist about her research and her efforts to popularise modern physics ideas.

Thursday, November 02, 2006

Why we use Fortran and Python

From Mark Chu-Carroll, a post on why C/C++ aren't always fastest, which is of course well known in (large parts of) the scientific computing community: Fortran compilers can perform much better optimisation than C compilers, because Fortran has true arrays and loop constructs, as opposed to C's sugar-coated assembler. C is a great language to develop an operating system or a device driver, but not to write computationally intensive code that could benefit from parallelisation, where Fortran beats it easily. And what about C++? The object-oriented features of C++ are nice for scientific applications, sure; you can have matrix, vector and spinor types with overloaded operators and such. But Fortran 95 has those things too, and it doesn't suffer from the problems that C++'s C heritage brings. And Fortran 95 has even nicer features, such as elemental functions; that's something that no C-based language can give you because of C's poor support for arrays. And in case there is some object-oriented feature that you feel is missing in Fortran 95, just wait for Fortran 2003, which includes those as well.

But what about developing graphical user interfaces? Fortran doesn't have good support for those, now, does it? No, it doesn't, but that's beside the point; Fortran ("FORmula TRANslation") is meant as a number-crunching language. I wouldn't want to write a graphical user interface for my code in either Fortran or C/C++. For these kinds of tasks, I use Python, because it is the most convenient language for them; the user interface is not computationally intensive, so speed isn't crucial, and for the number crunching, the front end calls the fast Fortran program, getting you the best of both worlds -- and without using any C anywhere (other than the fact that the Python interpreter, and probably the Fortran compiler, were written in C, which is the right language for those kinds of tasks).

Most lattice people I have personally worked with use Fortran, and a few use Python for non-numerical tasks. Of course there are large groups that use C or C++ exclusively, and depending on what they are doing, it may make sense, especially if there is legacy C or assembly code that needs to be linked with. But by and large, computational physicists are Fortran users -- not because they are boring old guys, but because they are too smart to fall for the traps that the cool C++ kids run into. (Oh, and did I mention that Fortran 95 code is a lot easier to read and debug than C++ code? I have debugged both, and the difference is something like an hour versus a day to find a well-hidden off-by-one bug.)

Thursday, October 19, 2006

Invisibility Lattice

Researchers at Duke University and Imperial College have created the world's first "cloaking device", a metamaterial ring that hides a copper piece from detection by microwaves. While still a far cry from the fictional technology known to Star Trek fans, this is the first experimental demonstration that metamaterials with a negative index of refraction can in fact be used to obtain this kind of effect (which had been theoretically predicted).

A metamaterial is a lattice-like arrangement of structures that acts like a material with electromagnetic properties not found in conventional materials (no, they have nothing to do with lattice QCD at all, but I liked the catchy caption). A negative index of refraction occurs in metamaterials whose effective electric permittivity ε and magnetic permeability μ are both negative; in this case, the refractive index n = ε^(1/2)μ^(1/2) becomes negative, and waves are refracted the opposite way from usual. For more on this pretty interesting stuff, see here and here, or (for people with appropriate subscriptions) here.
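
For completeness, a one-line version of the argument (under the assumption of negligible absorption, so that ε and μ sit just above the negative real axis in the complex plane) is

n = \sqrt{\epsilon}\,\sqrt{\mu} = \sqrt{|\epsilon|}\,e^{i\pi/2}\,\sqrt{|\mu|}\,e^{i\pi/2} = -\sqrt{|\epsilon|\,|\mu|},

i.e. the physical branch of the square root is the negative one, so the refracted ray ends up on the "wrong" side of the normal.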

Tuesday, October 17, 2006

Nobel Prize Winning Opera Singer

2004 Nobel Prize winner Frank Wilczek has started a new career as an opera singer, starring in the mini-opera "Atom & Eve" (no, not this one, but this one) at the 2006 Alpbach Technology Conference.

The opera tells the love story between Atom, the lonely oxygen atom, and Eve, the atomic physicist, whose rather significant scale disparity causes a few problems that are finally overcome by means of Bose-Einstein condensation which allows Atom to exist on a macroscopic scale. Apparently the libretto as performed differed from the one linked to above in having a happy ending. And, according to Physik Journal, Wilczek has already set his sights on the next great prize to win: a Grammy.

Tuesday, October 03, 2006

Physics Nobel Prize 2006

The 2006 Nobel Prize in Physics goes to John C. Mather and George F. Smoot "for their discovery of the blackbody form and anisotropy of the cosmic microwave background radiation".

The award honours the achievements of the COBE (COsmic Background Explorer) mission, which was the first to measure the anisotropies in the cosmic microwave background radiation (the radiation itself having been discovered by Penzias and Wilson, who received the 1978 Nobel Prize for it). The cosmic microwave background (CMB for short) is the light emitted by the gas in the young (about 300,000 years after the Big Bang) universe when it had cooled down far enough for atoms to form (to about 3000 Kelvin), making it transparent to light for the first time. The expansion of the universe since then has stretched the wavelength of that light into the microwave range by today (about 14,000,000,000 years after the Big Bang), making it look like the radiation of a black body of temperature 2.728 Kelvin. In fact, one of the achievements honoured by the award is the demonstration that the measured CMB spectrum is the most perfect blackbody spectrum ever seen. The other achievement being honoured is the measurement of the tiny anisotropies in that background which were caused by density fluctuations in the primordial gas, which would later form galaxies and stars through gravitational collapse. This work has had a huge impact on our understanding of the early history of the universe. A more detailed study of the CMB is being done by WMAP, which has also made huge contributions to our understanding of the history and composition of the universe (so who knows, maybe there will be another Nobel Prize for CMB explorers in the future).
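
To put a rough number on that stretching (a back-of-the-envelope aside of mine, not part of the prize citation): since the blackbody temperature scales inversely with the expansion, the redshift factor is

1 + z = \frac{T_{\mathrm{emission}}}{T_{\mathrm{today}}} \approx \frac{3000\ \mathrm{K}}{2.728\ \mathrm{K}} \approx 1100,

so the wavelengths have been stretched by a factor of roughly a thousand since the light was emitted.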

More about this from Backreaction, Dave Bacon, Cosmic Variance, Clifford Johnson, Andrew Jaffe, Rob Knop, Chad Orzel, Steinn Sigurðsson or the conventional media.

Sunday, September 24, 2006

Signs of Life

Sorry for the big silence -- the last month I was mostly out of range, visiting family and friends in Germany and England, and spending a two-week vacation in Venice, most of the time without any internet access.

I'm currently thinking about doing a series of expository posts about Lattice QCD for Non-Physicists, since I feel that most of the stuff posted on this blog so far is probably incomprehensible for most people without a physics degree, and blogs are an ideal means of outreach to the wider public after all. Comments (both encouragement and criticism) are welcome.

Monday, August 21, 2006

Singing Vikings

They have struck again, and removing all their odes about "spam, spam, spam, lovely spam", or rather their pages upon pages of link adverts for pr0n, pi11z and war3z, took me half an hour of my maybe-generally-not-too-precious-but-still-too-precious-for-this-garbage time. I have hence reenabled comment moderation, which I disabled after no comment spam had appeared for several months in the (apparently mistaken) belief that Blogger had come up with a working spam filter. Please accept my apologies for any inconvenience this measure may cause.

On the physics front, everything is now abuzz with news that the Chandra X-ray observatory has made the most direct observation of dark matter so far. Read more about it from Rob Knop, Steinn Sigurðsson, Chad Orzel, Clifford Johnson and Sean Carroll, or read the original NASA press release. The BBC is also reporting.

Thursday, August 17, 2006

More links

This is just another collection of links and such.

There is a new group blog in town: over at Jacques Distler's golem, arch-proto-blogger John Baez, String Coffee Table host Urs Schreiber and philosopher of mathematics David Corfield have formed the n-Category Café, where they will discuss the mathematical, physical and philosophical impact and applications of n-Categories.

Also new to our blogroll are Cosmic Variance contributor Clifford Johnson's new personal blog Asymptotia, and the apparently anonymous Superweak.

The BBC has a number of astronomy stories in the news: Here they discuss the new definition of a planet, which for some reason really captivates the general public, and here they report on a recent observation by the Hubble telescope of the dimmest stars in the Milky Way, while here they report about a recent determination of the Milky Way's age using the VLT telescope array. No particle physics news there, unfortunately.

For many parallel talks given at the Lattice 2006 conference, slides are now also available (just follow the links that used to lead to the abstracts). In case anybody is interested (or bored) enough to care, here are mine. The first proceedings contributions should start appearing on the arXiv in the next couple of weeks or so, once the participants have received the necessary LaTeX files and instructions from Proceedings of Science.

Thursday, August 10, 2006

News summary

No, this is not about the threat to air travel (although that is something currently on my mind, as I am going to fly to Europe at the end of this month, and a couple of days in the UK were actually part of the plan); this is a physics blog, and I will leave publicly debating the current political events to people with greater expertise (and it appears I'm not alone in doing so). This post is about some science news that may have been buried among all the terror and war.

James Van Allen, a pioneer of space exploration and the discoverer of the radiation belts named after him, died yesterday. The NYT article mentions his participation in "Project Argus, the firing of three atomic bombs 300 miles aloft over the South Atlantic." I had never heard of this before, and I think it bears repeating that in spite of the threat of terrorism, the world is probably a safer place overall today than it was at the height of the Cold War, when the impending total annihilation of all life on Earth by all-out global thermonuclear warfare was a distinct possibility (I recently happened to switch on my TV when War Games was being shown, and I remember thinking "well, at least nuclear war is a very remote threat today").

The Perseid meteor shower peaks this Saturday, but the still almost full moon will probably make for less than ideal viewing conditions; still, any hobby astronomers and star-lovers out there should probably spend Saturday evening outdoors with their eyes on the sky.

Via Tommaso Dorigo, Superweak has an interesting post on Dalitz plots. Richard Dalitz, who died in January of this year, was the Ph.D. supervisor of my own Ph.D. supervisor, Ron Horgan; the world is small.

Wednesday, August 09, 2006

Lattice 2006 -- Summary

As threatened earlier, here is my personal review of the Lattice 2006 conference, in the form of an incomplete list of disjointed observations:

Driven by the RHIC data, QCD at finite temperature and/or chemical potential is rapidly becoming a leading subfield within lattice QCD; at this meeting, seven out of 22 plenary talks were about some aspect of QCD thermodynamics, and the number of parallel talks on "High temperature and density" topics was second only to that of the traditionally most numerous spectroscopy talks.

The debate about the validity of the fourth-root prescription for staggered fermions, which an anonymous observer called "the staggered wars", shows no sign of coming to an end. Although a lot of progress has been made recently towards showing the correctness of the rooting prescription, a number of unattractive features have been found at the same time, fueling the flames.

Progress regarding more accurate determinations of CKM matrix elements from lattice QCD is slow, but steady; a lot of this work is very difficult, since getting high precision requires good control over perturbative errors and chiral extrapolations, and both lattice perturbation theory and chiral perturbation theory are hard and suffer from a lack of practitioners.

The AdS/CFT correspondence is beginning to become a topic of interest to researchers working on QCD, and string theory returns to its origins in the strong interactions where it may become a helpful tool to build and solve models of QCD.

Dynamical simulations with overlap fermions are arriving, but it will be a while until they get to the range of lattice spacings, lattice sizes and quark masses that have been studied using staggered fermions.

Everyone will be able to form their own opinion on what was new, what was hot and what was not, once the proceedings have been published by Proceedings of Science (and even before that, when there will be a flood of new papers on the currently fairly quiet hep-lat arXiv).

Monday, July 31, 2006

Lattice 2006 -- Day Five

Hello from Regina, where I have now recovered from my flight back from Tucson, and hence am ready to report on the last day of this year's lattice meeting.

The last day consisted of plenary sessions only. The first plenary, chaired by Anna Hasenfratz, was right after another indoors breakfast. The first speaker was Richard Brower, who gave a non-lattice talk about QCD and string theory, or more specifically the search for a dual description of QCD in the form of string theory on an AdS background. He started out by giving a historical overview of the development of string theory from its beginnings as an attempt to describe the strong interactions based on the observed behaviour of Regge trajectories and s-t duality, recounting the well-known failure of string theory to capture the correct hard scattering behaviour in strong interactions, along with the need to incorporate gravity. The situation changed with the discovery of dualities and the AdS/CFT correspondence: now string theory on an AdS5 x S5 background is dual to N=4 Super-Yang-Mills theory in a 4d spacetime, with the strong coupling limit of SYM corresponding to the weak coupling limit of AdS. Of course we know that QCD is not a superconformal theory, so a description of QCD based on AdS/CFT has to break the conformal symmetry by introducing a boundary along the fifth-dimension of AdS5; there are a number of models of this kind, and while they manage to reproduce some qualitative features of the QCD glueball spectrum as seen on the lattice, other features are qualitatively different, and the quantitative agreement is usually rather poor. However, there is some hope that an exact string dual of QCD might still be found, returning string theory to its origins as a theory of the strong interactions.

The second talk was by Mark Alford, who spoke about colour superconductivity. Colour superconductivity arises via the BCS mechanism just like ordinary superconductivity, but instead of the weak phonon-mediated attraction, quarks attract via the much stronger strong interactions, making Cooper-pairing even more efficient. Hence we expect QCD matter to be colour superconducting at large chemical potentials, making this phase probably relevant for the study of the interior of neutron stars. Unfortunately, that region of the QCD phase diagram is not (yet) accessible on the lattice due to the sign problem. In the limit of infinite chemical potential, perturbative descriptions are possible; NJL models provide another qualitative description of this phase. What is found is that in this limit, for Nf=3 massless flavours, a curious phenomenon called colour-flavour locking (CFL) occurs: Light quarks of a given flavour only occur carrying a given colour charge, breaking the symmetry group from SU(3)c x SU(3)L x SU(3)R x U(1)B to SU(3)CFL x Z2. The electromagnetic gauge group U(1)Q, which was embedded in the SU(3)L x SU(3)R chiral group, is now changed into a U(1)Q' subgroup of SU(3)CFL due to photon-gluon mixing. This phase is therefore somewhat weird. It becomes complicated due to the fact that the strange quark mass isn't really zero, and also due to the weak interactions breaking flavour (while this is a weak effect, a compact star exists for a long time, giving the weak interactions time to act and affect the equilibrium); models indicate that this will lead to a complex phase structure in the regime of intermediate chemical potential. However, it is also known that a number of the phases found in the models, the so-called gapless phases, are artifacts and will not exist in full QCD; what will replace them is not known, and may not become known until a way to resolve the sign problem on the lattice is found.

After the coffee break the second plenary session was chaired by Peter Weisz, on behalf of the Local Organising Committee for next year's lattice meeting to be held in Regensburg, Germany. The session started with Tommy Burch extending a warm invitation to Regensburg to everyone and extolling its virtues as a lovely city and excellent conference venue. Lattice 2007 will be at Regensburg from 30th July to 4th August 2007, dates that should be in every lattice theorist's diary. Peter Weisz thanked the Local Organizing Committee in Tucson for organizing such a splendid conference, which was met with lots of applause.

The first talk of the last session was Urs Heller speaking about Lattice QCD at finite temperature (and zero chemical potential), concentrating especially on the nature of the transition as a function of the light quark masses, and on the QCD equation of state. On the first count, it seems conclusive by now that at the physical values of mu,d and ms the phase transition is in fact a crossover rather than a first-order transition. On the second count, the low-temperature description of QCD matter by a hadron resonance gas and the high-temperature description by finite-temperature perturbation theory seem to match quite well onto the lattice data in their respective domains of validity. Some studies of non-static finite-temperature physics, such as transport coefficients, are also beginning to be undertaken on the lattice now.

The second speaker was Joel Giedt, who talked about lattice SUSY. Unfortunately this is a rather technical field quite remote from my area of expertise, and thus I feel unable to give a reasonable summary of his talk. What I believe I understood was that a number of supersymmetric lattice theories are now known, that there is some problem with the Kähler potential being underconstrained by the symmetries and that actual lattice simulations might be helpful there, as well as in studying the AdS3/CFT2 correspondence.

The final talk was by Tom DeGrand, who was the only plenary (and probably simply the only) speaker to use foils and an overhead projector instead of a digital presentation to speak about the Nf=1 quark condensate. In the Nc --> ∞ limit, it is found that Nf=1 QCD with quarks in the antisymmetric representation corresponds to N=1 Super-Yang-Mills theory. Nf=1 QCD is peculiar in that there are no light pions, only a massive η'. When overlap fermions are used to simulate at fixed gauge topology, it becomes possible to determine the quark condensate via the spectrum of the overlap Dirac operator; in this way, the 1/Nc corrections to the Nc --> ∞ limit are found to be small even at Nc=3.

At noon, the symposium was adjourned, and the participants began to scatter.

Since my flight only left in the evening, I managed to go and sneak a look at a very interesting historical monument located near Tucson, the San Xavier del Bac mission church. This mission was founded by the Jesuits in the late 17th century, and the present church was completed by Franciscans in the late 18th century. The church itself is built in a colourful version of the baroque style with many elements of "naive" or peasant art in the ornamentation, suggesting that it was planned by the missionaries and executed by the local Natives, the Tohono O'odham, themselves. The white walls of the towers are visible from afar across the desert, giving this remarkable church the nickname "white dove of the desert".

As for the trip back from Tucson, I feel little need to bore my readers with the details of our 15-hour zig-zag trip across the North American continent via L.A. and Toronto to Regina, and thus conclude my report on the Lattice 2006 meeting at this point. Thank you for reading; if and when I feel like it, I may follow up with an overall summary of the conference later.

Friday, July 28, 2006

Lattice 2006 -- Day Four

Hello again from Tucson. Today started off somewhat unusually -- with rain, clouds and mist! So, no breakfast in the shade on the terrace; we didn't have to go hungry, though, as it was just relocated to the dining room instead.

The first plenary session of the morning was chaired by Anthony Williams. The first speaker was Kostas Orginos, who talked about recent lattice results on nucleon structure. Nucleons are tricky, because they have only light quarks, and it is known that the sea quarks actually play a bigger role than the valence quarks in determining the structure of the nucleons. However, with a lot of hard work and clever methods, people have made a lot of progress towards getting accurate results for the nucleon structure functions, moments of generalised parton distributions, and various other structure-related quantities, and these results may one day soon help to lead to an understanding of e.g. the proton spin crisis.

The second speaker was Christian Schmidt, who spoke about lattice QCD at finite density. As mentioned yesterday, finite density QCD is hard on the lattice, because the action becomes complex and direct Monte Carlo simulations are no longer possible at non-zero chemical potential μ. The way to avoid this sign problem lies in one or another of a number of neat tricks such as reweighting configurations obtained at μ=0 to a finite value of μ, measuring Taylor expansions around μ=0 and resumming the series, simulations at imaginary μ (where the action remains real) with subsequent extrapolation to real μ, or some other method. A fair number of results exist now in this field, and while the quantitative precision still seems fairly low, there appears to be fair agreement on the qualitative features of the phase diagram. For large μ, however, new methods appear to be needed.
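
For what it's worth, the basic identity behind the first of these tricks can be written schematically (glossing over the practical problem of poor overlap between the two ensembles) as

\langle O \rangle_{\mu} = \frac{\langle O \,\det M(\mu)/\det M(0) \rangle_{0}}{\langle \det M(\mu)/\det M(0) \rangle_{0}},

where the averages on the right-hand side are taken in the ensemble generated at μ=0, and the (generally complex) ratio of fermion determinants acts as the reweighting factor.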

After the coffee break, the second plenary session, chaired by Sinya Aoki, had three speakers: First was Pilar Hernández, who reported on progress she and her collaborators had made towards understanding the ΔI=1/2 rule. This rule, which states that Kaon decays in which isospin changes by more than 1/2 are suppressed by a factor of approximately 20, is one of the longest-standing mysteries in QCD. Resolving it will require putting together a lot of work and know-how from both lattice QCD and chiral perturbation theory, and the people working on it seem to be far from a resolution in spite of a lot of recent progress.

Next was Michael Clark speaking about the Rational Hybrid Monte Carlo (RHMC) algorithm. This algorithm is a variation on the well-known HMC algorithm and uses a rational approximation to maintain the exact nature of the HMC algorithm (which is needed in many cases), while outperforming the Polynomial HMC (PHMC) algorithm through the better approximation properties of rational functions as opposed to polynomials. Apparently, with the proper implementation, this algorithm can push Wilson fermions into a speed range where they become competitive with staggered fermions.

Finally, Mikko Laine talked about warm dark matter (WDM) and hot QCD. Interesting candidates for WDM particles are sterile right-handed neutrinos. These would have been created thermally in the early universe. As it turns out, for right-handed neutrino masses in the keV range, the production rate peaks at temperatures of around the QCD scale, so that QCD contributions to the production rate, e.g. via ū + d --> e- + ν̄e with subsequent ν̄e --> N1 oscillation, might be dominant.

After lunch, there were parallel sessions again, featuring amongst others my talk (which went fine, thanks for asking) about our recent work on determining the QCD/NRQCD matching coefficients for leptonic widths of heavy quarkonia to O(αs v^2) for realistic lattice NRQCD actions.

After the parallel sessions, we heard this year's keynote talk, delivered by Ann Nelson, who extended an invitation to all lattice theorists to work on beyond-the-Standard-Model physics, where models such as composite Higgs models could benefit from lattice simulations.

The day closed with dinner. There are going to be more plenary talks tomorrow, but you will have to wait for me to get back to Regina before I can report about them.

Thursday, July 27, 2006

Lattice 2006 -- Day Three

Hello again from Tucson.

Day three was the odd one out in that the program today was arranged a little differently from the other days. As usual, the day started off with a plenary session, chaired by Philippe de Forcrand. The first speaker was Misha Stephanov, who talked about the QCD phase diagram. The general features of the phase diagram (confinement at low temperature and density, quark-gluon plasma at high temperature, colour superconductivity and colour-flavour locked phase at high density, and the phase transition lines separating these phases) are fairly well known by now. What is a lot less well known is the location of the critical point at which the phase transition line from the confined phase terminates and the transition turns into a crossover. A number of models have given wildly different predictions for its location, and since working at finite chemical potential on the lattice is only possible by some ingenious tricks (the action is no longer real with a real chemical potential, so Monte Carlo methods won't work directly), the lattice predictions are somewhat in disagreement with each other as well. On the experimental side, RHIC is able to scan some region of the phase diagram by varying the center-of-mass energy in heavy ion collisions, so there is some hope of nailing it down in the near future, though.

Next came a much-anticipated talk by Stephen Sharpe, who summarized the debate on the validity of the fourth-root trick for staggered fermions. The options which he put up initially were "good" (works as desired without any problems), "bad" (wrong continuum limit, hence wrong physics) and "ugly" (right continuum limit, hence ultimately right physics, but lots of complications and unexpected features). Since rooted staggered fermions have been shown to be non-local, the "good" option was ruled out right away, which might seem worrying given that the stakes are so high with the best ensembles of configurations (by MILC) currently in existence relying on rooted staggered fermions. However, he pointed out that non-locality does not mean the theory is sick; an example being certain non-local Ising models which turn out to lie in the same universality class as the local model if the non-locality falls off fast enough at large separations. The replica trick and renormalisation group analysis elaborated in the parallel talks by Bernard, Golterman and Shamir were explained again, and Mike Creutz's objections to a number of features of rooted staggered fermions were answered in the next sections of this talk. The summary was that rooted staggered fermions were not "bad" (as shown by the Bernard-Golterman-Shamir analysis), but that they were "ugly" (as pointed out by Creutz's criticisms).

After the coffee break, the program changed from its usual format: a parallel session replaced the usual second plenary session. That plenary session took place after lunch instead, with Shoji Hashimoto in the chair. The first speaker was Anthony Duncan, who spoke about applications of methods from lattice field theory to problems in the theory of Coulomb gases appearing in biophysics. These problems can be transformed into Feynman path integrals defined with a lattice cutoff by some ingenious transformations, and Monte Carlo methods developed for lattice QCD can then be used to treat them.

The second talk was the traditional experimental talk, delivered by Alessandro Cerri, who gave an overview of recent advances in flavour physics. I had to sneak out of the room at the end of this talk, and hence I cannot report anything on the third talk, entitled Search for gluonic excitations in light unconventional mesons by Paul Eugenio.

In the later afternoon and evening we had an excursion to the Arizona-Sonora Desert Museum, which was much, much better than the excursion on the first day. The desert museum is a combination of botanical garden and zoo, which features the astounding variety, breathtaking beauty and sheer strangeness of this most extraordinary landscape. There were dozens of different kinds of cacti, agaves and other desert plants, mountain lions, wolves, coyotes, javelinas, coati, hummingbirds and (yes, that is not a typo) otters and beavers, colourful minerals and fossils and the scorching heat of the sun, all of which combined to leave a remarkable impression (besides making me scold myself again for being stupid enough to forget my camera). The day closed with the banquet, which was held in the grounds of the desert museum and was very pleasant, even if the chocolate cake for dessert was a little too delicious.

Tuesday, July 25, 2006

Lattice 2006 -- Day Two

Hello again from the Lattice 2006 conference in Tucson, Arizona.

The second day started with plenary sessions again. The first session was chaired by Julius Kuti, and began with a talk by Leonardo Giusti about simulating light dynamical fermions on the lattice; the main focus of the talk was on new developments using Wilson fermions, although some results on Ginsparg-Wilson and twisted mass fermions were mentioned as well, while staggered quarks were missing almost completely. Important areas covered were the need to control all systematic errors in a truly "first principles" approach, and the problems that Wilson fermions face because their spectral gap is not always positive, along with some proposals as to how this problem might be resolved, as well as direct comparisons with chiral perturbation theory results for the finite-size errors (which seem to show some significant discrepancies in many cases).

Next was a talk by Hank Thacker, who spoke about new types of extended topological objects: If the topological charge density is determined from the spectrum of the overlap operator via the index theorem, what is found is that there appear to be no instantons, but instead thin extended three-dimensional sheets of coherent topological charge, with two sheets of opposite topological charge always next to each other. Two-dimensional CP^(N-1) (toy) models show similar structure for N>3. These sheets may be identical to domain walls that appear in certain AdS/CFT models as the remnant of D6-branes wrapped around a 4-sphere, where they separate so-called k-vacua whose θ-parameter differs by 2πk. They may also be suggestive of some kind of relation between N=1 SYM and Nf=1 QCD. A point that was raised during questions was that, since the width of these sheets appears to be on the order of the lattice spacing, they don't scale and in this kind of picture the continuum limit would either not exist or at least look very weird.

After the tea break, the second plenary session of the morning had Maria Lombardo in the chair. The first talk was by Tetsuo Hatsuda, who spoke about RHIC physics and hot QCD. At the center was the possibility of using heavy flavours as probes to look into the RHIC fireball. Relevant lattice results concern the temperature dependence of the Debye screening mass and the spectral functions of charmonia, which can be reconstructed via MEM. What is found there is that the J/ψ and ηc persist well up to temperatures of about 1.5 TC, whereas their orbital excitations disappear around TC.

The last talk of the morning was by Tetsuya Onogi and was a review of progress in heavy flavour physics from the lattice. This is such a large and active field that he actually had to apologize to all the people (including myself) who had sent him materials which he had no time to include in his talk. The physics goal in this area is largely to overconstrain the elements of the CKM matrix through determinations of heavy meson decay constants and mixing parameters; this is exciting because it might lead to the discovery of new physics beyond the Standard Model, and also because the errors on these quantities are currently dominated by the theoretical errors. So the results presented were largely determinations of fB, fBs, fD, fDs, BB etc. and various ratios and combinations thereof. Other results included determinations of mb and various parameters in HQET.

After lunch there are going to be parallel sessions. Stay tuned.

Update: The afternoon parallel sessions are over now. One of them was almost entirely devoted to talks aiming to resolve the debate about staggered fermions outlined earlier on this blog. Essentially, as far as I understand the argument, what is claimed is that firstly, rooted staggered fermions are non-local because of taste-breaking, but that secondly, the continuum limit exists nevertheless and is in the right universality class by renormalisation group arguments, and that thirdly, the correct chiral perturbation theory for rooted staggered fermions can be obtained from staggered chiral perturbation theory using a "replica trick" whereby one considers nR copies of the theory and takes nR=1/4 in the end. The speakers (Maarten Golterman, Yigal Shamir and Claude Bernard) got into a somewhat heated argument with Mike Creutz about the whole issue.

Still upcoming today: the poster session. Stay tuned.

Update: The poster session was only moderately exciting, which was probably due to the fact that there were a lot of posters that really were 20-page papers pinned to a wall, which I find rather off-putting, since you would have to read them in full before talking to the presenter. A good poster (at least in my opinion) is very different from a good paper; the poster should minimize the amount of unnecessary text and use figures and other graphical layout elements to emphasize the main point, since the details can always be filled in by the presenter.

There also was a little problem with the food, which was served only during the first hour of the session; this meant that people who presented their posters in the "A" section got nothing to eat.

The most unusual poster was a live presentation of the ILDG by people from the ILDG working group. Mike Creutz's poster on "diseases" with rooted staggered fermions also got a lot of attention. And the posters by the people from Regina were also nice, although I may of course be biased in their favour.

Lattice 2006 -- Day One

Hello from Tucson, Arizona, where I am at the Lattice 2006 conference.

Unfortunately, I am facing a similar technical problem to the one Matt experienced last year in Dublin: the wireless age is not quite upon us yet (at least not unless one is willing to pay outrageous internet fees to the hotel), so I will have to report after the event, rather than blog live.

This year the lattice conference takes place here in the middle of the very picturesque Arizona desert (sorry, I forgot my camera at home -- I'm already kicking myself for it, so you don't need to) at the extremely luxurious Starr Pass resort. Getting here from Regina was more than a little tedious, but I won't bore you with tales of endless lineups at US customs or long-delayed flights. Instead I'll jump in medias res:

After a welcome message from the President of the University of Arizona and a number of announcements (such as that we should remember to drink plenty of water), the first plenary session (chaired by Junko Shigemitsu) started with a talk by Weonjong Lee about recent progress in Kaon physics on the lattice. The main point of his talk was to emphasize how essential improvement is in order to reduce the impact of lattice artifacts, and to advertise HYP smearing over ASQTAD. The results presented included demonstrations of how taste-breaking effects in the pion spectrum with staggered fermions get suppressed by improvement, determinations of fπ and fK in full QCD, of BK in quenched QCD with an outlook towards full QCD results that should become available next year, and of K->ππ and Kl3 decays. He closed by suggesting that the MILC collaboration should create a set of Fat7bar configurations in addition to their ASQTAD configurations to allow people to investigate the better suppression of lattice spacing artifacts expected there.

Next was a talk by Stefan Schaefer about algorithms for dynamical simulations with overlap fermions. While overlap fermions have the advantages of preserving chiral symmetry exactly, possessing automatic O(a) improvement and having a spectrum with an exact relation to gauge field topology via the index theorem, they are extremely expensive to simulate, due to the appearance of the operator sign function in the overlap Dirac operator. One cause of this is that the exact link with topology implies that the overlap operator is discontinuous at the surfaces in the space of gauge connections that separate different topological sectors. Three possibilities to treat this have been proposed: the first is to modify the time evolution algorithm that generates the configurations by taking the existence of these surfaces into account and properly reflecting or refracting a trajectory that would cross them; this has the advantage of being exact, but is very expensive because it requires a full inversion of the overlap operator each time a sector boundary is crossed. The second possibility is to approximate the sign function by some smooth function; this is much easier to implement, but has to deal with large forces near sector boundaries where the approximation becomes steep, and also needs a good approximation of the determinant to work. The third alternative is topology-preserving gauge actions, which are set up so as to disallow transitions between topological sectors. In summary, while a lot of progress has been made, large volumes are still unattainable with overlap fermions at this time.
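Just to illustrate the second option (this is my own toy example, not anything from the talk): a common smooth stand-in is sign(x) ≈ x/√(x²+ε²), whose slope at the origin is 1/ε, so the smaller you make ε in order to be accurate near zero eigenvalues, the steeper the function -- and the larger the molecular-dynamics forces -- becomes:

    import numpy as np

    def smooth_sign(x, eps):
        # smooth stand-in for sign(x); exact as eps -> 0, but the slope at x = 0 is 1/eps
        return x / np.sqrt(x**2 + eps**2)

    x = np.linspace(-0.05, 0.05, 5)
    for eps in (1e-1, 1e-2, 1e-3):
        print(eps, 1.0 / eps, smooth_sign(x, eps))   # eps, slope at zero, sample values near the origin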

After a tea break there was a second plenary session, chaired by Mike Peardon. The first talk, by Kim Splittorff, was about the sign problem in the epsilon regime of QCD at finite chemical potential. The problem there is that at finite chemical potential, the discontinuity of the chiral condensate at zero quark mass cannot be understood in the same terms (via the Banks-Casher relation) as at zero chemical potential, because the eigenvalues can now be complex. Instead, the spectral density also becomes complex and develops oscillations that lead to the discontinuity.

The next speaker was Carlos Pena, who talked about determinations of weak matrix elements using twisted mass lattice QCD, especially about results that the ALPHA collaboration has obtained for BK, and results for BB that are expected next year.

The session was rounded off by Karl Jansen presenting the status of the ILDG. For those not active in the field, the International Lattice Data Grid is a grid framework that allows lattice theorists to share and access their configurations across countries and collaborations by linking the different national grids into a global grid. This requires agreeing on a common data format, a way to describe metadata (such as lattice size, actions used etc.) by means of an XML schema defining a language known as QCDml, and various layers of software linking it all together. The people working on this have done a lot of hard work for the benefit of the lattice community, and by giving people outside the large collaborations access to unquenched configurations on large lattices generated with their action of choice, the ILDG should do a lot to advance the state of the field.

In the afternoon there were two parallel sessions with a break for refreshments and informal conversations in between. I see little point in recounting which talks I went to, since that would at most reflect my biases rather than anything about the work being done by others in general.

In the evening there was an excursion dinner to Old Tucson, a movie set outside Tucson where Westerns have been produced since the 1930s. The excursion featured some nice food, almost unbearable heat, a staged shootout between Western actors, some fairly bizarre and allegedly funny goings-on on the stage of the local saloon, and a bit of stargazing. If that sounds odd, it doesn't even begin to convey how odd it really was (or at least how odd I thought it to be, which again may simply reflect my cultural biases). I might try to obtain some pictures from those who managed to bring their cameras, and if I succeed, I will post them here.

Wednesday, June 14, 2006

Peer review and Trial by jury

There has been a big shouting-match of a debate going on in the physics blogosphere over the last couple of days. The topic under discussion is the role that democracy plays, can play, or should play within science.

Now, it is easy to make up various kinds of strawmen and bash them to death, e.g. the idea of determining the values of the Standard Model parameters by public voting (which nobody advocates), or the notion of a scientific dictatorship where one single person decides on what is science and what isn't (which hopefully also nobody advocates).

To actually perform a serious analysis of what we want the scientific community to look like is much more difficult: on the one hand, there is clearly a lot to be said in favour of a scientific aristocracy of experts; on the other hand, do we really want some small self-recruiting in-group to decide about everyone else's funding, especially given that they will still be human, and hence their decisions may be guided as much by personal like or dislike of a person as by scientific analysis of his or her proposal?

These are not easy issues to discuss and decide (and of course any discussion of them in the blogosphere is going to have virtually no effect at all, given that the physics blogosphere is dominated by lowly postdocs, or at most assistant professors, and hence does not exactly represent the views of the major policy-makers within the community).

Here, I would just like to point out that the use of the word "democracy" may be slightly misleading, at least as far as common connotations go. Most people, when hearing "democracy", will think of voting, and possibly the absence of an individual or group with dictatorial powers, leading quickly to the kind of strawman arguments that dominated this debate. However, there is another crucial feature of (at least British and American) democracy: I'm speaking of trial by jury. This is a profoundly democratic institution; nobody can be found guilty of and punished for a crime, unless he either admits it himself by pleading guilty, or the prosecution manages to convince a panel of twelve people chosen at random from among the accused's peers (rather than some group of politicians or experts) of his guilt.

This is in many ways a much better analogy for the kind of democracy that can, should, and in fact does exist in the scientific community. Peer review is not that different from trial by jury, with the reviewers acting as the equivalent of jurors (randomly chosen peers of the author), and the editor as the equivalent of the judge. There are even appeals, and many journals have a kind of voir dire where potential conflicts of interest are examined before selecting referees. Of course the analogy is not perfect, because there are no opposing parties to the proceedings, but this is (at least in my opinion) a much closer analogy. In fact, in many respects the work of the scientist is somewhat similar to that of the judiciary (weighing evidence and coming to a conclusion), just as it is hugely different from that of the legislative and executive branches (which are used as flawed analogies in the strawman arguments mentioned above).

Comments are welcome.

Update: More on the debate in a new post by Sabine (to whom we extend our warmest congratulations on her recent marriage) on Backreaction.

Friday, June 09, 2006

The Satanic Papers

Since -- surprise, surprise -- no apocalyptic cataclysm occurred on 6/6/6 (which was 6/6/2006 really anyways, and is also variously known as 24/5/2006, 10/9/5766, 10/3/5766, 9/5/1427, 16/3/1385, 16/3/1928, 12.19.13.6.10, 2006-23-2, 2006-157 or even 18.9.CCXIV, not to mention that the count of years A.D. isn't even historically accurate), maybe we should look for the hand of Satan where it has a greater chance of manifesting itself: I mean in the arXiv, of course! So here are the papers bearing the cleverly disguised mark of the beast:

astro-ph/0606006: M. Meneghetti et al., Arc sensitivity to cluster ellipticity, asymmetries and substructures. Studies how the distribution of galaxies within a cluster affects the shape of the arcs created by gravitational lensing by that cluster.

cond-mat/0606006: J.-P. Wüstenberg et al., Spin- and time-resolved photoemission studies of thin Co2FeSi Heusler alloy films. Studies the properties of some material of interest in spintronics.

gr-qc/0606006: S. Deser, First-order Formalism and Odd-derivative Actions. Studies how a separate Palatini-type reformulation of standard and Chern-Simons terms affects electromagnetism and gravity, with some surprising results.

hep-ex/0606006: ICARUS Collaboration (A. Ankowski et al.), Measurement of Through-Going Particle Momentum By Means Of Multiple Scattering With The ICARUS T600 TPC. Describes the collaboration's liquid argon detector and the methods to be used to measure particle momenta with it.

hep-lat/0606006: C. Hagen et al., Search for the Theta^+(1540) in lattice QCD. Looks for the Θ+(1540) pentaquark in a quenched simulation and doesn't find it.

hep-ph/0606006: C.A. Salgado, Overview of the heavy ion session: Moriond 2006. Summary of proceedings.

hep-th/0606006: S. Creek et al., Braneworld stars and black holes. Studies spherically symmetric brane solutions in a Randall-Sundrum scenario.

math-ph/0606006: R.G. Smirnov and P. Winternitz, A class of superintegrable systems of Calogero type. Studies the relationship between the three-body Calogero system in one dimension and a class of three- and two-dimensional superintegrable systems.

nucl-ex/0606006: S. Salur (for the STAR Collaboration), Statistical Models and STAR's Strange Data. Compares various models to the collaboration's data on strange hadron production in p+p and Au+Au collisions.

nucl-th/0606006: L. Platter, The Three-Nucleon System at Next-To-Next-To-Leading Order. Computes the triton binding energy to NNLO in an effective field theory with only contact interactions.

physics/0606006: W.A. Rolke and A.M. Lopez, A Test for the Presence of a Signal. Studies some methods for statistical hypothesis testing.

quant-ph/0606006: F. Cannata and A. Ventura, Scattering by PT-symmetric non-local potentials. Studies one-dimensional scattering by separable non-local potentials and looks at the constraints imposed on the transmission and reflection coefficients by hermiticity and P- and T-invariance.

Not sure if any of this looks particularly satanic, but it was fun to look (however superficially) at papers from archives I would normally never go near.

Quarkonia and MEM

On the arXiv today is a paper by Peter Petreczky about the spectral functions of heavy quarkonia at finite temperature.

People generally expect that at high temperatures, heavy quarkonia will be suppressed, because the gluons will be screened by thermal effects (Debye screening, and possibly chromomagnetic screening as well), leading to an exponential fall-off of the interquark potential at large distances and hence allowing the heavy quarks to drift apart. This suppression of quarkonia is supposed to be an important signature of the formation of a quark-gluon plasma, and hence it is desirable to confirm it in a model-independent way. One way to do this is to look at the spectral functions for the corresponding correlators and to see whether the peaks in the spectral function that correspond to the bound states in that channel broaden and eventually vanish as the temperature is increased.
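Just to make the screening picture concrete (my own toy numbers, not anything from the paper): compared to a Coulomb-like potential, the Debye-screened potential flattens out beyond separations of order 1/mD, so a bound state of that size or larger can no longer be supported.

    import numpy as np

    alpha, m_D = 0.4, 1.5            # made-up coupling and Debye mass, purely illustrative
    r = np.linspace(0.2, 3.0, 8)     # quark-antiquark separations (arbitrary units)

    V_unscreened = -alpha / r
    V_screened   = -alpha / r * np.exp(-m_D * r)   # exponential fall-off from screening

    for ri, vu, vs in zip(r, V_unscreened, V_screened):
        print(f"r = {ri:4.2f}   unscreened {vu:7.3f}   screened {vs:8.5f}")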

The results in this case are that the 1P charmonia (the χc states) do dissolve just above the deconfinement transition, whereas other quarkonia, such as the $$J/\psi$$, appear to persist up to considerably higher temperatures.

Now how do people obtain these kinds of results? The spectral function is the function σ(ω) appearing in the Euclidean periodic-time equivalent of the Källén-Lehmann spectral representation

$$D(t) = \int_0^\infty d\omega \sigma(\omega)\frac{\cosh(\omega(t-\beta/2))}{\sinh(\omega\beta/2)}$$

where the cosh/sinh kernel is the Euclidean finite-temperature correlator for a free particle of mass ω, and β is the extent of the lattice in the Euclidean time direction. So if you have measured the correlator D(t), you just invert this relation to obtain the spectral function, which contains all the information about the spectrum of the theory.

There is one lie in this last sentence, and that lie is the little word "just". The reason is that you are trying to reconstruct a continuous function σ(ω) from a small number of measured data points D(β i/Nt), making this an ill-posed problem.
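To get a feeling for just how ill-posed it is, one can discretise the relation above and look at the resulting linear system. This is my own toy example (lattice sizes and grids invented for illustration), not taken from the paper:

    import numpy as np

    Nt, Nomega = 16, 200
    a = 0.1                                  # toy lattice spacing
    beta = Nt * a                            # Euclidean time extent
    t = np.arange(1, Nt // 2 + 1) * a        # the 8 times at which D(t) is actually measured
    omega = np.linspace(0.05, 10.0, Nomega)  # grid on which we want sigma(omega)
    domega = omega[1] - omega[0]

    # kernel K(t, omega) = cosh(omega (t - beta/2)) / sinh(omega beta / 2),
    # so that D = K @ sigma * domega : 200 unknowns, but only 8 (noisy) data points
    K = np.cosh(np.outer(t - beta / 2, omega)) / np.sinh(omega * beta / 2)

    print(np.linalg.cond(K))   # enormous condition number: naive inversion is hopeless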

The way around this that people use is called the Maximum Entropy Method (MEM) of image restoration, which is also used to restore noisy images in astronomy. (Unfortunately it is bound by the rules of logic and hence cannot do all the wonderful and impossible things, such as looking through opaque foreground objects or enlarging a section to reveal details much smaller than an original pixel, that the writers of CSI or Numb3rs are so fond of showing to an impressionable public in the interest of deterrence, but it is still pretty amazing -- just google and look at some of the "before and after" pictures.)

The basis for MEM is Bayes' theorem

$$P(A|B) = \frac{P(B|A)P(A)}{P(B)}$$

which relates the conditional probability for A given B to that for B given A. Using Bayes' theorem, the probability of having the spectral function σ given the data D and fundamental assumptions H (such as positivity and high-energy asymptotics) satisfies

$$P(\sigma|D,H) \propto P(D|\sigma,H)\, P(\sigma|H)$$

where conventionally P(D|σ,H) is known as the likelihood function (it tells you how likely your data are under the assumptions), and P(σ|H) is known as the prior probability (it tells you how probable a given σ is prior to any observation D). The likelihood function may be taken to be

$$P(D|\sigma,H) = Z \exp\left(-\frac{1}{2}\chi^2\right)$$

where χ2 is the standard χ2 statistic for how well the D(t) computed from σ fits your measured data D(β i/Nt), and Z is a normalisation factor. For the prior probability, one takes the exponential

$$P(\sigma|H) = Z' \exp(\alpha S)$$

of the Shannon-Jaynes entropy

$$S = \int_0^\infty d\omega \left[\sigma(\omega)-m(\omega)-\sigma(\omega)\log\left(\frac{\sigma(\omega)}{m(\omega)}\right)\right]$$

where m is a function called the default model, and α is a positive real parameter.

The most probable "image" σα for given α (and m) is then the solution to the functional differential equation

$$\frac{\delta Q_\alpha}{\delta \sigma_\alpha} = 0$$

where

$$Q_\alpha = \left(\alpha S - \frac{1}{2}\chi^2\right)$$

The parameter α hence parametrises a tradeoff between minimising χ2 and maximising S, which corresponds to making σ close to m. Some MEM methods take α to be an arbitrary tunable parameter, whereas in others, to get the final output σMEM, one still has to average over α with the weight P(α|D,H,m), which can be computed using another round of Bayes' theorem. In practice, people appear to use various kinds of approximations. It should be noted that the final result

$$\sigma_{MEM}(\omega) = \int d\alpha \sigma_\alpha(\omega) P(\alpha|D,H,m)$$


still depends on m, although this dependence should be small if m was a good default model.
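For the curious, here is what the fixed-α part of the procedure might look like in practice. This is my own toy sketch (fake data, a flat default model and invented grid sizes), not the code used in the paper: discretise σ on an ω grid and numerically maximise Qα = αS - χ²/2, starting from the default model.

    import numpy as np
    from scipy.optimize import minimize

    # same cosh/sinh kernel as in the toy example above, with fake "data" from one narrow peak
    Nt, a = 16, 0.1
    beta = Nt * a
    t = np.arange(1, Nt // 2 + 1) * a
    omega = np.linspace(0.05, 6.0, 60)
    domega = omega[1] - omega[0]
    K = np.cosh(np.outer(t - beta / 2, omega)) / np.sinh(omega * beta / 2)

    sigma_true = np.exp(-((omega - 2.0) / 0.1) ** 2)   # one narrow "resonance"
    D = K @ sigma_true * domega
    err = 0.01 * D                                     # pretend 1% statistical errors

    m = 0.1 * np.ones_like(omega)                      # flat default model
    alpha = 1.0                                        # here treated as a fixed tunable parameter

    def neg_Q(a_log):                                  # parametrise sigma = exp(a_log) to keep it positive
        sigma = np.exp(a_log)
        chi2 = np.sum(((K @ sigma * domega - D) / err) ** 2)
        S = np.sum(sigma - m - sigma * np.log(sigma / m)) * domega   # Shannon-Jaynes entropy
        return 0.5 * chi2 - alpha * S                  # maximising Q_alpha = minimising this

    res = minimize(neg_Q, np.log(m), method="L-BFGS-B")
    sigma_MEM = np.exp(res.x)                          # most probable image for this alpha

In a full implementation one would then repeat this for a range of α values and average the resulting images with the weight P(α|D,H,m), as described above.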

This is pretty cool stuff.

Monday, May 29, 2006

Non-Relativistic QCD

This is another installment in our series about fermions on the lattice. In the previous posts in this series we had looked at various lattice discretisations of the continuum Dirac action, and how they dealt with the problem of doublers posed by the Nielsen-Ninomiya theorem. As it turned out, one of the main difficulties in this was maintaining chiral symmetry, which is important in the limit of vanishing quark mass. But what about the opposite limit -- the limit of infinite quark mass?

As it turns out, that limit is also difficult to handle, but for entirely different reasons: The correlation functions, from which the properties of bound states are extracted, show an exponential decay of the form $$C(T,0)\sim e^{-maT}$$, where $$T$$ is the number of timesteps, and $$ma$$ is the product of the state's mass and the lattice spacing. Now for a heavy quark, e.g. a bottom, and the lattice spacings that are feasible with the biggest and fastest computers in existence today, $$ma\approx 2$$, which means that the correlation functions for an $$\Upsilon$$ will decay like $$e^{-4T}$$, which is way too fast to extract a meaningful signal. (Making the lattice spacing smaller is so hard because in order to fill the same physical volume you need to increase the number of lattice points accordingly, which requires a large increase in computing power.)
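To put some numbers to this (my own back-of-the-envelope arithmetic, the values of ma are just illustrative):

    import math

    for ma in (0.5, 4.0):     # a light state vs. an Upsilon at ma ~ 2 per heavy quark
        print(ma, [math.exp(-ma * T) for T in (4, 8, 16)])

For ma ≈ 0.5 the correlator is still of order 10-4 at T = 16 and perfectly measurable, while for ma ≈ 4 it has dropped to around 10-28, far below any statistical precision one could ever hope to reach.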

Fortunately, in the case of heavy quark systems the kinetic energies of the heavy quarks are small compared to their rest masses, as evidenced by the relatively small splittings between the ground and excited states of heavy $$Q\bar{Q}$$ mesons. This means that the heavy quarks are moving at non-relativistic velocities $$v \ll c$$ and can hence be well described by a Schrödinger equation instead of the full Dirac equation after integrating out the modes with energies of the order of $$E \gtrsim M$$. The corresponding effective field theory is known as Non-Relativistic QCD (NRQCD) and can be schematically written using the Lagrangian
$$\mathcal{L} = \psi^\dag \left(\Delta_4 - H\right)\psi$$
where $$\psi$$ is a non-relativistic two-component Pauli spinor and the Hamiltonian is
$$H = - \frac{\bm{\Delta}^2}{2M} + \textrm{(relativistic and other corrections)}$$
In actual practice, this is not a useful way to write things, since it is numerically unstable for $$Ma<3$$; instead one uses an action that looks like
$$\mathcal{L} = \psi^\dag\psi - \psi^\dag\left( 1 - \frac{a\delta H}{2} \right) \left( 1 - \frac{aH_0}{2n} \right)^n U_4^\dagger\left( 1 - \frac{aH_0}{2n} \right)^n \left( 1 - \frac{a \delta H}{2} \right)\psi$$
where
$$H_0 = - \frac{\bm{\Delta}^2}{2M}$$
whereas $$\delta H$$ incorporates the relativistic and other corrections, and $$n\ge 1$$ is a numerical stability parameter that makes the system stable for $$Ma>3/(2n)$$.

This complicated form makes NRQCD rather formidable to work with, but it can be and has been successfully used in the description of the $$\Upsilon$$ system and in other contexts. In fact, some of the most precise predictions from lattice QCD rely on NRQCD for the description of heavy quarks.
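To make the nested structure a bit more concrete, here is a minimal free-field sketch of the corresponding evolution equation, advancing the two-component field one time slice at a time. This is my own illustration (gauge links set to unity, δH dropped, all parameter values invented), not any collaboration's production code:

    import numpy as np

    L, M, a, n = 8, 2.0, 1.0, 2      # toy lattice size, heavy-quark mass in lattice units, spacing, stability parameter
    rng = np.random.default_rng(0)
    psi = rng.standard_normal((2, L, L, L)) + 1j * rng.standard_normal((2, L, L, L))   # Pauli spinor field

    def laplacian(f):
        # free-field lattice Laplacian; with gauge fields each shift would be multiplied by a link variable
        out = -6.0 * f
        for axis in (1, 2, 3):
            out += np.roll(f, 1, axis) + np.roll(f, -1, axis)
        return out

    def apply_H0(f):
        return -laplacian(f) / (2.0 * M)

    def evolve_one_step(psi):
        # psi(t+1) = (1 - a dH/2)(1 - a H0/2n)^n U4^dag (1 - a H0/2n)^n (1 - a dH/2) psi(t),
        # with dH = 0 and U4 = 1 in this free-field sketch, i.e. 2n identical factors in a row
        for _ in range(2 * n):
            psi = psi - a * apply_H0(psi) / (2 * n)
        return psi

    psi_next = evolve_one_step(psi)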

It should be noted that the covariant derivatives in NRQCD are nearest-neighbour differences -- the reasons for having to take symmetric derivatives don't apply in the non-relativistic case; hence there are no doublers in NRQCD.

Wednesday, May 24, 2006

Calling lattice bloggers!

As pointed out in Matthew's last post so far, this is the world's first (and only, hence best) group blog devoted to Lattice QCD. Unfortunately, Matthew's new job does not leave him too much time for blogging; therefore I'm running this blog all alone at the moment, which leads to the relatively low activity seen in recent weeks.

So I was wondering if there are any other lattice people out there who would like to join this blog and post here about their research work, the most recent papers on the arXiv or in the journals, interesting developments in the field, science in the news, and any other matters appropriate for a physics blog (as opposed to a mere physicists' blog). It would be great if this blog saw a little more activity!

Tuesday, May 23, 2006

Physics fun with sunglasses

Yesterday was a public holiday (Victoria Day) in Canada, and (as opposed to last year, when I went to my windowless office at the university in blissful ignorance of the Canadian holiday schedule, wondered why it was so empty and the lights on the corridors were off, and only figured it out when I was unable to obtain any lunch in the food court) I got to enjoy the sunshine on a lovely day.

I had completely forgotten how much fun sunglasses could be: I have these sunglass things (I don't really know what the technical term for them is) that can be clipped to my glasses to turn them into sunglasses; what makes them so much fun is that they really are nothing but polarisation filters! Of course polarisation filters make great sunglasses because the sunlight is unpolarised, and because the polarisation filter does not introduce a colour bias like an old-fashioned green or brown filter would. But as everybody remembers from their undergraduate optics course, light reflected from surfaces is partially polarised, and the same is true for scattered light. Therefore, when wearing your polarisation filters/sunglasses, the brightness of the road surface and of the blue sky will vary as you tilt your head towards the right or the left, which is quite fascinating. Unfortunately, other people will probably consider you to be crazy if they see you tilting your head from side to side while stepping forward and backward trying to determine the Brewster angle, or turning around your own axis trying to precisely locate the spot of maximal polarisation in the sky (which is how bees detect the direction towards the sun even if the sun itself is behind a cloud) -- but a real physicist shouldn't mind, right?
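For the record, the two textbook formulas at play here (with my own illustrative numbers): the Brewster angle θB = arctan(n2/n1), at which light reflected off a dielectric surface is completely polarised, and Malus's law I = I0 cos²θ for what the filter itself transmits.

    import math

    n_air, n_water = 1.0, 1.33
    theta_B = math.degrees(math.atan(n_water / n_air))
    print("Brewster angle for a water surface:", round(theta_B, 1), "degrees")   # about 53 degrees

    for theta in (0, 30, 60, 90):   # angle between the light's polarisation and the filter axis
        print(theta, round(math.cos(math.radians(theta)) ** 2, 3))               # transmitted fraction (Malus's law)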

So I got to feel like an experimentalist for a little while, while also taking a pleasant walk in the park, sipping lime juice on a terrace above Wascana Lake and generally enjoying myself, all thanks to great weather and Her Majesty's official birthday in Canada: God save the Queen!

Oh, and of course those sunglasses are real fun to use with LCDs, too...

Thursday, May 18, 2006

Analytical (3+1)d Yang-Mills and ontology

A little while ago, there were two papers by Leigh, Minic and Yelnikov, in which they expanded on the previous work done by Karabali, Kim and Nair towards an analytical solution of (2+1)-dimensional pure Yang-Mills theory. By re-expressing the theory in terms of appropriate variables, they were able to find an ansatz for the vacuum wavefunctional in the Schrödinger picture that could be evaluated analytically, enabling them to extract the spectrum of glueball masses. But can the same be done for the physical case of (3+1) dimensions?

In this paper, Freidel, Leigh and Minic seem to say "probably". Their generalisation to (3+1) dimensions is based on the idea of "corner variables", which are essentially untraced Wilson loops lying within the coordinate planes which go through the point at infinity. If the theory is expressed in terms of these, there are a lot of formal algebraic analogies with the (2+1)-dimensional case, which renders them hopeful that it may be possible to treat the (3+1)-dimensional theory in an analogous fashion. In this case the only problem left to solve would be to determine the kernel appearing in the ansatz for the wavefunctional.

There seems, however, to be a very important difference between the (2+1)d and (3+1)d cases, which they also mention but appear to consider as a relatively minor inconvenience that will be worked out: in (2+1) dimensions, the gauge coupling has a positive mass dimension, $$[g_3^2]=[\textrm{mass}]$$, so the generation of a mass gap is expected on dimensional grounds just from looking at the Lagrangian, and it is even possible to compute the mass gap semi-perturbatively using self-consistent approximations. In (3+1) dimensions, there is no dimensionful parameter in the Yang-Mills Lagrangian, so the existence of a mass gap is a genuine surprise. Of course an arbitrary mass scale will be introduced by regularisation, but even if this mass scale cancels from all mass ratios (as Freidel et al. appear to assert it will), its arbitrariness still means that the overall mass scale of the theory will remain completely undetermined by the kind of analysis they propose. I am not sure if this can be a consistent situation.
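To spell out the dimensional argument (standard power counting, nothing specific to the paper): with the action normalised as

$$S = \frac{1}{2g^2}\int d^dx\,\mathrm{tr}\,F_{\mu\nu}F^{\mu\nu}, \qquad [F_{\mu\nu}^2]=[\textrm{mass}]^4,\quad [d^dx]=[\textrm{mass}]^{-d} \;\Rightarrow\; [g^2]=[\textrm{mass}]^{4-d}$$

so in d=3 spacetime dimensions g2 carries one power of mass, while in d=4 it is dimensionless and any mass scale has to be generated dynamically through dimensional transmutation.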

The corner variables they use reminded me of a talk by the philosopher Holger Lyre given at a physics conference in Berlin in 2005. He discussed the Aharonov-Bohm effect and exhibited three possible ways of interpreting electrodynamics ontologically, which he called the A-, B- and C-interpretations. In the A-interpretation, the gauge potential A is assumed to be a real physical field: that is probably what most working physicists would reply when asked for the first time, and it has the advantage of making the locality of the interaction explicit; on the other hand, how can a quantity that depends on an arbitrary gauge choice be physically real? In the B-interpretation, the field strength B (and E) is considered to be physically real; this means physical reality is gauge-invariant, as it should be, but the interaction with matter becomes maximally nonlocal, which is very bad. In the C-interpretation, finally, the holonomies (C is for curves) of the gauge connection are taken to be the only physically real part of the theory: this leads to gauge-invariance and a form of locality (not a point interaction, but a Nahewirkungsprinzip, i.e. a principle of action by contact). Ultimately, the C-interpretation would therefore appear to be the most palatable ontology of gauge theories. Finding a quantum formulation of gauge theories in the continuum that contains only Wilson loops as variables would be very desirable from this philosophical point of view alone, even if it does not lead to an analytical solution.

Friday, May 05, 2006

Language-dependent spectra

I just noticed that the sequence of colours in the visible electromagnetic spectrum seems to be named differently in English and German. In English, it generally appears to be red/orange/yellow/green/blue/indigo/violet (as in the mnemonic "Richard of York gave battle in vain."), whereas in German, it appears to be red/orange/yellow/light-green/dark-green/blue/violet (at least that is how I remember learning it in elementary school).

Now I wonder what the basic colours of the visible spectrum are called in other languages. In particular, I suppose they are rather different in languages that divide parts of the colour space differently anyway (such as, I believe, Gaelic and Russian, and probably lots of non-Indoeuropean languages). Does anybody have examples of how the spectrum is "different" in other languages?

Update: As a clarification: German-speakers don't call blue "dark-green" -- it's just that the conventional rendition of the colour spectrum in German splits the green part into two colour bands, whereas the English one does the same to the blue part. And a Franco-Canadian told me that in (Canadian) French it just is red/orange/yellow/green/blue/violet (six colours only).