Supercomputers help explain why Universe has almost no anti-matter
Powerful supercomputers have shed light on the behaviour of key sub-atomic particles, in a development that could help explain why there is almost no anti-matter in the Universe.
An international collaboration of scientists, including physicists from the Universities of Edinburgh and Southampton, has reported a landmark calculation of the decay of an elementary particle called a kaon, using breakthrough techniques on some of the world’s fastest supercomputers.
The calculation took 54 million processor hours on the IBM BlueGene/P supercomputer at the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory in the US.
The new research, reported in the March 30 issue of Physical Review Letters, represents an important milestone in understanding kaon decays, a fundamental process in physics. It is also inspiring the development of a new generation of supercomputers that will allow the next step in this research.
“It has taken several decades of theoretical developments and the arrival of very powerful supercomputers to enable physicists to control the interactions of the quarks and gluons, the constituents of the elementary particles, with sufficient precision to explore the limits of the standard model and to test new theories,” says Chris Sachrajda, Professor of Physics at the University of Southampton, one of the members of the research team publishing the new findings.
“The present calculation focuses on the fundamental question of how we arrived at a universe composed almost exclusively of matter with virtually no antimatter, but the theoretical and computational techniques of Lattice Quantum Chromodynamics (see below) will also be central to unraveling the underlying framework behind the discoveries anticipated at the Large Hadron Collider at CERN.”
The next generation of IBM BlueGene/Q machines is expected to have 10 to 20 times the performance of the current machines:
“With this dramatic boost in computing power we can get a more accurate and complete version of the present calculation, and other important details will come within reach,” said Dr Peter Boyle, University of Edinburgh. “This is a nice synergy between science and the computer — the science pushing computer developments and the advanced computers pushing science forward, to the benefit of the science community and also the commercial world.”
The process by which a kaon decays into two lighter particles known as pions was explored in a 1964 Nobel Prize-winning experiment. This revealed the first experimental evidence of a phenomenon known as charge-parity (CP) violation — a lack of symmetry between particles and their corresponding antiparticles that may explain why the Universe is made of matter, and not antimatter.
When kaons decay into lighter pions, the constituent sub-particles known as quarks undergo changes brought about by the weak force, which operates only over extremely short distances. As the quarks move apart, they exchange gluons, the particles that bind them together into the pions.
The computations are performed using the techniques of lattice quantum chromodynamics (QCD, the theory that describes the fundamental quark-gluon interactions), in which space-time is represented in the computer as a finite grid of points. The problem of calculating the decay rate then reduces to a statistical sampling problem, which is solved with the Monte Carlo method. The present calculation extends the range of lattice QCD calculations to a new class of process: weak decays with two strongly interacting particles in the final state.
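As a rough illustration of the sampling idea described above (not the collaboration’s actual lattice QCD code, which is vastly more sophisticated), the toy Python sketch below places a simple one-dimensional scalar field on a finite grid and estimates an observable by Metropolis Monte Carlo sampling. The lattice size, mass parameter and number of sweeps are arbitrary choices made purely for illustration.

```python
# Toy illustration of lattice Monte Carlo: put a field on a finite grid of
# points and estimate an observable by averaging over sampled configurations.
# This is a free 1D scalar field with a local Metropolis update; all
# parameters below are illustrative, not taken from the kaon calculation.
import numpy as np

rng = np.random.default_rng(0)

N = 64           # number of lattice sites (the "finite grid" of points)
mass2 = 0.5      # toy mass-squared parameter
n_sweeps = 2000  # Monte Carlo sweeps over the lattice
n_therm = 500    # sweeps discarded for thermalisation
step = 0.5       # Metropolis proposal width

phi = np.zeros(N)  # field values on the lattice (periodic boundary)

samples = []
for sweep in range(n_sweeps):
    for i in range(N):
        old = phi[i]
        new = old + step * rng.uniform(-1, 1)
        left, right = phi[(i - 1) % N], phi[(i + 1) % N]
        # Change in the Euclidean action from updating this single site
        dS = (0.5 * ((new - left) ** 2 + (right - new) ** 2 + mass2 * new ** 2)
              - 0.5 * ((old - left) ** 2 + (right - old) ** 2 + mass2 * old ** 2))
        # Metropolis accept/reject: configurations are sampled with weight exp(-S)
        if dS < 0 or rng.random() < np.exp(-dS):
            phi[i] = new
    if sweep >= n_therm:
        samples.append(np.mean(phi ** 2))

# Statistical estimate of <phi^2> with a naive error bar
print("Monte Carlo estimate of <phi^2>:", np.mean(samples),
      "+/-", np.std(samples) / np.sqrt(len(samples)))
```

In the real calculation the grid is four-dimensional, the degrees of freedom are quark and gluon fields, and the statistical averages needed for the kaon decay amplitude require the millions of processor hours quoted above; the sketch only conveys the grid-plus-sampling structure of the method.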
Whilst the calculation reported here has determined fundamental quantities necessary for an understanding of the matter-antimatter asymmetry, it also marks the beginning of the next phase of the collaboration’s work. This will involve improving the precision of the computations and extending the range of physical quantities for which the effects of the strong nuclear force can be quantified.
Comparing experimental measurements of rare processes with the predictions of the standard model is a powerful tool for searching for signatures of new physics and for discriminating between proposed theories. Lattice QCD will be a central tool in these studies, but in most cases even more computing power is required.
Dr Peter Boyle, University of Edinburgh, who co-authored the paper, said: “Fortunately the next generation of IBM supercomputers is being installed over the next few months in many research centres around the world, including the BlueGene/Q at Edinburgh, part of the DiRAC (Distributed Research utilising Advanced Computing) facility of which both the Edinburgh and Southampton groups are members, as well as at the ALCF, the KEK laboratory in Japan, the Brookhaven National Laboratory and the RIKEN Brookhaven Research Center (RBRC) in the US.”
The project was carried out by physicists from the Brookhaven National Laboratory, Columbia University, the University of Connecticut, the University of Edinburgh, the Max-Planck-Institut für Physik, RBRC, the University of Southampton and Washington University.
The calculations were performed under the U.S. Department of Energy’s (DOE) Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program on the Intrepid BlueGene/P supercomputer at the ALCF at Argonne National Laboratory and on the Ds Cluster at Fermi National Accelerator Laboratory, computing resources of the U.S. QCD Collaboration. Part of the analysis was performed on the Iridis Cluster at the University of Southampton and the DiRAC Cluster at the University of Edinburgh.
The research was supported by DOE’s Office of Science, the U.K.’s Science and Technology Facilities Council, the University of Southampton, and the RIKEN Laboratory in Japan.