I’m halfway through the second week of my stay at the Aspen Center for Physics, where we have been discussing “Tightening the Gap Between Scattering Amplitudes and Events at the LHC at Higher Orders”.
What that somewhat wordy title means is that a mix of people, ranging from quite formal mathematical physicists through to experimentalists like me, have been exchanging ideas about how we can more effectively make precise predictions from the Standard Model of particle physics and confront them with data from CERN’s Large Hadron Collider (LHC). The people who made some of the calculations we compared to in my previous post are here, for example.

It’s a beautiful place.

The whole topic of this workshop will be crucial for the success of the LHC programme over the next several years, as we try to quantify how well the Standard Model describes nature at energies much higher than those at which it was developed.
Of course, we kind of hope that it doesn’t describe nature, and that the discrepancies give us some clues to a bigger, better theory that answers some of the questions the Standard Model leaves open. Either way, having precise predictions to match the precise data we expect is essential. There are many challenges in achieving that: the calculations become rapidly more complex, and often computationally expensive, as we improve and reduce the approximations we have to make.

As I type this I am sitting in the corner of a room while a dozen or more colleagues discuss the limitations of how they estimate the uncertainties on their results. It’s quite technical, so I will leave that there and instead point you at a public talk I gave here last week, which is hopefully non-technical enough to make it clear why we care.