Margin of Error

We have achieved 100k COVID-19 tests in a day. Hurrah! Bang the pots and pans and stuff.

There’s a certain amount of cynicism, with even the BBC fact-checking itself. Some of the tests are in the post, and several were on the same person. But there’s enough grounding in reality for the claim to be made (it doesn’t take much), and actually I find myself wondering about Matt Hancock.

From an outsider, non-expert point of view, increasing testing capacity seems vital in essentially all scenarios, so you would think it would be a government priority regardless. But then replenishing the UK’s PPE stockpile would seem to have been an equally obvious priority almost from the beginning of the year, and no-one did it.

So maybe the arbitrary round number of 100k tests was a necessary political move. Perhaps it was only by setting some kind of eye-catching, high-profile challenge that he could actually make his own government pay enough attention? In that scenario, which may be over-generous to him and overly harsh on others, he played a blinder. As long as the capacity has actually increased, whether it’s 100k or not is within the margin of error, and at least the media fan club are banging on about that rather than the fact that Boris Johnson has increased the number of children he has fathered — also within the margin of error, apparently.

Arbitrary metrics, milestones and deadlines are a well-worn management tool. They get used a lot on my experiment, ATLAS. Used well, they can keep things moving, and perhaps more importantly, let you know whether your estimates were realistic (they rarely are) and where more effort is needed if the timeline is to be met. Used badly, they keep everyone in a constant state of stress over the next conference, data-deletion or software release, and they stop people ever taking the time to step back and think about what they’re doing, and maybe do it better or differently rather than urgently.

Two traps of research are, on the one hand, asymptotically approaching the perfect analysis but never actually delivering anything, and on the other, rushing blindly to get things done without considering carefully enough how and why you are doing them. Steering between the two is exhilarating, and I would say it is one of the main skills to be learned in order to deliver a PhD thesis. (I did a virtual PhD viva this week — Lewis has learned it.)

Time may tell whether this 100k thing was a useful goal to set, or a silly gloss on something that would have happened anyway. I tend to think it was a silly gloss on something that should have happened anyway but might not have without the silly gloss, such is the terrible state of our politics.

If you’re hankering for more physics, Riccardo has some cautionary tales on why we need to be careful, whatever deadlines we may set.

 


1 Response to Margin of Error

  1. Peter Hobson says:

    You have summarised the challenge of the PhD very clearly and concisely. In my PhD analysis (looking for decays of charmed hadrons) I thought I had a method to measure from data the background distribution as an alternative to our simple – we are talking 1983/4 computing here – Monte Carlo simulation. Was I surprised (and pleased) when I also found the signal and thus provided a semi-independent cross-check of the “golden” analysis 🙂
