If you don’t know what the Imperial College model is, you probably know the number: 2.2 million deaths.
That’s what the mathematical model from the U.K. university predicted the United States would face if politicians and people did absolutely nothing differently. This result was misinterpreted — often deliberately — to state that we were on a collision course for that many deaths no matter what.
That was never true, of course. Now, computer programmers say the Imperial College model itself may never have been reliable in the first place, regardless of whether lockdowns were put into place.
A May 16 piece in The Telegraph questioned the computer modeling that Imperial College’s Neil Ferguson used to come up with the “totally unreliable” model, arguing the language it was programmed in was old enough that it made serious testing of the data practically impossible.
Ferguson, of course, is better known for two things now. First, he was a central figure in the U.K.'s response to the coronavirus pandemic. Second, the reason we refer to his leadership in the past tense is that he decided the lockdown rules he'd helped impose on Britain didn't apply to him; after his visits to his lover's apartment were discovered, Ferguson stepped down.
However, his modeling has been considered mostly unimpeachable until now. David Richards and Konstantin Boudnik, both with distributed computing firm WANdisco, are trying to change that.
“In the history of expensive software mistakes, Mariner 1 was probably the most notorious. The unmanned spacecraft was destroyed seconds after launch from Cape Canaveral in 1962 when it veered dangerously off-course due to a line of dodgy code,” their article last week began.
“But nobody died and the only hits were to NASA’s budget and pride. Imperial College’s modelling of non-pharmaceutical interventions for Covid-19, which helped persuade the UK and other countries to bring in draconian lockdowns, could supersede the failed Venus space probe and go down in history as the most devastating software mistake of all time, in terms of economic costs and lives lost.”
All right, but surely the two events have little in common beyond the authors’ belief that both were in error, right? Not quite. Despite the fact that Mariner 1 was destroyed nearly 60 years ago, both projects used the same programming language.
“Imperial’s model appears to be based on a programming language called Fortran, which was old news 20 years ago and, guess what, was the code used for Mariner 1. This outdated language contains inherent problems with its grammar and the way it assigns values, which can give way to multiple design flaws and numerical inaccuracies. One file alone in the Imperial model contained 15,000 lines of code,” the article read.
“Try unravelling that tangled, buggy mess, which looks more like a bowl of angel hair pasta than a finely tuned piece of programming. Industry best practice would have 500 separate files instead. In our commercial reality, we would fire anyone for developing code like this and any business that relied on it to produce software for sale would likely go bust.”
That didn’t happen with Neil Ferguson’s model.
The problem isn’t just the use of a superannuated computer language, either. Software engineering has a design principle known as “separation of concerns,” which dates back to the 1970s and is extremely difficult to enforce in old-style Fortran. It essentially means that each part of a program is kept separate from the others.
This kind of compartmentalization makes it easy to test whether a given part of a model contains an error, whether in its code or in its bedrock assumptions.
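To illustrate what that compartmentalized testing looks like, here is a minimal, hypothetical sketch in Python. This is not Imperial’s code; the function names and formulas are invented for illustration. The point is that when each model step lives in its own small function, its assumptions can be checked directly, without running the whole simulation.

```python
# Hypothetical illustration of "separation of concerns":
# each step of a toy epidemic model is a small, self-contained
# function that can be tested in isolation.

def new_infections(current_infected: float, contact_rate: float,
                   transmission_prob: float) -> float:
    """Expected new infections per day from the currently infected."""
    return current_infected * contact_rate * transmission_prob

def step(susceptible: float, infected: float, contact_rate: float,
         transmission_prob: float, recovery_rate: float):
    """Advance the toy model one day, built from testable pieces."""
    fresh = min(new_infections(infected, contact_rate, transmission_prob),
                susceptible)  # can't infect more people than remain susceptible
    recovered = infected * recovery_rate
    return susceptible - fresh, infected + fresh - recovered

# Because new_infections is isolated, its behavior can be verified
# directly, one assumption at a time:
assert new_infections(0, 10.0, 0.05) == 0.0      # no cases, no spread
assert new_infections(100, 10.0, 0.05) == 50.0   # linear in infected count
```

A monolithic 15,000-line file offers no equivalent seam at which to place such checks, which is the authors’ complaint.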
“Without this separation, it is impossible to carry out rigorous testing of individual parts to ensure full working order of the whole. Testing allows for guarantees. It is what you do on a conveyer belt in a car factory. Each and every component is tested for integrity in order to pass strict quality controls,” Richards and Boudnik wrote.
“Only then is the car deemed safe to go on the road. As a result, Imperial’s model is vulnerable to producing wildly different and conflicting outputs based on the same initial set of parameters. Run it on different computers and you would likely get different results. In other words, it is non-deterministic.”
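The non-determinism the authors describe, identical parameters producing different outputs, typically comes from uncontrolled randomness in a stochastic simulation. The toy sketch below is an invented illustration, not Imperial’s model: an unseeded random number generator makes each run depend on the random stream, while fixing the seed makes runs reproducible and therefore comparable.

```python
import random

# Invented toy stochastic model: each infected person has probability
# p_infect of infecting one new person per day.

def run_model(days: int, p_infect: float, seed=None) -> int:
    rng = random.Random(seed)  # seed=None -> different stream every run
    infected = 1
    for _ in range(days):
        infected += sum(1 for _ in range(infected)
                        if rng.random() < p_infect)
    return infected

# With a fixed seed, the run is reproducible: same parameters, same output.
assert run_model(30, 0.1, seed=42) == run_model(30, 0.1, seed=42)

# Without a seed, two runs with identical parameters will often disagree,
# which is the behavior the authors call "non-deterministic":
a, b = run_model(30, 0.1), run_model(30, 0.1)
```

Stochastic output is not itself a flaw, but without seeding (and without isolated, testable components) there is no way to tell a legitimate random spread of outcomes from a bug.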
Given this inherent flaw in the model, the authors said it “screams the question as to why our Government did not get a second opinion before swallowing Imperial’s prescription.”
It wasn’t just the United Kingdom that swallowed the Imperial study without checking it first. There were plenty of other problems with how it was handled. For starters, that 2.2 million number was bandied about as if it were what would happen unless we took drastic, draconian steps. But that’s not what the study said.
However, the fact that the study’s underlying assumptions couldn’t be checked in isolation was a major methodological flaw. So much of our policymaking hinged on what was, in large part, a scare piece, and given how the model was assembled, we couldn’t even verify it properly.
“No surgeon would put a pacemaker into a cardiac patient knowing it was based on an arguably unpredictable approach for fear of jeopardising the Hippocratic oath,” Richards and Boudnik wrote. “Why on earth would the Government place its trust in the same when the entire wellbeing of our nation is at stake?”
Ferguson is no longer in a position of power, thanks to his own inability to follow the social distancing rules he championed. That, at the very least, is a good thing. Given this piece, however, the question should be why he was ever in a position to do away with our rights in the first place.