
Why Ferguson’s Lockdown Model Is A Load Of Hooey – Humans



The Ferguson – or Imperial – coronavirus model is a load of hooey. But not, or not only, for the reasons generally given: that it's a tangled mess of code that doesn't even produce the same answer each time. Nor because its output was so useless that even its originator wouldn't obey the implied rules from its use when seeking a shag.

No, Ferguson failed because his model failed to include human beings in it. Which is really very weird indeed when attempting to model, erm, human beings.

Dominic Lawson has one side of the argument today:

In short, the unprecedentedly draconian policies the government launched on March 23 were following, not leading, public behaviour. The same was true in other nations, regardless of exactly how tight, or not, the so-called lockdowns were. As Jonathan Kay, the Canadian editor of the online magazine Quillette, wrote: “I find the lockdown debate so phoney. It’s been fuelled on both sides by the presumption that government decrees work as a sort of magic wand that will bring our economies . . . back to life. But the data suggest there is no such magic wand. Much of the lockdown effect was imposed not by top-down fiat, but through millions of small decisions made every day by civic groups, employers, unions, trade associations, school boards and, most importantly, ordinary people. Here in Toronto . . . I know relatively few people whose decision to work from home (or not work at all) was dictated by government order.”

This is all entirely true. Lawson uses it to point out that the damage to the economy has not all been caused by the lockdown but at least in part by that change in behaviour. We are all Keynesians now: the facts changed, so we changed our minds.

The obvious implication is that the lockdown has been very much less costly than generally thought, because the effects of the lockdown are only those additional losses that stem from it. We must subtract the losses from our initial, unforced changes in behaviour from the total to get to the additional costs of the lockdown.


But there’s a corollary to this too. We changed our behaviour without being forced to – therefore we cannot attribute the changes in behaviour entirely, at least, to the lockdown. Which is why Ferguson’s model was wrong. For it had only two states, lockdown or normal. It did not contain reality: changes in behaviour without a lockdown.

Bit of a pity that, eh? We run the entire nation on models that we know are wrong?



  1. Sure, the model is wrong: it predicted 510k deaths in the UK; we had only 55k excess winter deaths, of which 36k were attributed to COVID-19.
    How wrong? The model appears, as of now, to be off by a factor of ten, assuming the conditions of the unmitigated model prediction had been met, which they have not been;
    If total UK SARS-CoV-2 exposure is 1% now, with 55k deaths, the model is too optimistic by a factor of ten.
    If total UK SARS-CoV-2 exposure is 10% now, with 55k deaths, the model is right.
    If total UK SARS-CoV-2 exposure is 100% now, with 55k deaths, the model is too pessimistic by a factor of ten.
    Have any of the critiques of the model been able to provide the total SARS-CoV-2 exposure? Without that validated figure, no criticism of the model is scientifically valid; it's all just tosh.
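The commenter's arithmetic can be sketched as a back-of-envelope check. All figures here come from the comment itself (510k predicted, 55k observed), and the linear scaling of deaths with exposure is the comment's simplifying assumption, not validated epidemiology:

```python
# Back-of-envelope check of the exposure argument above.
# Figures and the linear-scaling assumption are the comment's, not validated data.
PREDICTED_UNMITIGATED_DEATHS = 510_000  # model's unmitigated UK projection
OBSERVED_DEATHS = 55_000                # excess winter deaths cited above

def implied_model_error(exposure_fraction):
    """Extrapolate observed deaths to 100% exposure and compare to the model.

    Returns model_prediction / extrapolated_deaths:
      < 1  -> model was too optimistic (understated deaths)
      ~ 1  -> model was about right
      > 1  -> model was too pessimistic (overstated deaths)
    """
    extrapolated_full_exposure_deaths = OBSERVED_DEATHS / exposure_fraction
    return PREDICTED_UNMITIGATED_DEATHS / extrapolated_full_exposure_deaths

for f in (0.01, 0.10, 1.00):
    print(f"exposure {f:.0%}: extrapolated deaths {OBSERVED_DEATHS / f:,.0f}, "
          f"model/extrapolated = {implied_model_error(f):.2f}")
```

At 1% exposure the ratio is about 0.09 (model ten times too optimistic), at 10% about 0.93 (roughly right), at 100% about 9.3 (ten times too pessimistic) – which is exactly the commenter's point that the verdict hinges entirely on the unknown exposure figure.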

  2. I’m not an epidemiologist but I have coded (financial) models in the past.

    I’ve not looked at their model, so I don’t know how far they take the complexity of their simulation. It’s entirely possible, or even probable, that the model does in fact take self-imposed changes of behaviour in a pandemic into account, prior to and during a lockdown. It’s plausible that the model includes an early over-reaction to the epidemic and also incorporates a later reduction in compliance as fatigue sets in. I’d be surprised if it doesn’t, in fact, as such fatigue seems to have been a primary concern influencing the decision on the timing of the start of lockdown, apparently in an attempt to avoid people being cloistered when the chance of infection was very low, and then starting to let their guard down later on, when the chances of infection were much higher, resulting in a large peak in infections. (The change of the slogan to “Stay alert” makes perfect sense given that possibility).

    I understand that their model includes multiple points where randomness is used to simulate human behaviour and other factors affected by chance. The criticism that the model “doesn’t even produce the same answer each time” is not a valid one if you are talking about the normal use of the code, which includes stochastic elements by design. In fact, identical outcomes would be seen as a sign of very poor model design precisely because the model’s outputs depend on simulating the results of multiple semi-random factors, including human behaviour.

    The non-repeatability criticism is valid in the contexts of code testing and validation. Best practice demands that the code should have been designed from its inception to have a test mode, where the model can be run with sensible, fixed values being used, instead of the random numbers generated in the code when it is run in live mode. Without a test mode, it becomes almost impossible to check that small additions or bug fixes don’t introduce unexpected changes in model output.
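The test-mode idea described above can be sketched in a toy model. This is not the Imperial code; the function, parameters, and infection rule are all invented for illustration. The point is only the design pattern: inject the random-number generator, so live runs get fresh randomness while tests pass a fixed seed and get byte-for-byte repeatable output:

```python
import random

def run_epidemic_step(infected, contact_rate, transmit_prob, rng=None):
    """One toy infection step: each infected person meets contact_rate others,
    and each contact transmits with probability transmit_prob.

    The rng injection is the point: live callers pass nothing and get fresh
    randomness; tests pass a seeded random.Random so the run is repeatable.
    (Invented toy model for illustration -- not the Imperial code.)
    """
    rng = rng or random.Random()
    new_cases = sum(
        1
        for _ in range(infected * contact_rate)
        if rng.random() < transmit_prob
    )
    return infected + new_cases

# Live mode: a different answer each run, by design.
live = run_epidemic_step(100, 10, 0.05)

# Test mode: a fixed seed makes the run exactly repeatable, so small
# additions and bug fixes can be checked against known output.
a = run_epidemic_step(100, 10, 0.05, rng=random.Random(42))
b = run_epidemic_step(100, 10, 0.05, rng=random.Random(42))
assert a == b
```

With this structure, a regression test is one line: run the seeded model before and after a refactor and compare the outputs exactly.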

    It’s likely they tested by looking at the aggregate of multiple runs. This is valid to some extent, in that they could look for unexpected changes in average behaviour, but debugging the code must have been very difficult, and subtle errors would have been very hard to spot.

    The more complex the model becomes, the more necessary continuous testing becomes – but also the more difficult and time-consuming it becomes to go back and add the necessary test code, which itself needs to be tested.

    It has been pointed out in various critiques of the model that academia is a terrible environment in which to develop complex applications of any kind. Many projects grow organically, and have constantly changing requirements and very limited budgets, making the burden of keeping code in a properly testable state significant, especially where management puts an understandable priority on model outputs rather than testability. That doesn’t excuse the problems with the code base, which apparently was in an almost unreadable state, but it does make the likely causes of this more understandable.

    I have some sympathy with Ferguson and his team, especially as at least some of the criticism they have received ignores the reality of how much, if not most, software development is actually done. Standards have improved since their work on the model started. These days, it is becoming more usual to release code and results together when publishing a paper, allowing the replication and checking of both by others. This requires a real cultural shift, as code has been seen as intellectual property to be guarded. Again, this is understandable given the amount of work that goes into software projects, and the competitive nature of academia. My hope is that the owners of many projects, including models which compete with Ferguson’s, will be reviewing their own code even as I type this. If Ferguson’s embarrassment results in better coding practices being adopted in academia generally, then some good will have come from it.

  3. Based on my experience here in the U.S., I’d say that most (by a large margin) of the benefit attributed to Lockdown really was due to changes people made on their own or would have if they hadn’t been compelled to do something else. When you include the harmful aspects of Lockdown – like forcing people inside in small spaces in urban areas – it may have done net harm, and I’m speaking solely of virus deaths here, not the other horrors (suicides, homicides, overdoses…).

    If the government had provided info and guidance, but not Lockdown, people would have modified their behavior and we’d have gotten almost all of the benefits of Lockdown with a much smaller negative impact.

