So much for the data. What about the assumptions of the models? These are many and complex, including, among other things, ideas about virulence, infection rates and population susceptibility, all of which are supported only weakly, if at all, by directly measured evidence. But to give an example from left field (which is exactly the sort of thing that destroys predictions): what do the models say about transmission between humans and animals? Apparently a tiger in a zoo has caught Covid-19 (what this implies about the two-metre rule I don’t know). Could our cats therefore be susceptible to the disease, and could they spread it between us? If so, would that make a difference to the validity of the model? Of course it could. Did the model predict or discuss this? Of course it didn’t.
More surprisingly perhaps, the Imperial College paper published on 30 March states that ‘Our methods assume that changes in the reproductive number – a measure of transmission – are an immediate response to these interventions being implemented rather than broader gradual changes in behaviour’ (my emphasis). That is to say: in this study, if virus transmission slows, it is ‘assumed’ that this is due to the lockdown and not (for example) that it would have slowed down anyway. But surely this is a key point, one that is absolutely vital to understanding our whole situation? I may be missing something, but if you are presenting a paper trying to ascertain whether the lockdown works, isn’t it a bit of a push to start with the assumption that lockdown works?
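To see why that assumption matters, here is a minimal sketch in Python. It is emphatically not the Imperial code, and every number in it (the R values, the dates, the generation time) is my own made-up illustration. It simulates two different histories of the reproduction number R: one where R drops in a single step on lockdown day, and one where it declines gradually in the fortnight beforehand as people change their behaviour voluntarily. Both produce a rise and then a fall in cases; a model that only permits the step form will credit the entire fall to the intervention.

```python
# Illustrative sketch only -- not the Imperial model. All parameter values
# (R of 3.0 falling to 0.7, a 5-day generation time, a 60-day window with
# "lockdown" on day 30) are invented for the purpose of the illustration.

GEN_TIME = 5.0       # assumed generation time in days
DAYS = 60
LOCKDOWN_DAY = 30

def simulate(r_of_t, i0=100.0):
    """Daily incidence from a crude exponential-growth model in which the
    day-on-day multiplier is R(t) ** (1 / generation_time)."""
    cases = [i0]
    for t in range(1, DAYS):
        cases.append(cases[-1] * r_of_t(t) ** (1.0 / GEN_TIME))
    return cases

# Hypothesis A: R drops instantly from 3.0 to 0.7 at lockdown
# (the 'immediate response to interventions' assumption).
def r_step(t):
    return 3.0 if t < LOCKDOWN_DAY else 0.7

# Hypothesis B: R declines smoothly from 3.0 to 0.7 over the two weeks
# *before* lockdown -- 'broader gradual changes in behaviour'.
def r_gradual(t):
    start, end = LOCKDOWN_DAY - 14, LOCKDOWN_DAY
    if t < start:
        return 3.0
    if t >= end:
        return 0.7
    frac = (t - start) / (end - start)
    return 3.0 + frac * (0.7 - 3.0)

step_curve = simulate(r_step)
gradual_curve = simulate(r_gradual)

# Under both hypotheses the epidemic peaks and then declines, and by the
# end of the window the two case curves are falling at exactly the same
# rate -- so declining case counts alone cannot tell the hypotheses apart.
```

The point of the sketch is not the numbers but the shape of the inference: if the fitting procedure only allows R to change at intervention dates, then whatever slowdown appears in the data will necessarily be attributed to the interventions, because the model has no other place to put it.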
And what about the fact that the Imperial model seems to mirror the measured numbers of cases so precisely – when we know that these figures are all over the place, depend on test availability, and are collected completely differently in different countries? In the UK, a positive Covid-19 test is not even required for the disease to be listed as a cause of death on the death certificate.