But the right way to look at it is: I have some prior assessment of how likely it is that masks help prevent spread. I’d say quite likely, because the virus travels in water droplets, and presumably a mask traps some of them; and also because of earlier evidence. Let’s say I think it’s 80% likely. Then I get some new evidence, and I use it to update my beliefs. A p=0.05 result might make me update to something like 95% sure, depending on how much I trusted the study.
If you thought for some reason it was really unlikely that masks worked, you’d update from a lower base: if you thought it was only 1% likely, you might end up thinking it’s now 15% likely. This is Bayesian thinking again. But it’s also just reasoning under uncertainty. You can never be certain: you just make the best guess you can with the evidence available.
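For the curious, here’s a minimal sketch of that update in odds form: posterior odds equal prior odds times a Bayes factor, the weight you give the evidence. The Bayes factors below are assumptions picked purely to reproduce the illustrative numbers above, not values derived from any actual study, and `update` is a hypothetical helper, not anything from a real library.

```python
def update(prior: float, bayes_factor: float) -> float:
    """Odds-form Bayes update: convert the prior probability to odds,
    multiply by the Bayes factor, convert back to a probability."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1.0 + posterior_odds)

# Assumed Bayes factors, chosen to match the article's illustrative figures:
print(f"{update(0.80, bayes_factor=4.75):.0%}")  # ~95%: 80% prior, moderate trust in the study
print(f"{update(0.01, bayes_factor=17.5):.0%}")  # ~15%: 1% prior needs far stronger evidence
```

Note that the sceptic starting at 1% only reaches 15% if they weight the evidence much more heavily; the same Bayes factor of 4.75 would move them to only about 5%. How far you update always depends on both where you started and how much you trust the study.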
Whether this study is good evidence is up for debate: a stats-savvy friend warns that when you see p-values around, and especially just under, 0.05, it’s a red flag that some dodgy manipulation (p-hacking, tweaking the analysis until the result crosses the significance threshold) may have gone on.