Public Services

The shaky relations between modelling, policy formation, and decision-making

Many of the most important decisions that Governments now make are influenced by models of how the future might unfold under different scenarios. Take the ongoing Covid-19 pandemic as an example. The heavy task of choosing which civil liberties, if any, to restrict was initially based almost entirely on modelling done by researchers from Imperial College London. Given its obvious significance, then, what should we make of the reliance on modelling in policy-making?

Before offering my thoughts on this question, I should note that modellers are mainly concerned with determining what the future would look like under a range of scenarios: the task is not to predict the future, but to reason through hypotheticals. Policymakers, by contrast, are mainly concerned with what Government ought (not) to do. Modelling can inform policymaking but cannot, to the dismay of technocrats, replace it. Those making policy also have to navigate a labyrinth of ethical and cultural matters, and decide what ends we should be aiming for.

Cutting to the chase, let’s look at some issues of model comprehensibility and reliability. Models tend to rely on highly technical methods, so policymakers need to be able to trust that the models are of a high quality. But if policymakers are unable to understand the technical details themselves, how can they be reasonably confident that the models are up to standard? Taking the word of a single modeller, or even a small team, simply is not good enough because even world-renowned experts disagree with each other and are not above making serious mistakes. Who then checks the assumptions, the details, and the code?

The ideal situation is that other independent experts check the work, criticise it, and it gets revised. The process is then repeated a few times and the final product is compared – and possibly combined – with the work of other experts. A summary can then be used to inform policy. This ideal is clearly time-consuming, so it is only possible for issues long recognised as worth investigating. In the case of Covid-19, the best that could be done, owing to urgency, was to adapt previous work on modelling a possible influenza pandemic. Unfortunately, the quality of this model was tenuous at best, owing to the very limited amount of available data, much of it of questionable reliability. Add to that the appalling error of using poorly designed, untested, and undocumented code.

It is tempting to think that the absence of good, well-tested models implies that a laxer response to the pandemic would have been more appropriate than the one implemented. After all, very few are happy with the present situation, so any reason to call it unjustified is understandably appealing, regardless of its merits. This, however, assumes that decision-making requires having good arguments. In the real world, decisions sometimes have to be made in a hurry, without much evidence to go on. How should they be made when the problem might spread rapidly and prove highly consequential? The rule of thumb for dealing with insufficient knowledge in such contexts is to accommodate the ignorance: lean heavily on the side of caution early on, gather more information, determine the options, then respond appropriately. This can be done without draconian measures while – as Nicola Sturgeon likes to say but never to do – treating people like grown-ups capable of making good decisions for themselves.

The take-home message is that good decision-making is not the same thing as having sufficient knowledge or a good model. That is not to say that good modelling or science is of no value for decision-making. Of course it can be: take the building of a dam, whose design will depend on estimates of future maximum water levels. Too low a dam is useless, while too high a one wastes resources. The point I am stressing is that decision-making cannot be reduced to modelling. To see this more clearly, consider that societies were dealing with pandemics long before the Germ Theory of Disease was understood, or the techniques used in modern modelling existed. These societies used – often without Government imposition – measures such as social distancing and quarantining the infected, without understanding why they worked. Evidently, then, people can know what to do in various situations without necessarily knowing, or being able to explain, why those actions work.
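The dam example can be made concrete in a few lines of code. Everything here is invented for illustration: the water-level record is simulated rather than historical, and the "worst level plus a 20% margin" rule is a hypothetical stand-in for the extreme-value analysis a real engineer would use.

```python
import random

random.seed(1)

# Hypothetical record of annual maximum water levels, in metres.
# In a real design exercise these would be historical observations.
annual_maxima = [random.gauss(5.0, 0.8) for _ in range(100)]

# A simple (invented) design rule: cover the worst level on record plus
# a 20% safety margin. Too low a dam fails; far higher wastes money.
design_height = max(annual_maxima) * 1.2

print(f"design height: {design_height:.2f} m")
```

The estimate (the record of maxima) is the modelling input; the margin is the decision, and it reflects how we weigh the cost of failure against the cost of overbuilding – something no model settles by itself.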

Let’s next think about how we can judge a model’s quality. Models are effectively arguments about how the future may play out under certain suppositions. Like all arguments, they have assumptions, inferences, and conclusions. Frequently, people judge arguments by the correctness of their conclusions. The principal problem with this approach is that it is possible to be right even though the assumptions are false or the reasoning is flawed. To see this, consider the silly argument: some men are tall; Jack is a man; therefore, Jack is tall. Jack may indeed be tall, but the argument is still invalid.

When chance is involved, it is even harder to judge by conclusions; and no competent modeller makes definitive, unqualified statements, although media reports often suggest otherwise. Rather, a good modeller ascribes chances to potential outcomes under various scenarios. They can do this in one of two ways. The first is to specify their degree of confidence in their assumptions, then use their model – and a special mathematical rule – to transfer that confidence onto potential future scenarios. This treats the chance of a possible event as a confidence measure calculated using the model. The other, more common, approach is to calculate the proportion of times an event happens when different assumptions are used. This treats the chance of an event as a (relative) frequency. All too often, these two notions of chance are mixed up, even by professional statisticians. In case you’re wondering, it is the second notion that is being used in the Covid-19 modelling done by the Government’s advisers.
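The two notions can be sketched in code. Everything below is hypothetical – the toy growth model, the 1,000-case threshold, and the confidence weights are invented for illustration – and the confidence-based calculation is only a crude stand-in for the full machinery (the "special mathematical rule" alluded to above is Bayes' theorem).

```python
import random

random.seed(0)

def outbreak_size(daily_growth, days=30, initial=100):
    """Toy model (hypothetical): cases compound by a fixed factor each day."""
    cases = initial
    for _ in range(days):
        cases *= daily_growth
    return cases

# Frequency notion: run the model under many different assumed growth
# rates and report the fraction of runs in which cases exceed 1,000.
runs = [outbreak_size(random.uniform(0.9, 1.2)) for _ in range(10_000)]
p_freq = sum(size > 1_000 for size in runs) / len(runs)

# Confidence notion: assign a degree of confidence to each of a few
# assumed growth rates, then propagate those weights through the model.
confidence = {0.95: 0.3, 1.05: 0.5, 1.15: 0.2}   # rate -> confidence weight
p_conf = sum(w for rate, w in confidence.items() if outbreak_size(rate) > 1_000)

print(f"frequency-based: {p_freq:.2f}, confidence-based: {p_conf:.2f}")
```

The two numbers answer subtly different questions: the first is "how often does this happen across the assumptions we tried?", the second is "how confident are we, given our confidence in each assumption?" – which is precisely why conflating them causes trouble.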

Both approaches face major problems which are not yet well resolved. One common to both is that the calculated probability of an event depends on the model used, rather than on observation. A problem specific to the first approach is how to specify the initial confidence using prior, but incomplete, knowledge. A problem specific to the second is that only what really happens is observed, not what happens across the full range of scenarios. We can then evaluate the model only against what really happens, with no indication of whether it would be accurate in the other scenarios.

It would be overly ambitious to try to address these problems here. For now, it is enough to stress that models are generally better judged – ahead of the events – by the assumptions they make rather than by their conclusions. When it comes to decision-making, this is the best we can do, because our decisions are aimed at influencing the future rather than predicting it. In following this recommendation, the three major concerns are: (1) how accurately the present state can be determined from the available data; (2) how well the model captures the relevant processes that determine the outcomes; and (3) how stable the model’s conclusions are under small differences in the starting assumptions.

In the case of Covid-19, we had trouble on all three counts. The absence of reliable data meant that we could not accurately gauge where we were starting from, and that we knew very little about what we were dealing with. Owing to the virus’s contagious nature, small alterations to the initial assumptions produce a tremendously wide range of outcomes, even over short periods of time.
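A toy calculation shows how sharp this sensitivity is for anything that grows by compounding. The figures are invented for illustration – neither growth rate comes from any real epidemic model – but the gap between them is the point.

```python
def projected_cases(daily_growth, days, initial=1_000):
    """Compound an initial case count by a fixed daily growth factor (toy model)."""
    return initial * daily_growth ** days

# Two assumptions only five percentage points apart:
low = projected_cases(1.10, 60)    # assume 10% daily growth
high = projected_cases(1.15, 60)   # assume 15% daily growth

print(f"{low:,.0f} vs {high:,.0f} cases after 60 days ({high / low:.1f}x apart)")
```

Within two months the two projections differ by more than an order of magnitude, even though the assumptions would be hard to tell apart from a week or two of noisy early data.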

Wrapping up, policymakers need to put more effort into thinking about how to make better decisions when confronted with ignorance, and worry less about whether we have good predictive models or not. To reiterate the central point, good decision-making is not the same as having good models.

Ben Jones is a PhD student in Mathematical Statistics at Cardiff University.
