When modelling breaks down: why the future will always surprise us

Why do long-term models always seem to fail eventually? It is a commonly asked question, and good answers are rarely offered. Ben Jones tries to rectify this in a follow-up to his article published last week.

Before providing some reasons, let’s recall some key points I made in a recent post about the relationship between modelling and decision-making. There I explained that it is a mistake to think of modelling as prediction; it is better understood as reasoning through various scenarios of how the future may unfold. I noted that competent modellers always emphasise the role of chance rather than make definitive statements. I also advised that models, being a kind of argument, should be judged on their assumptions and reasoning rather than on their conclusions. The core message was that decision-making is not reducible to modelling. With that context in place, I would now like to change tack and explain why models, no matter how good, inevitably break down as time passes.

To build up to that, let’s start by looking at a particular policy area where long-term modelling is heavily relied upon: climate change, particularly with respect to long-term changes in global average temperatures. As any attentive observer of the news media will be aware, much coverage has been devoted to this issue in recent years. A lot of it has been speculative rather than descriptive, without paying due regard to the uncertainties involved. Let’s try to fill in some of the gaps by first weighing up the strengths of such models and then examining some of the major uncertainties in this area.

We have a good understanding of what physical processes are involved in determining global temperatures. It boils down to the arrival of heat energy from the sun, conversion of any heat energy into other forms of energy, and the release of heat energy back out into space. These are affected by a mix of factors including: levels of solar activity, cloud coverage levels, the distance of the Earth from the Sun, alterations to the gaseous composition of the atmosphere, night-time cooling, effects of volcanic eruptions, effects of plant vegetation, and so on. The fact that we know what processes are involved implies that we are able to build structurally solid physics-based models. That’s the good news.
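For the technically inclined reader, here is a minimal sketch of what the skeleton of such a physics-based model looks like: a so-called zero-dimensional energy-balance model, which finds the temperature at which absorbed solar energy equals the energy radiated back into space. The parameter values below (solar irradiance, albedo, a crude greenhouse term) are illustrative assumptions rather than calibrated figures, and real climate models are vastly more detailed.

```python
# A minimal zero-dimensional energy-balance model: temperature settles where
# absorbed solar energy equals the infrared energy radiated back to space.
# All parameter values here are illustrative assumptions, not calibrated figures.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # solar irradiance at the top of the atmosphere, W m^-2
ALBEDO = 0.3             # fraction of sunlight reflected by clouds, ice and land
EMISSIVITY = 0.61        # crude stand-in for the greenhouse effect (1.0 = no atmosphere)

def equilibrium_temperature(albedo=ALBEDO, emissivity=EMISSIVITY):
    """Temperature (in kelvin) at which incoming and outgoing energy balance."""
    absorbed = S0 * (1.0 - albedo) / 4.0              # averaged over the whole sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(f"Baseline: {equilibrium_temperature():.1f} K")
# Nudging a single factor shows how the balance responds to each input.
print(f"Brighter clouds (albedo 0.31):         {equilibrium_temperature(albedo=0.31):.1f} K")
print(f"Stronger greenhouse (emissivity 0.60): {equilibrium_temperature(emissivity=0.60):.1f} K")
```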

The bad news, on the other hand, is that there are still uncertainties in several places, such as: (1) measuring the historical sizes of the contributing factors; (2) forecasting the future statistical patterns of natural factors; and (3) determining to what extent the changes to the observed long-term average temperatures are attributable to each contributing factor. Focussing on the last of these, we only have a single record of the past so – unlike in laboratory experiments – counterfactual testing based on repeated observations is not possible. In attribution studies, what happens – roughly speaking – is that a physics-based model is used, starting from a time in the past, to simulate the historical record under alternative hypothetical pasts by varying the sizes of the contributing factors. This approach uses the frequency-based notion of chance that was discussed in my recent post; yet it is often mistaken for a measure of confidence.
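To give a feel for how such attribution runs work, the toy Monte Carlo sketch below simulates many alternative histories of a temperature series with and without a hypothetical human-caused forcing term, and counts how often a change as large as the “observed” one occurs in each case. The trend model, forcing size and noise level are made-up assumptions purely for illustration, not figures taken from any real attribution study.

```python
import random

# Toy attribution sketch: simulate many alternative "pasts" of a temperature
# series under two hypotheses and ask, in the frequency sense, how often a
# change as large as the observed one occurs. The trend model, forcing size
# and noise level are made-up assumptions purely for illustration.

YEARS = 50
NATURAL_NOISE_SD = 0.15        # year-to-year natural variability, degrees C
HUMAN_FORCING_PER_YEAR = 0.02  # hypothetical human-caused warming, degrees C per year
N_RUNS = 10_000

def simulated_change(include_human_forcing):
    """End-minus-start temperature change of one simulated history."""
    temp = 0.0
    for _ in range(YEARS):
        temp += random.gauss(0.0, NATURAL_NOISE_SD)   # natural variability
        if include_human_forcing:
            temp += HUMAN_FORCING_PER_YEAR            # hypothetical extra forcing
    return temp

random.seed(1)
observed_change = 1.0  # pretend observed warming over the period, degrees C

natural_only = sum(simulated_change(False) >= observed_change for _ in range(N_RUNS)) / N_RUNS
with_forcing = sum(simulated_change(True) >= observed_change for _ in range(N_RUNS)) / N_RUNS

print(f"Chance of >= {observed_change} C change, natural factors only:      {natural_only:.3f}")
print(f"Chance of >= {observed_change} C change, with hypothetical forcing: {with_forcing:.3f}")
```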

There is another problem that plagues climate change modelling: the task gets harder as the time span increases. There are two reasons for this worth thinking about here. The first is that, as the time span increases, so does the likelihood of scenarios occurring that were not initially considered. The second is that small initial numerical inaccuracies become more significant over time, especially in simulations of complex systems. In light of these reasons, it is perfectly reasonable to be very sceptical of any definitive claims about what society, the economy, or the climate will be like in 2100. Let’s now take a look at a recent economic case where the first of these issues has reared its head.
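Before doing so, the second point, about compounding numerical inaccuracies, can be illustrated with a standard toy example: the logistic map, a simple nonlinear system in which two trajectories starting a hair’s breadth apart soon bear no resemblance to each other. The sketch below is purely illustrative and is not drawn from any particular climate or economic model.

```python
# Two runs of the logistic map that start a tiny distance apart soon diverge
# completely: a stand-in for how small initial inaccuracies grow in
# simulations of complex systems.

def logistic_map(x0, r=3.9, steps=60):
    """Iterate x_{n+1} = r * x_n * (1 - x_n), returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

run_a = logistic_map(0.2)
run_b = logistic_map(0.2 + 1e-7)  # a tiny "measurement error" in the starting state

for step in (0, 10, 20, 40, 60):
    diff = abs(run_a[step] - run_b[step])
    print(f"step {step:2d}: {run_a[step]:.6f} vs {run_b[step]:.6f} (difference {diff:.6f})")
```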

Not a single one of the models of the potential economic impacts of the UK leaving the EU factored in the scenario of a global pandemic. These models were produced by highly esteemed organisations, such as the IMF, the BoE, the Treasury, the IFS, and the LSE. As a result of recent events, the conclusions of every one of those models – regarding any future economic effects of leaving the EU – have been rendered suspect at best and invalid at worst. This alone is not a sign of incompetence, because it would have been difficult at the time to justify including the possibility of a pandemic in those models: they aimed to consider the economic impacts arising specifically from leaving the EU, while assuming that all else remained normal. The take-home point is that what lies outside a model can be what ends up having the biggest impact, and the chance of that being the case increases with time. Policymakers should therefore ask modellers to specify some plausible scenarios that were not considered. They should also maintain a reasonable level of preparedness for, and ability to respond to, major surprises.

The charge of incompetence can be made, however, against several major economic institutions, which have poured much effort into modelling the potential economic impacts of climate change while severely neglecting the potential economic impacts of a global pandemic. Yet, because of Covid-19, we are currently living through one of the sharpest economic downturns of the century to date, the duration of which is hard to determine with any reasonable certainty. The excuse of unlikelihood does not hold, for two reasons. Firstly, in several iterations of the national risk register (well worth a read), the chance – in the sense of expert confidence – of an influenza pandemic in a five-year period was put at between 1 in 200 and 1 in 2. That range of uncertainty about the uncertainty justifies caution and serious consideration of possible impacts and mitigation measures, especially when we consider that the chance of a pandemic from any infectious disease must be at least as great as the chance of a pandemic specifically from influenza. Secondly, the chance of an event happening within X years never goes down, and generally increases, as X increases, which means that long-term economic modelling should consider such possibilities.
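That last point is simple arithmetic: if the chance of an event in any one period is p, the chance of it happening at least once across X independent periods is 1 - (1 - p)^X, which can only grow as X does. The short sketch below works this through for figures spanning the five-year range quoted above; treating successive five-year periods as independent and identical is a simplifying assumption made only for illustration.

```python
# If the chance of an event in any one period is p, and periods are treated as
# independent (a simplifying assumption), the chance of at least one occurrence
# in X periods is 1 - (1 - p)**X, which never decreases as X grows.

def chance_at_least_once(p_per_period, n_periods):
    return 1.0 - (1.0 - p_per_period) ** n_periods

# Five-year probabilities spanning the range quoted from the national risk
# register (1 in 200 up to 1 in 2).
for p in (1 / 200, 1 / 20, 1 / 2):
    over_5  = chance_at_least_once(p, 1)   # one five-year period
    over_25 = chance_at_least_once(p, 5)   # five such periods
    over_50 = chance_at_least_once(p, 10)  # ten such periods
    print(f"p = {p:.3f} per 5 years -> 5 yrs: {over_5:.3f}, 25 yrs: {over_25:.3f}, 50 yrs: {over_50:.3f}")
```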

Returning to the topic of climate change, none of these remarks imply that it is not an important issue or that institutions should not be considering it. We do, of course, need to figure out how to conserve what is good and beautiful about our planet’s environment while, at the same time, conserving and furthering the tremendous rise in living standards that comes from economic growth. That is not an easy task. The point being made is that a hyper-focus on climate change may have left us insufficiently aware of other serious risks. Not a single central bank in a major economy seriously factored in the risks and impacts of a pandemic. Nor was there a single protest by activist groups warning of the possibility; and indeed, very few working in the media were making much fuss about it until very recently.

With all this in mind, what other major risks are there which we should be paying more attention to than we are currently? How should we prepare for the unforeseeable? It’s high time that these questions are treated with the seriousness they deserve.

Ben Jones is a PhD student in Mathematical Statistics at Cardiff University.
