Why do climate models vary so much?

It may be a sign we're close to one or more tipping points

The latest report from the Intergovernmental Panel on Climate Change is known as Assessment Report 6, or IPCC AR6 for short.  Rather than containing new research, AR6 summarizes the latest work of climate scientists, synthesizing thousands of papers.  An important element of this synthesis is the set of climate models.  The climate models examined by AR6 are called CMIP6 models, for Coupled Model Intercomparison Project Phase 6.

Climate models can provide answers to "what if?" questions.  Some of these What Ifs are described by the IPCC in their Shared Socioeconomic Pathways, or SSPs for short.  But these What Ifs combine questions about physics (how the Earth will respond) with assumptions about our future behaviour.  What we would like is to factor out the human influence and get a single number that measures just how sensitive the Earth's climate is to CO2.  That's where Equilibrium Climate Sensitivity (ECS) comes in.

In broad terms ECS is the increase in globally averaged Surface Air Temperature (SAT) following a doubling of atmospheric CO2 concentrations.  The baseline is taken to be the pre-industrial Earth, which had a CO2 concentration of 280 parts per million.  Today the concentration is around 420ppm and the SAT is 1.2C higher.  The E in ECS is for Equilibrium, which means that models must allow time for things to equilibrate after they double the CO2.  This still leaves a lot of questions, such as how long the model Earth should be left to equilibrate, and which processes should be included.  The de facto consensus is that it should be left for hundreds of simulated years; that the effect of sea-ice loss (and the corresponding albedo decrease) is included, but not the loss of land-ice; that the heat capacity of only the top layer of the ocean should be taken into account; and that changes in forest cover are not considered.
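To get a feel for these numbers, here's a back-of-envelope sketch in Python.  It uses the widely quoted simplified expression for CO2 forcing from Myhre et al. (1998); the 5.35 coefficient and the 0.8C-per-W/m² sensitivity parameter are illustrative assumptions, not values taken from AR6.

```python
import math

# Simplified CO2 forcing (Myhre et al., 1998): dF = 5.35 * ln(C/C0) W/m^2.
def co2_forcing(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Radiative forcing (W/m^2) relative to pre-industrial CO2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Assumed sensitivity of ~0.8C of warming per W/m^2 of forcing -- an
# illustrative mid-range figure, not an AR6 value.
SENSITIVITY = 0.8

f_doubling = co2_forcing(560.0)   # doubling of pre-industrial CO2
f_today = co2_forcing(420.0)      # today's concentration

print(f"Forcing at doubling: {f_doubling:.2f} W/m^2")             # ~3.7
print(f"Implied ECS:         {f_doubling * SENSITIVITY:.1f} C")   # ~3.0
print(f"Forcing today:       {f_today:.2f} W/m^2")                # ~2.2
```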

Back in 1896 Svante Arrhenius, without the advantage of modern computer power, came up with a figure of 5 to 6C for ECS.  Eighty-three years later the 1979 Charney report came up with a range of 1.5 to 4.5C.  So how did the 31 CMIP6 models fare in 2021?  The answer is that they delivered a range of 1.8 to 5.64C.  That's to say the reported range actually got wider, despite the improvements in the models, and despite an enormous increase in processing power.  What happened?

An answer to this question was proposed by Knutti and Hegerl in 2008, and is described by this illustration from their paper "The Equilibrium Sensitivity of the Earth’s Temperature to Radiation Changes".

[Figure from Knutti & Hegerl (2008): if a feedback parameter g is normally distributed across models, the resulting distribution of climate sensitivities has a long tail, especially if g takes values near 1, because climate sensitivity is proportional to 1/(1-g).]

This says that there's some feedback parameter g which is more or less normally distributed across the models, but that the resulting ECS values are not normally distributed: instead they have a very long tail.  But what does this mean?  Let's examine feedback in detail.
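First, though, a quick Monte Carlo run makes the distributional point concrete: draw g from a bell curve and see what happens to quantities proportional to 1/(1-g).  The mean and spread of g, and the 1.2C no-feedback response, are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

# Draw a feedback parameter g from a normal distribution and look at the
# resulting spread of sensitivities, which scale as 1/(1-g).
rng = np.random.default_rng(0)
g = rng.normal(loc=0.65, scale=0.13, size=100_000)
g = g[g < 1.0]                 # drop the few runaway (g >= 1) samples

ecs = 1.2 / (1.0 - g)          # assumed 1.2C response with no feedbacks

print(f"g:   mean={g.mean():.2f}, std={g.std():.2f}  (symmetric bell curve)")
print(f"ECS: median={np.median(ecs):.1f}C, 5th pct={np.percentile(ecs, 5):.1f}C, "
      f"95th pct={np.percentile(ecs, 95):.1f}C  (long right tail)")
```

A symmetric spread of feedbacks produces a lopsided spread of sensitivities, with the upper tail stretching far beyond the median.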


Consider a simple feedback system in which an input $x$ generates an output $y$.  First $x$ is scaled by a factor $A$, and then some multiple $B$ of the output is fed back (possibly with a delay) and added to the input.  If both $A$ and $B$ are positive you might expect a runaway chain reaction, but this isn't necessarily the case, as I'll explain in a moment.  But first, what might a model like this represent?

As an example, let $x$ be the increase in atmospheric CO2 from some equilibrium state.  This can be expressed in terms of Instantaneous Radiative Forcing (IRF): the equivalent increase in heat power delivered to the Earth's surface as a result of a change in some variable such as CO2 concentration.  IRF is measured in Watts per square metre.  Continuing with this example, $A$ could be a conversion factor determining the temperature increase that causes the radiative forcing to be balanced by increased infra-red emissions.  So $y$ is an increase in temperature from the baseline.  $B$ could represent the feedback from any one of a number of natural processes.  For example, after a delay measured in months an increase in temperature leads to a reduction in Arctic sea ice, which leads to a decrease in albedo (reflectivity), which can also be expressed as a Radiative Forcing.  Let's see if this leads to runaway warming.  In what follows imagine that $A$ introduces no delay and that $B$ introduces a delay of one step.  Letting $y_n$ be the output at step $n$:

$$
\begin{align}
y_0 &= Ax \\
y_1 &= A(x+ By_0) = Ax + A^2Bx \\
y_2 &= A(x+ By_1) = Ax + A^2Bx + A^3B^2x \\
... \\
y_n & = Ax(1 + AB + (AB)^2 + ... + (AB)^n)
\end{align}
$$

This may or may not converge.  If it does, the limit is $Ax/(1-g)$ where $g$ is the loop gain $AB$.  (Note that you can get the same result by assuming $x$ and $y$ are in equilibrium and just solving $A(x+By) = y$.)  What determines whether a positive feedback causes a runaway reaction, or simply an amplification of the open loop gain $A$, is whether or not the loop gain $g$ exceeds one.  If $g$ is just below one, then tiny changes in its value can have enormous impacts on the equilibrium that results.  This is the point that Knutti and Hegerl were making with that illustration.
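A few lines of Python confirm both the convergence to $Ax/(1-g)$ and the sensitivity near $g = 1$.  The values of $A$ and $B$ are arbitrary illustrative choices; only the loop gain $g = AB$ matters.

```python
# Iterate y_{n+1} = A*(x + B*y_n), as in the derivation above.
def run_loop(A: float, B: float, x: float = 1.0, steps: int = 1000) -> float:
    y = A * x                         # y_0
    for _ in range(steps):
        y = A * (x + B * y)           # feedback applied with one step of delay
    return y

A = 2.0
for B in (0.25, 0.45, 0.49):          # loop gains g = 0.50, 0.90, 0.98
    g = A * B
    print(f"g = {g:.2f}: iterated y = {run_loop(A, B):.1f}, "
          f"predicted Ax/(1-g) = {A / (1 - g):.1f}")
```

Note how the small step from $g = 0.90$ to $g = 0.98$ takes the equilibrium from 20 to 100: exactly the lopsided amplification Knutti and Hegerl describe.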

Perhaps the large range of ECS values returned by the CMIP6 models tells us something: that we are sailing close to various tipping points, and that one or more positive feedback loops in the Earth's climate system have loop gains close to one.  Is there any other evidence for this claim?  It turns out there is.

In their 2018 paper "Trajectories of the Earth System in the Anthropocene" Steffen et al. look at average temperatures over the last 1.2 million years.  This was a period in which roughly every 100,000 years brought an ice age, with the intervening warm periods described as "inter-glacials".  In Steffen's terminology this is a "limit cycle": not quite an equilibrium, but a stable cycle of changing temperatures triggered by the tiny effect of periodic changes in the Earth's orbit.  Before the onset of the ice ages, temperatures were much higher.  The ice ages only became possible as a result of some very slow natural processes that brought temperatures into a region that enabled this limit cycle.

The authors point out that none of the inter-glacials saw average temperatures more than 2C above pre-industrial (i.e. the temperatures of the 1750s).  It is natural to assume, therefore, that it was when temperatures fell below this level that the ice-age limit cycle began, and similarly that temperatures rising above this level could cause the ice-age limit cycle to end.  Of course this isn't a proof: there may be a hysteresis effect, requiring a higher temperature to be exceeded on the way out of the cycle than the threshold crossed on the way in.  Let's hope so.  But in the absence of more details, the best bet is that exceeding 2C would cause us to exit the ice-age limit cycle and enter a new, hotter one.

The discussion of limit cycles and tipping points highlights a limitation of simple feedback models like the one described earlier.  In that model, if $g$ exceeds one the output of the loop will spin off to infinity, but in reality this never happens.  A transistor output may change rapidly for a while, but it can never exceed the bounds defined by the supply voltage or ground.  The reason is that in real systems $g$ is only approximately constant, over a small range of outputs.  Eventually the system reaches a regime where the old model no longer works and a new model is required, with feedback loops whose gains are below one, or even negative.  This is simply a mathematical description of what passing a "tipping point" looks like.
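To make that concrete, here is a toy variant of the earlier loop in which the feedback saturates as the output grows.  The tanh saturation and all the constants are assumptions chosen purely for illustration: the small-signal loop gain is 1.1, so the linear model would diverge, but this system instead settles into a new, much higher equilibrium, which is a crude picture of crossing a tipping point.

```python
import math

# Same loop as before, but the fed-back quantity saturates: it behaves like
# B0*y for small outputs and is capped at B0*y_max for large ones.
def run_saturating_loop(A: float, B0: float, x: float,
                        y_max: float = 10.0, steps: int = 500) -> float:
    y = 0.0
    for _ in range(steps):
        feedback = B0 * y_max * math.tanh(y / y_max)
        y = A * (x + feedback)
    return y

# Small-signal loop gain g = A*B0 = 1.1 > 1: the linear model runs away,
# but the saturating loop finds a new equilibrium far above A*x = 0.5.
print(f"{run_saturating_loop(A=1.0, B0=1.1, x=0.5):.2f}")   # ~7.4
```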

Yes, climate science involves uncertainty, but uncertainty is not our friend.
