Long-term climate variability is the range of temperatures and weather patterns experienced by the Earth over a scale of thousands of years. New research suggests it could fall as the world warms.
A study using data taken from fossils and ice cores finds that long-term temperature variability decreased four-fold from the Last Glacial Maximum (LGM) around 21,000 years ago to the start of the Holocene around 11,500 years ago. Within this period, natural processes caused the planet to warm by around 3-8C.
If future global emissions are not curbed, human-driven global warming could cause further large declines in long-term temperature variability, the lead author tells Carbon Brief, which may have far-reaching effects on the world’s seasons and weather.
However, it is still unclear how a decline in long-term variability could affect the frequency of extreme weather events, she adds. This is because the chances of an extreme event happening could be influenced by both short- and long-term climate variability, as well as global temperature rise.
Digging up the past
The new study, published in Nature, is the first to make a global assessment of how long-term temperature variability changed from the LGM to the Holocene.
During the LGM, the world’s last major ice age, snow and ice covered much of Asia, Europe and North America. Yet, within a few thousand years, global temperatures rose by around 3-8C, causing the ice to thaw and the world to enter its current geological epoch, the Holocene.
The cause of this temperature rise is still disputed by scientists, but research suggests the natural release of large stores of CO2 from the world’s oceans may have played a role.
To work out how long-term climate variability changed over the period, the researchers analysed data taken from ancient ice cores, marine sediments and animal and plant fossils stretching back thousands of years.
Scientists are able to analyse some of these samples – which are known as proxy records – by looking at the ratios between different chemical isotopes.
Combining data derived from different parts of the world and time periods allows scientists to create a picture of past temperature change, explains Dr Kira Rehfeld, a research fellow at the British Antarctic Survey and the Alfred-Wegener Institute for Polar and Marine Research (AWI) in Potsdam, Germany. She tells Carbon Brief:
“We set out and started collecting more and more records that we could use to get a more general picture of changing climate variability for temperature. It’s taken us three and a half years to find enough records and to develop the methodology to be able to analyse them.”
The researchers then compared data taken from the LGM and the Holocene to help them work out how global temperatures could have changed over large time scales. Rehfeld says:
“We don’t look at the variability in terms of just temperature rise, we look at the ratio of the variability. So we divide the variability of the LGM by the variability of the Holocene. That way we can compare records that have very different origins.”
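As a toy illustration of the ratio approach Rehfeld describes (this is not the study’s actual code: the numbers below are invented, and “variability” is taken here simply as the sample variance of a detrended temperature record):

```python
import statistics

def variability(temps):
    """Variability of a temperature record, taken here as the
    sample variance of an (already detrended) series."""
    return statistics.variance(temps)

# Hypothetical proxy-derived temperature anomalies, in degrees C
lgm_record = [-0.9, 1.2, -1.4, 0.8, -1.1, 1.3, -0.7, 1.0]
holocene_record = [-0.45, 0.6, -0.7, 0.4, -0.55, 0.65, -0.35, 0.5]

# Dividing LGM variability by Holocene variability gives a
# dimensionless ratio, so records of very different origins
# (ice cores, sediments, fossils) can be compared on one footing.
ratio = variability(lgm_record) / variability(holocene_record)
print(round(ratio, 1))
```

With these invented numbers the ratio comes out at four, echoing the study’s four-fold figure, but only because the illustrative values were chosen that way; the point is that a ratio removes the units and baseline of each individual proxy record.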
The research finds that, from the LGM to the Holocene, long-term temperature variability fell by a factor of four.
However, some parts of the world experienced larger changes in temperature than others, the study notes.
This is shown on the chart below, where dark blue shows areas that experienced a large amount of temperature change from the LGM to the Holocene, while light blue shows areas that experienced less change.
On the chart, symbols are used to show the location of ice cores (circle), marine sediments (diamond), lacustrine – or lake – sediment (triangle) and tree fossil data (square). Colours are used to show samples from the Holocene and LGM (red), the Holocene (orange) and the LGM (purple).
Global temperature change from the Last Glacial Maximum to the Holocene. Dark blue indicates high temperature change while light blue shows low temperature change. Symbols show the location of ice cores (circle), marine sediments (diamond), lacustrine sediment (triangle) and tree fossil data (square). Colours show samples from the Holocene and LGM (red), the Holocene (orange) and the LGM (purple). Source: Rehfeld et al. (2018)
The findings show that the world’s poles experienced a larger change in temperature than the equator over the time period. These changes led to an overall decline in long-term temperature variability, the research finds.
The difference in warming between the poles and the equator could be down to a process known as “polar amplification”, Rehfeld says.
Polar amplification is the phenomenon whereby any change in the Earth’s radiation balance tends to produce a larger temperature change near the poles than at the equator.
This is thought to be because as warming causes sea ice near the poles to melt, energy from the sun that would have been reflected away by the ice is instead absorbed by the ocean. Because of this, surface temperatures near the poles start to rise at an accelerated rate.
The findings reinforce the prediction that future climate change driven by humans will cause a larger increase in temperature at the poles than at the equator, Rehfeld says:
“The temperature difference between the poles and the equator has decreased as the Earth warms due to polar amplification. This relates to a change in overall long timescale temperature variability.
“If you take that and extrapolate that into the future, warming could be larger at the poles. The temperature difference is then further reduced, which would translate into a reduction of overall temperature variability.”
Carbon Brief previously reported on how the effect of climate change on polar amplification could cause the amount of wind available for power generation to fall in the northern hemisphere.
Although long-term variability is expected to fall, this does not mean that short-term variability will also be reduced, Rehfeld says:
“The question we’re asking is what would a warmer world than today look like? If we can translate our changes in the temperature gradient, then that would mean, theoretically, that long timescale variability in the future will be reduced. But that doesn’t mean that short timescale variability will be reduced.”
Short-term climate variability is a term typically used to describe the natural range of temperatures and weather patterns experienced by the Earth within shorter periods.
For example, after an extreme weather event, scientists often carry out single-event attribution studies to determine how the likelihood of such an event could have been influenced by climate change and short-term climate variability.
It is still not clear how a reduction in long-term variability will affect the frequency and severity of extreme weather events, Rehfeld says:
“There seems to be a correlation. This change in long timescale climate variability could have influences on extreme events and seasonal variability.
“Based on what we know about how extreme events work, if we have a broader distribution of temperatures then we should have more extreme events. However, what we perceive as extreme events, like floods or heatwaves, is not reflected in our datasets.”
In other words, scientific theory suggests that declines in long-term climate variability could lead to fewer extreme events. However, the timescale used in the study was too broad to reflect short-term events, such as floods and heatwaves.
The findings are “interesting”, but could hold “limited relevance” to understanding future climate change, which is occurring at a much faster rate than the warming observed from the LGM to the Holocene, says Prof Amanda Maycock, a research fellow from the University of Leeds who was not involved in the new study. She tells Carbon Brief:
“Current surface temperature changes and associated changes in climate variability and extremes are occurring much more rapidly than the multi-centennial timescales considered in the study.”
The datasets collated in the study could be used to help climate models better simulate long-term changes in climate variability, says Dr Lauren Gregoire, an academic research fellow at the University of Leeds, who was also not involved in the study. She tells Carbon Brief:
“What I find particularly interesting is that while models do simulate a reduction in variability, they tend to underestimate that change compared to the records [used in the study]. There is a great opportunity to use our knowledge of past climate change to test and improve climate models. Unfortunately, there’s currently very little funding to do this kind of work.”
Despite the huge strides taken since the earliest climate models, there are some climatic processes that they do not simulate as accurately as scientists would like.
Advances in knowledge and computing power mean models are constantly revised and improved. As models become ever more sophisticated, scientists can generate a more accurate representation of the climate around us.
But this is a never-ending quest for greater precision.
In the third article in our week-long climate modelling series, Carbon Brief asked a range of climate scientists what they think the main priorities are for improving climate models over the coming decade.
These are their responses, first as sample quotes, then, below, in full:
Prof Pete Smith: “We can get that extra level of detail into the models and check that that’s an appropriate level of detail because a more complex model is not necessarily a better model.”
Dr Kate Marvel: “Higher resolution is the first priority. Right now, climate models have to approximate many physical processes that turn out to be very important.”
Prof John Mitchell: “The top priorities should be reducing uncertainties in climate sensitivity and reducing uncertainties in radiative forcing – particularly that associated with aerosols.”
Prof Daniela Jacob: “It’s important that models will be able to simulate local characteristics, so that they are able to simulate the climate in a city, in mountainous regions, along the coast.”
Prof Kevin Trenberth: “Precipitation. Every model does this poorly and it is socially acceptable. It has to change.”
Prof Piers Forster: “The biggest uncertainty in our climate models has been their inability to simulate clouds correctly. They do a really bad job and they have done ever since they first began.”
Dr Lesley Ott: “Understanding carbon-climate interactions. We don’t understand those processes well enough to know if they’re going to continue.”
Dr Syukuro Manabe: “As the models get ever more complicated – or, as some people say, sophisticated – no one person can appreciate what’s going on inside them.”
Prof Stephen Belcher: “Having climate models that can give us the precision around extreme weather and climate events is definitely one priority.”
Prof Drew Shindell: “One of the key uncertainties is clouds, understanding the physics behind clouds and how clouds interact with aerosol particles.”
Prof Michael Taylor: “I think there is still some difficulty in understanding the land and sea border and any advances within models would be an advantage for island states.”
Prof Stefan Rahmstorf: “I think a key challenge is non-linear effects, or tipping points. For example, the Gulf stream system. We still don’t know how close we are to a threshold there.”
Dr James Hansen: “The fundamental issue about climate change is the delayed response of a system and that’s due to the ocean’s heat capacity.”
Dr Doug McNeall: “We need to be adding more processes, modelling new things, and also we need to be modelling finer detail so we can better explain the climate.”
Dr Ronald Stouffer: “Improving the ocean simulations, particularly in the Southern Ocean. This is a very important region for the uptake of heat and carbon from human activities.”
Prof Adam Scaife: “There is a signal-to-noise problem evident in climate models which means that, in some mid-latitude regions, predicted climate signals are too weak.”
Dr Jatin Kala: “More realistic representation of vegetation processes.”
Dr Katharine Hayhoe: “Natural variability is really important when we’re looking over time scales of anywhere from the next year or two to even a couple of decades.”
Dr Chris Jones: “I think the major gaps would include the ability to trust climate models at finer and finer scales, which, ultimately, is what people want to know.”
Prof Christian Jakob: “In my view, the highest priority is to have more people involved in the model development process so that more new ideas can be generated and implemented.”
Prof Richard Betts: “We need to represent the other aspects of the climate system that aren’t always captured in the climate models [such as] tipping points, nonlinearities.”
Dr Bill Hare: “One of the underdeveloped areas, including in IPCC assessment reports, is evaluating what are the avoidable impacts [of climate change].”
Prof Detlef van Vuuren: “I think quite a number of key Earth processes are still not very well represented, including things like the role of land use, but also pollution and nutrients.”
I think the ESMs [Earth system models] have a pretty good representation of many of the processes, but because they’re trying to cover the whole Earth, then you have a relatively simple description of most things in the models. There are still ESMs, for example, that have a very limited representation of nutrients – so, for example, nitrogen and phosphorus limitations on plant growth in the future.
We’ve got a great representation of these things within ecosystem models that we tend to use uncoupled and we just run those on the land surface. We’ve got a good detailed representation of some of those processes in those models – but those aren’t all yet into the ESMs. So getting that level of detail in I think is important, as well as improving the regional downscaling and improving the resolution of those ESMs.
That used to be limited by computing power, but that’s no longer a limitation. So we can get that extra level of detail into the models and check that that’s an appropriate level of detail, of course – because a more complex model is not necessarily a better model.
Higher resolution is the first priority. Right now, climate models have to approximate many physical processes that turn out to be very important; air flowing over mountain ranges, for example, or small eddies mixing water in the ocean. This is because it’s too hard to get the large and small scales right: there’s simply no computer powerful enough to keep track of very small and large scales simultaneously. Different models make different approximations and this contributes to uncertainty in their projections. But as computing power increases, we’ll be able to explicitly capture a lot of the small-scale effects that are very important to regional climate. You can think of this as sharpening the blurry picture of climate change.
Number two is better cloud simulation. Clouds are hard for models to get right and we know that different climate models don’t agree on how hot it’s going to get, in large part because they don’t agree on what clouds will do in the future. If we can get climate models to more credibly simulate current cloud patterns and observed cloud changes, this might reduce the uncertainty in future projections.
Three is better observations. Satellites have been a real game-changer for climate research, but they’re not perfect. We need to keep evaluating our models against observational data and this is difficult in the presence of observational uncertainty. Long-term global datasets are often cobbled together from many different satellite and ground-based observations, and different measurements of the same variable often disagree. Dedicated long-term measurement devices like the instruments on NASA’s Afternoon Constellation (“A-train”) of satellites will help us understand reality better and this will allow us to benchmark and re-evaluate our models.
The top priorities should be reducing uncertainties in climate sensitivity, getting a better understanding of the effect of climate change on atmospheric circulation (critical for understanding of regional climate change, changes in extremes) and reducing uncertainties in radiative forcing – particularly those associated with aerosols.
I think from a societal point of view, it’s important that models will be able to simulate local characteristics, so that they are able to simulate the local climate in parts of a city, in mountainous regions, in valleys, along the coast. There are still limitations in the climate models. Although they’ve made a lot of progress over the last decades, we still do not really know how climate is changing on a local scale.
If you look at the scientific questions behind this, then I think the most important areas to look at are clouds, how to simulate clouds, the development of clouds, the life cycle of clouds, the land surface. The representation of the land cover and the land management is something which needs to be looked at.
Of course, there are many, many other questions. It really depends on what you want to use the model for. All climate models, global or regional, are made for a specific purpose. I think that’s important to have in mind. Not all models can do the same and they are not all good in the same way.
For us, the priority is to simulate the water cycle correctly. I am very interested in getting the precipitation amounts, locations, frequency, intensity and timing – where and when it rains – correct, in order to get the runoff simulated.
The top priorities over the next decade for improving climate models are:
Precipitation. Every model does this poorly and it is socially acceptable. It has to change. By precipitation I mean all characteristics: frequency, intensity, duration, amount, type (snow vs rain etc) at hourly resolution.
Aerosols. The indirect effects of aerosols on clouds are poorly done. Some processes are included, but all models are incomplete and the result is nothing like observations. This affects climate sensitivity.
Clouds. This is more generic and relates to sub-grid scale processes.
Land-surface heterogeneity: this is a resolution issue and deals also with complexity.
Air-sea interaction and the oceans. This also relates to mixing in the ocean, the mixed layer depth and ocean heat storage and exchanges.
By far the biggest uncertainty in our climate models has been their inability to simulate clouds correctly. They do a really bad job and they have done ever since they first began. And this has all sorts of knock-on effects. It gives a big uncertainty in projections going further forward in time because we don’t understand the way they work, and it also gives big uncertainty to things like extreme precipitation – so that we don’t understand rainfall extremes that well. So we have all these big uncertainties from our incorrect simulation of clouds.
It is intimately tied with observations but there’s also been a huge advance in the last 10 years in the way we can observe the way clouds work. We have unprecedented satellite instruments up there currently, that can really observe clouds in a far more sophisticated way than we ever have been able to before.
They’re fantastic, and by exploiting these wonderful observations we’ve got, I think we can really test the way these climate models work.
One area that’s really critical is cloud–aerosol interactions. It’s something that we really don’t know too much about, we’re seeing some tantalising evidence that there could be important effects but on a global scale it’s very hard to understand. For us, in our office, it took a lot of work to get our model to run with the kind of cloud microphysics and aerosol microphysics that would actually allow us to study that. We’re now at that point where we are starting to do that kind of work and I think you’re going to see in the next five or ten years a lot more research on that.
The other thing that is particularly important, which is my research area, is understanding carbon–climate interactions. Right now, one thing that not a lot of people know is that 50% of human emissions get absorbed by plants on the land and the oceans and that’s been a really valuable resource in limiting climate change to the effects we’re seeing today. If we didn’t have that valuable resource we’d be seeing things progress much more quickly, in terms of CO2 concentrations and global warming. The problem is we don’t understand those processes well enough to know if they’re going to continue. We’re seeing a lot of energy both with atmospheric observations and new observations of the land’s surface and I hope we’re going to continue to see progress.
As the models get ever more complicated – or, as some people say, sophisticated – no one person can appreciate what’s going on inside them.
What we have to do now is more of the things that I was doing in the old days, when I used a simpler parameterisation of the sub-grid scale processes but kept the basic physics, such as the hydrodynamical equations, radiative transfer, etc. That model runs much faster than the so-called Earth system models which they now use for the IPCC [Intergovernmental Panel on Climate Change]. And then, using a much faster computer, you can run a large number of numerical experiments where you change one factor at a time, as if the model were a virtual laboratory. You can then see how the model responds to that change.
I think the Paris Agreement really changed the agenda for climate science. And at the Met Office, we’re really focused on two aspects of improving climate models. The first is understanding extreme events and the risks associated with extreme weather and climate events – in the current climate, but also in a future climate.
For example, the kind of heatwaves we’ve seen in Europe – we had one in 2003 and another in 2006 – just how severe will they become and how frequent might they become? Some of the wet winters we’ve been having in Europe as well – are they going to become the new normal, or will they just remain unusual events? So, having climate models that can really give us the precision around these extreme weather and climate events is definitely one priority.
The other priority is that in order to achieve the goals of the Paris Agreement, we’ll need to keep a very close eye on the amount of carbon we emit into the atmosphere and the amount of CO2 that remains in the atmosphere. There are other factors in the climate system that drive the concentration of CO2 and hence global warming. For example, we know that as the planet warms, permafrost might melt and emit greenhouse gases of its own – warming the planet still further. But our quantitative estimates of that permafrost feedback and the warming it might cause are not very accurate at the moment.
Carbon budget: A carbon budget is the maximum amount of carbon that can be released into the atmosphere while keeping a reasonable chance of staying below a given temperature rise. The Intergovernmental Panel on Climate Change (IPCC) first adopted the concept of carbon budgets in its 2013 report. Budgets are typically expressed in gigatonnes of carbon (GtC) or carbon dioxide (GtCO2). To convert the former to the latter, multiply by 3.67.
Secondly, about half of the CO2 we release into the atmosphere is absorbed either by plants on land or into the ocean and tightening up those numbers is really important. As we approach the targets given in Paris, the amount of precision we need on these allowable carbon budgets – to meet the temperature changes – is going to get sharper and sharper, and so we’re going to need better climate models to address those carbon budget issues.
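The GtC-to-GtCO2 conversion mentioned in the carbon budget definition above comes from the ratio of molar masses: CO2 weighs 44 g/mol against carbon’s 12 g/mol, giving the factor of 44/12 ≈ 3.67. A minimal sketch (the 100 GtC budget is an arbitrary illustrative figure, not taken from the article):

```python
# 1 GtC corresponds to 44/12 GtCO2, from the CO2-to-carbon molar mass ratio.
GTC_TO_GTCO2 = 44.0 / 12.0  # ~3.67

def gtc_to_gtco2(gtc):
    """Convert a carbon budget in gigatonnes of carbon (GtC)
    to gigatonnes of carbon dioxide (GtCO2)."""
    return gtc * GTC_TO_GTCO2

# e.g. a hypothetical remaining budget of 100 GtC
print(round(gtc_to_gtco2(100)))  # approximately 367 GtCO2
```

The factor is simply the mass of a CO2 molecule relative to the carbon atom it contains, which is why the two budget units are interchangeable.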
One of the key uncertainties is clouds, understanding the physics behind clouds and how clouds interact with aerosol particles. That has, unfortunately, also been a key uncertainty for a long time and is likely to remain one.
In particular, better computer power [is needed] because we do have some observations and some process understanding, but they happen at very fine spatial and temporal scales, and that’s the hardest thing to model because it takes an enormous amount of computer power.
We can get better observations from things like satellite data, but a lot of that is very challenging because the uppermost level of clouds blocks everything below and then you can’t see what’s really going on. You can fly airplanes and get detailed information, but for one short period of time and one short area. Those are really challenging things to improve from an observational perspective – and require immense computer power.
I would say that, as far as advancing our ability to really look at the issue of climate change, I think one of the things we really need to do is to make our models interact more between the physical sciences and the social and economic sciences, and to really understand a little more closely the link between climate change and its drivers and impacts.
I think for us there is still some difficulty in understanding the land and sea border and certainly any advances in differentiating that land-sea contrast within the model would be an advantage for island states – especially small island states.
Certainly advances in representing topography at a finer scale – putting the mountains in the right place, achieving the right height for the small scale – would represent significant improvements for the small islands. And improvements in coastal processes, the dynamics of coastal climate would represent improvements for the small island community.
I think a key challenge is non-linear effects, or tipping points. For example, the Gulf stream system. We still don’t know how close we are to a threshold there. We know there is one because we know these non-linear phenomena are very sensitively dependent on the exact state of the system and so models still widely disagree on how stable or unstable the Gulf stream system will be under global warming in the future.
There is another effect which is the changes in the atmospheric circulation, including the jet stream. That’s one area of research that we are working on currently which has a really big impact on extreme weather events and it’s this kind of phenomena that we need to understand much better.
I’ve had a longstanding interest in palaeoclimate. The last few million years have been generally colder, with ice ages, but if you go way back in time, for many millions of years, there were much warmer climates on Earth and we are very interested in modelling these. But it is quite difficult because of the long timescales that you have to deal with, so you can’t use the models that are used to simulate a hundred or two hundred years. You have to design models that are highly computationally efficient to study palaeoclimate.
The fundamental issue about climate change, the difficulty, is the delayed response of a system and that’s due to the ocean’s heat capacity.
But the effective heat capacity felt by the surface temperature depends on the rate of mixing of the ocean water, and I have presented evidence, from a number of different approaches, that models tend to be too diffusive – for numerical reasons, because of coarse resolution and the way motions in the ocean are parameterised. They can tend to exaggerate the mixing and, therefore, make the effective heat capacity larger.
As we’ve gone through time, climate models have got more complex, so that’s not only been due to increasing resolution, but also adding more processes. I think we need to continue both of those trends. We need to be adding more processes, modelling new things and also we need to be modelling finer detail so we can better explain the climate. We need to better explain the impacts of climate on the systems we care about, such as the human systems, ecological, carbon cycle systems. If you make the model better, if you make it look more like reality, it means that your knowledge of how the system will change gets better.
Dr Ronald Stouffer, senior research climatologist and group head of the Climate and Ecosystems Group at the Geophysical Fluid Dynamics Laboratory (GFDL), Princeton University:
The top priorities over the next decade for improving climate models are:
Evaluating and understanding climate response to changes in radiative forcing (greenhouse gases and aerosols).
Improving the cloud simulation (3D distribution and radiative properties). This is of first importance for better estimates of the climate sensitivity.
Improving the ocean simulation particularly in the Southern Ocean. Models do a fairly poor job currently and this is a very important region for the uptake of heat and carbon from human activities.
Higher model resolution. This helps provide improved local information on climate change. It also reduces the influence of physical parameterisations in models (a known problem).
Improving the carbon simulation and modelling in general. Modelling land carbon changes is a particular challenge due to the importance of small local scales.
There is a signal-to-noise problem evident in climate models which means that, in some mid-latitude regions, predicted climate signals are too weak. This possibility was recognised in the past and has actually been present in climate models for many years.
It is the top priority of my research group to try to solve this problem to improve our climate predictions and, depending on the answer, it could affect predictions on all timescales from medium range forecasts, through monthly, seasonal, decadal and even climate change projections.
Climate modelling is an enormous undertaking. I think few people realise just how complex these models are. As soon as there’s a new supercomputer available anywhere in the world, there’s a climate model waiting to be run on it, because we know that many of our physical processes right now are not being directly represented. They have to be “parameterised” because they occur at spatial or time scales that are smaller than the grids and time steps that we use. So the smaller the spatial grid and the smaller the time step we use in the model, the better we’re able to actually explicitly resolve the physical processes in the climate.
We’re also learning that natural variability is really important when we’re looking over time scales of anywhere from the next year or two to even a couple of decades in the future. Natural variability is primarily controlled by exchange of heat between the ocean and the atmosphere, but it is an extremely complex process and if we want to develop better near-term predictive skills – which is looking not at what’s going to happen in the next three months but what’s going to happen between the next year and 10 years or 20 years or so – if we want to expand our understanding there, we have to understand natural variability better than we do today.
I think the major gaps would include the ability to trust climate models at finer and finer scales, which, ultimately, is what people want to know. At a global scale we understand the physics very well about how greenhouse gases trap energy in the atmosphere, and so the models do a pretty good job of the global scale energy balance and how the world as a planet warms up. We can recreate the 20th century global climate patterns pretty well and we know why that is.
When we start to get into the details that really affect people, that’s where the models are not yet perfect, and that’s partly because we can’t represent them in enough fine scale detail. There is always a big push as soon as we get a new computer to try and increase the resolution that we represent, and we’ve seen them get better and better in that respect over the years.
The other aspect, and something that I work on, is increasingly trying to look at the interactions between climate and ecosystems, and what that allows us to do is to inform climate negotiations around things like carbon budgets – how much CO2 we can emit to stay within a certain target.
In my view, the highest priority is to have more people involved in the model development process so that more new ideas can be generated and implemented. This has proven difficult.
Other priorities would be to improve the physical realism of the models, in particular the representation of precipitation and clouds, and to significantly increase the model development “workforce” in the relevant areas.
It’s worth saying at first that they are remarkably good already at simulating the general patterns of climate, the general circulation of the atmosphere and the past trend of global temperatures. But we still see systematic biases in some of the models, so we often have to correct for these biases when using the model output for impact studies. It would be good to be able to eliminate that, because it introduces another level of uncertainty and inconsistency. If we could have detailed, realistic regional climates that don’t require this adjustment, that would be a major victory.
The other thing we need to do is to find ways to represent the other aspects of the climate system that aren’t always captured in the climate models [such as] tipping points and non-linearities. These rarely, if ever, emerge from the models on their own. You can artificially force the models to do this, but we know these things have happened in the real climate in the past. We need to find ways to reproduce these in a completely realistic way so that we can do a full risk assessment of future climate change, including these surprises that may occur.
I think one of the important issues is modelling the consequences for the climate system of full 1.5C pathways, and maybe even more than that. This would allow us to begin to understand how we could prevent some of the major tipping point problems that we can already foresee coming, even for 1.5C warming, and to try and understand what it would take to protect and sustain important natural ecosystems such as coral reefs, or to prevent ice sheet disintegration.
One of the underdeveloped areas, including in IPCC assessment reports, is evaluating the avoidable impacts [of climate change]. It’s very hard to find a coherent survey of avoidable impacts in an IPCC assessment report. I think we need to be getting at that so we can better inform policymakers about what the benefits are of taking some of the big transformational steps that, while economically beneficial, are definitely going to cause political problems as incumbent power producers and others try and defend their turf.
For me, broadening the representation of different factors would have a higher priority than deepening the existing process representation. I think quite a number of key Earth processes are still not very well represented, including things like the role of land use, but also pollution and nutrients. I would see that as a high priority. Activities are going on in this area, no doubt. But I personally think that the balance might shift still in this direction.
Integrated Assessment Models: IAMs are computer models that analyse a broad range of data – e.g. physical, economic and social – to produce information that can be used to help decision-making. For climate research, specifically, IAMs are typically used to project future greenhouse gas emissions and climate impacts, and the benefits and costs of policy options that could be implemented to tackle them.
Second, ensuring somehow that we keep older versions of the models “active”. The idea sounds attractive to me that, in addition to having ever better models – which remain slow despite progress in computing power – we would also have the ability to do fast model runs. These could be used for more uncertainty runs, larger ensembles and exploring a wider range of scenario types.
Finally, I would expect that there will be a further representation of the human system in Earth system models (ESMs) and that integrated assessment models (IAMs) will try to be more geographically explicit – in order to better represent local processes, such as water management and the presence of renewable energy. Together, these trends might mean an agenda of merging ESMs and IAMs more closely. I think this is interesting, but, at the same time, it is also very challenging, as both communities are already rather interdisciplinary (so one would risk having models based on different philosophies that are too complex to understand the results).
Scientists have presented a new, narrower estimate of the “climate sensitivity” – a measure of how much the climate could warm in response to the release of greenhouse gases.
The latest assessment report from the Intergovernmental Panel on Climate Change (IPCC) estimates that climate sensitivity is close to 3C, with a “likely” range of 1.5 to 4.5C.
The new study, published in Nature, refines this estimate to 2.8C, with a corresponding range of 2.2 to 3.4C. If correct, the new estimates could reduce the uncertainty surrounding climate sensitivity by 60%.
The narrower range suggests that global temperature rise is “going to shoot over 1.5C” above pre-industrial levels, the lead author tells Carbon Brief, but “we might be able to avoid 2C”. Meeting either limit will likely require negative emissions technologies that can remove CO2 from the atmosphere, he says.
The new estimate is another “brick in the wall” of scientists’ understanding of climate sensitivity, another scientist tells Carbon Brief, and “the best-informed views will be reached by multiple lines of evidence”.
Climate sensitivity is the amount of warming that can be expected in response to the concentration of CO2 in the atmosphere reaching double the level observed in pre-industrial times.
The research makes a new estimate of the “equilibrium” climate sensitivity (ECS) – that is, the amount of warming expected to occur once the full impact of the extra greenhouse gases released has played out. This measure includes the impact of warming on long-term climate feedback loops, which can take decades, or even centuries, to materialise.
The value of ECS is one of the big climate change questions that scientists are still trying to address.
It is important because understanding how sensitive the Earth is to CO2 could help us to estimate how much the planet could warm in response to greenhouse gases, explains Prof Peter Cox, lead author of the new paper and a climate scientist at the University of Exeter. He tells Carbon Brief:
“The issue about the equilibrium climate sensitivity is the range that has been given in successive IPCC reports – 1.5 to 4C – is a range that is essentially ‘climate change we could probably adapt to’ at the 1.5C end and ‘climate change we probably can’t adapt to’ at the 4C end. So that uncertainty has a huge impact on impeding the focused effort to mitigate climate change and adapt.”
The new findings indicate that the value of ECS could be close to 2.8C, says Cox:
“We get a value with a ‘likely’ range, which means there’s a 66% probability that it’s in that range of 2.2 to 3.4C with a central estimate of 2.8C. That’s not so far from the central estimate of the IPCC which is 3C, but the range is much reduced, from 1.5 to 4C, to 2.2 to 3.4C. What that means is we can rule out very low climate sensitivities and we can rule out very high climate sensitivities.”
Capturing a signal
There are a number of techniques that scientists can use to work out what ECS could be.
One method is to look at how Earth has responded to natural greenhouse gas changes in its geological past to try to work out how it might respond to future global warming.
Another method used by scientists involves matching global surface temperatures with the global warming trend over the past century to try and work out sensitivity from how the planet is responding. (This is what is known as the “energy budget model” approach.)
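The energy budget approach can be written as a simple formula. A sketch of its standard form (the symbols here are as commonly defined in the wider literature, not taken from the new paper): the ECS is estimated by scaling the observed warming, ΔT, by the ratio of the forcing from a doubling of CO2, F₂ₓ, to the forcing accumulated so far, ΔF, minus the Earth’s current energy imbalance, ΔN:

```latex
\mathrm{ECS} \;\approx\; F_{2\times}\,\frac{\Delta T}{\Delta F - \Delta N}
```

Intuitively, the denominator is the portion of the forcing that the surface has already “responded” to, so the ratio extrapolates today’s warming out to a full doubling.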
The new study uses a similar method to the energy budget model approach. However, instead of matching the global temperature record to global warming, the new research attempts to match temperature records to natural, long-term fluctuations in temperature.
Looking at natural variability rather than the warming trend allowed the scientists to exclude a range of uncertainties associated with human-caused climate change, Cox explains:
“Normally the way this [research] is done is by looking at the historical record warming, which makes sense. We’ve seen 1C of warming, roughly speaking, and so you may think that must tell you how sensitive the climate is. But it doesn’t. The main reason it doesn’t is that we don’t know how much energy or heat we’ve put in the system in terms of radiative forcing – greenhouse gases.”
To understand how historical temperature fluctuations have changed over the past century, the researchers first removed the global warming trend from a set of observational temperature data.
They then compared this data to results from a series of 22 global climate models. Some models had lower climate sensitivity, while others had higher climate sensitivity.
The results are shown on the chart below. On the chart, black dots show natural fluctuations in temperature from 1940 to 2020. Each line represents the results from one model, with magenta lines showing results from higher sensitivity models and green showing the results from models with lower climate sensitivity.
Natural temperature variability (black dots) compared to simulations of variability from climate models with higher climate sensitivity (magenta) and lower climate sensitivity (green). Each line represents the results from one model. Source: Cox et al. (2018)
The chart indicates that higher sensitivity models generally predict more warming than has been observed over the past 50 years, while lower sensitivity models either closely match the observed trend or estimate a lower amount of warming.
Together, these results allowed the researchers to produce their narrower range.
Understanding ECS could help scientists to work out how much the climate is likely to warm in the future, Cox says, which, in turn, could allow policymakers to estimate how easy it will be to meet the goals of the Paris Agreement.
Climate sensitivity, as defined above, is the amount of warming that will occur after CO2 concentrations become twice as high as they were in pre-industrial times. Pre-industrial CO2 concentrations were about 280 parts per million (ppm) and levels are currently at around 404ppm.
Because CO2’s warming effect grows with the logarithm of its concentration, today’s levels already represent more than half of the forcing of a full doubling. This means that, if humans stopped releasing CO2 today, the world should still expect to experience more than half of the warming dictated by the ECS. Cox explains:
“That means that if you’ve got an ECS of 4C, then you’ve pretty much already missed the 2C target of Paris. So the ECS value has a big impact on the feasibility of Paris.”
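The “more than half” figure can be checked with the standard logarithmic approximation for CO2 forcing – a quick sketch, using the concentration figures quoted in the article:

```python
import math

C_PREINDUSTRIAL = 280.0  # ppm, pre-industrial CO2 concentration
C_NOW = 404.0            # ppm, current concentration quoted in the article

# CO2's warming effect grows with the logarithm of its concentration,
# so the fraction of a "doubling" already realised is ln(C/C0) / ln(2).
fraction = math.log(C_NOW / C_PREINDUSTRIAL) / math.log(2)
print(round(fraction, 2))  # 0.53 - just over half of a doubling's forcing
```

So even though 404ppm is less than 420ppm (the arithmetic halfway point to 560ppm), in forcing terms the world is already past the halfway mark.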
If the results are correct and the climate sensitivity is 2.8C, then it is likely that the world will fail to limit warming to 1.5C above pre-industrial levels, which is the aspirational goal of the Paris Agreement, Cox adds:
“Our numbers suggest that we’re going to shoot over 1.5C. We might be able to avoid 2C, it will take a huge effort to do so. I think, to achieve 1.5C, you definitely have to think of negative emissions technologies and, if you want 2C, you need to think about it, too, even if it’s only a short-term stop gap.”
Negative emissions technologies are a group of techniques – many of which still remain hypothetical – that aim to remove CO2 from the air in an attempt to tackle climate change.
The study’s results “reduce the probability of very high climate sensitivity”, which should “reassure” those taking steps to meet the goals of the Paris Agreement, says Prof Gabi Hegerl FRS, a climate system scientist from the University of Edinburgh, who was not involved in the research. She tells Carbon Brief:
“It also emphasises that climate change won’t be small, so reducing climate change will continue to require very sharp reductions of emissions leading towards ceasing emissions.”
Reducing uncertainty surrounding climate sensitivity should help policymakers to refocus their efforts on tackling climate change, says Cox:
“If you can reduce the uncertainty, which I think we can, then you can focus your mind on what needs to be done. We can rule out very low values, where you might say, ‘don’t worry about it, we’ll adapt’ and you can rule out very high values that might lead you to a sort of hopelessness where you think, ‘it’s too late’. We are still in that zone where action is urgent, but not too late. But it is very urgent.”
‘Brick in the wall’
The new paper adds to the extensive research around the potential value of ECS.
Despite debate among scientists about the best way to estimate climate sensitivity, each new research paper can be seen as a “brick in the wall” of our understanding, says Prof Andrew Dessler, an atmospheric scientist from Texas A&M University, who was not involved in the research. He tells Carbon Brief:
“I don’t think any single paper will by itself redefine what we think about ECS. Rather, the best-informed views will be reached by multiple lines of evidence, with care taken in relating the inferred ECS from different methods.”
For example, the paper does not discuss how natural events, such as El Niño, could impact temperature fluctuations, he tells Carbon Brief:
“The approach mixes up natural variability due to El Niño, decadal variations, volcanic eruptions and air pollutants, and we know that models have different biases with respect to each of these. There are also theoretical problems with applying their statistical approach in this way, even though it seems to work. So it is not clear whether to put more weight on this study, or the previous ones suggesting even higher sensitivity.”
El Niño: Every five years or so, a change in the winds causes a shift to warmer than normal sea surface temperatures in the equatorial Pacific Ocean – known as El Niño. Together with its cooler counterpart, La Niña, this is known as the El Niño Southern Oscillation (ENSO) and is responsible for most of the fluctuations in temperature and rainfall patterns we see from one year to the next.
In addition, the research may have made “significant” errors in its attempts to reduce uncertainty surrounding climate sensitivity, says Dr Patrick Brown, a climate scientist from the Carnegie Institution for Science in Stanford, California.
Last month, Brown was the lead author of a Nature paper which found that ECS could be higher than previous estimates have suggested – their central estimate was 3.7C. Brown tells Carbon Brief:
“They appear to be comparing the IPCC ECS ‘likely’ range of 1.5 to 4.5C to their constrained ECS model range. This is not an appropriate comparison because the 16 models that they use do not span the entire uncertainty range of ECS.
“For example, no model that they investigate has an ECS below 2.2C. Thus their claim that they reduced uncertainty in ECS by 60% comes partly from the coincidence of which models happened to be included in their study.”
“By contrast, Cox et al started from climate-model values that are at the upper end of the IPCC range and used evidence to effectively rule out catastrophically high values.”
Forster adds that the methods used in the present study are “enviably simple” and will leave climate scientists asking, “why didn’t I think of that?” He says:
“In my view, Cox and colleagues’ estimate and the estimates produced by analysing the historical energy budget carry the most weight, because they are based on simpler physical theories of climate forcing and response, and do not directly require the use of a climate model that correctly represents clouds.”
(Improving the representation of clouds in climate models should be a major priority for future research, scientists recently told Carbon Brief.)
In the first article of a week-long series focused on climate modelling, Carbon Brief explains in detail how scientists use computers to understand our changing climate…
The use of computer models runs right through the heart of climate science.
From helping scientists unravel cycles of ice ages hundreds of thousands of years ago to making projections for this century or the next, models are an essential tool for understanding the Earth’s climate.
But what is a climate model? What does it look like? What does it actually do? These are all questions that anyone outside the world of climate science might reasonably ask.
Carbon Brief has spoken to a range of climate scientists in order to answer these questions and more. What follows is an in-depth Q&A on climate models and how scientists use them. You can use the links below to navigate to a specific question.
A global climate model typically contains enough computer code to fill 18,000 pages of printed text; it will have taken hundreds of scientists many years to build and improve; and it can require a supercomputer the size of a tennis court to run.
The models themselves come in different forms – from those that just cover one particular region of the world or part of the climate system, to those that simulate the atmosphere, oceans, ice and land for the whole planet.
The output from these models drives forward climate science, helping scientists understand how human activity is affecting the Earth’s climate. These advances have underpinned climate policy decisions on national and international scales for the past five decades.
In many ways, climate modelling is just an extension of weather forecasting, but focusing on changes over decades rather than hours. In fact, the UK’s Met Office Hadley Centre uses the same “Unified Model” as the basis for both tasks.
The vast computing power required for simulating the weather and climate means today’s models are run using massive supercomputers.
The Met Office Hadley Centre’s three new Cray XC40 supercomputers, for example, are together capable of 14,000 trillion calculations a second. The timelapse video below shows the third of these supercomputers being installed in 2017.
Fundamental physical principles
So, what exactly goes into a climate model? At their most basic level, climate models use equations to represent the processes and interactions that drive the Earth’s climate. These cover the atmosphere, oceans, land and ice-covered regions of the planet.
The models are based on the same laws and equations that underpin scientists’ understanding of the physical, chemical and biological mechanisms going on in the Earth system.
For example, scientists want climate models to abide by fundamental physical principles, such as the first law of thermodynamics (also known as the law of conservation of energy), which states that in a closed system, energy cannot be lost or created, only changed from one form to another.
Then there are the equations that describe the dynamics of what goes on in the climate system, such as the Clausius-Clapeyron equation, which characterises the relationship between the temperature of the air and its maximum water vapour pressure.
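In its commonly used approximate form, the Clausius-Clapeyron equation relates the saturation vapour pressure of water, e_s, to temperature, T (here L is the latent heat of vaporisation and R_v the gas constant for water vapour):

```latex
\frac{\mathrm{d}e_s}{\mathrm{d}T} \;=\; \frac{L\,e_s}{R_v\,T^{2}}
```

It implies that the atmosphere’s capacity to hold water vapour rises roughly 7% for every 1C of warming – one reason a warmer world is also, on average, a wetter one.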
The most important of these are the Navier-Stokes equations of fluid motion, which capture the speed, pressure, temperature and density of the gases in the atmosphere and the water in the ocean.
The Navier-Stokes equations for “incompressible” flow in three dimensions (x, y and z). (Although the air in our atmosphere is technically compressible, it is relatively slow-moving and is, therefore, treated as incompressible in order to simplify the equations.). Note: this set of equations is simpler than the ones a climate model will use because they need to calculate flows across a rotating sphere.
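For reference, the incompressible Navier-Stokes equations described in the caption can be written compactly in vector form, with u the velocity field, p the pressure, ρ the density, ν the kinematic viscosity and f any body forces (such as gravity):

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  \;=\; -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} \;=\; 0
```

As the caption notes, a climate model’s version is more involved, since it must also account for flow on a rotating sphere (adding, for example, the Coriolis terms).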
Scientists translate each of these physical principles into equations that make up line after line of computer code – often running to more than a million lines for a global climate model.
The code in global climate models is typically written in the programming language Fortran. Developed by IBM in the 1950s, Fortran was the first “high-level” programming language. This means that rather than being written in a machine language – typically a stream of numbers – the code is written much like a human language.
You can see this in the example below, which shows a small section of code from one of the Met Office Hadley Centre models. The code contains commands such as “IF”, “THEN” and “DO”. When the model is run, it is first translated (automatically) into machine code that the computer understands.
A section of code from HadGEM2-ES (as used for CMIP5) in Fortran programming language. The code is from within the plant physiology section that starts to look at how the different vegetation types absorb light and moisture. Credit: Dr Chris Jones, Met Office Hadley Centre
There are now many other programming languages available to climate scientists, such as C, Python, R, Matlab and IDL. However, the last four of these are applications that are themselves written in a more fundamental language (such as C or Fortran) and, therefore, are relatively slow to run. Fortran and C are generally used today for running a global model quickly on a supercomputer.
Throughout the code in a climate model are equations that govern the underlying physics of the climate system, from how sea ice forms and melts on Arctic waters to the exchange of gases and moisture between the land surface and the air above it.
The figure below shows how more and more climate processes have been incorporated into global models over the decades, from the mid-1970s through to the fourth assessment report (“AR4”) of the Intergovernmental Panel on Climate Change (IPCC), published in 2007.
Illustration of the processes added to global climate models over the decades, from the mid-1970s, through the first four IPCC assessment reports: first (“FAR”) published in 1990, second (“SAR”) in 1995, third (“TAR”) in 2001 and fourth (“AR4”) in 2007. (Note, there is also a fifth report, which was completed in 2014). Source: IPCC AR4, Fig 1.2
So, how does a model go about calculating all these equations?
Because of the complexity of the climate system and limitation of computing power, a model cannot possibly calculate all of these processes for every cubic metre of the climate system. Instead, a climate model divides up the Earth into a series of boxes or “grid cells”. A global model can have dozens of layers across the height and depth of the atmosphere and oceans.
The image below shows a 3D representation of what this looks like. The model then calculates the state of the climate system in each cell – factoring in temperature, air pressure, humidity and wind speed.
Illustration of grid cells used by climate models and the climatic processes that the model will calculate for each cell (bottom corner). Source: NOAA GFDL
For processes that happen on scales that are smaller than the grid cell, such as convection, the model uses “parameterisations” to fill in these gaps. These are essentially approximations that simplify each process and allow them to be included in the model. (Parameterisation is covered in the question on model tuning below.)
The size of the grid cells in a model is known as its “spatial resolution”. A relatively-coarse global climate model typically has grid cells that are around 100km in longitude and latitude in the mid-latitudes. Because the Earth is a sphere, the cells for a grid based on longitude and latitude are larger at the equator and smaller at the poles. However, it is increasingly common for scientists to use alternative gridding techniques – such as cubed-sphere and icosahedral – which don’t have this problem.
A high-resolution model will have more, smaller boxes. The higher the resolution, the more specific climate information a model can produce for a particular region – but this comes at a cost of taking longer to run because the model has more calculations to make.
The figure below shows how the spatial resolution of models improved between the first and fourth IPCC assessment reports. You can see how the detail in the topography of the land surface emerges as the resolution is improved.
Increasing spatial resolution of climate models used through the first four IPCC assessment reports: first (“FAR”) published in 1990, second (“SAR”) in 1995, third (“TAR”) in 2001 and fourth (“AR4”) in 2007. (Note, there is also a fifth report, which was completed in 2014). Source: IPCC AR4, Fig 1.2
A similar compromise has to be made for the “time step” of how often a model calculates the state of the climate system. In the real world, time is continuous, yet a model needs to chop time up into bite-sized chunks to make the calculations manageable.
“The role of the leapfrog in models is to march the weather forward in time, to allow predictions about the future to be made. In the same way that a child in the playground leapfrogs over another child to get from behind to in front, the model leapfrogs over the present to get from the past to the future.”
In other words, the model takes the climate information it has from the previous and present time steps to extrapolate forwards to the next one, and so on through time.
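The leapfrog scheme is simple enough to sketch in a few lines. Below is a toy implementation (not taken from any real climate model) applied to a basic oscillation, the kind of idealised test problem used to check a time-stepping scheme:

```python
import math

def leapfrog(f, x0, dt, nsteps):
    """Integrate dx/dt = f(x) with the leapfrog scheme:
    x[n+1] = x[n-1] + 2*dt*f(x[n]).
    The scheme needs two starting values, so the second point is
    bootstrapped with a single forward-Euler step."""
    prev = list(x0)
    curr = [p + dt * d for p, d in zip(prev, f(prev))]
    for _ in range(nsteps - 1):
        # Leap over the present: combine the past state with the
        # present tendency to land on the future state.
        nxt = [p + 2 * dt * d for p, d in zip(prev, f(curr))]
        prev, curr = curr, nxt
    return curr

# A simple oscillation, x'' = -x, written as two first-order
# equations: u' = v, v' = -u. The exact solution is u = cos(t).
osc = lambda s: [s[1], -s[0]]

state = leapfrog(osc, [1.0, 0.0], dt=0.01, nsteps=100)  # integrate to t = 1
print(state[0], math.cos(1.0))  # both close to cos(1) = 0.5403...
```

In a real model, `f` would be the full set of tendencies – winds, temperatures, moisture – computed for every grid cell, but the leapfrogging logic is the same.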
As with the size of grid cells, a smaller time step means the model can produce more detailed climate information. But it also means the model has more calculations to do in every run.
For example, calculating the state of the climate system for every minute of an entire century would require over 50m calculations for every grid cell – whereas only calculating it for each day would take 36,500. That’s quite a range – so how do scientists decide what time step to use?
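The arithmetic behind those figures is straightforward:

```python
# Time steps needed to cover one century at two different resolutions
minutes = 100 * 365 * 24 * 60   # 1-minute steps: 52,560,000 ("over 50m")
days = 100 * 365                # daily steps: 36,500
print(f"{minutes:,} vs {days:,}")
```

Every one of those steps requires the full set of equations to be solved for every grid cell, which is why the choice of time step dominates a model’s running time.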
The answer comes down to finding a balance, Williams tells Carbon Brief:
“Mathematically speaking, the correct approach would be to keep decreasing the time step until the simulations are converged and the results stop changing. However, we normally lack the computational resources to run the models with a time step this small. Therefore, we are forced to tolerate a larger time step than we would ideally like.”
For the atmosphere component of climate models, a time step of around 30 minutes “seems to be a reasonable compromise” between accuracy and computer processing time, says Williams:
“Any smaller and the improved accuracy would not be sufficient to justify the extra computational burden. Any larger and the model would run very quickly, but the simulation quality would be poor.”
Bringing all these pieces together, a climate model can produce a representation of the whole climate system at 30-minute intervals over many decades or even centuries.
As Dr Gavin Schmidt, director of the NASA Goddard Institute for Space Studies, describes in his TED talk in 2014, the interactions of small-scale processes in a model mean it creates a simulation of our climate – everything from the evaporation of moisture from the Earth’s surface and formation of clouds, to where the wind carries them and where the rain eventually falls.
Schmidt calls these “emergent properties” in his talk – features of the climate that aren’t specifically coded in the model, but are simulated by the model as a result of all the individual processes that are built in.
It is akin to the manager of a football team. He or she picks the team, chooses the formation and settles on the tactics, but once the team is out on the pitch, the manager cannot dictate if and when the team scores or concedes a goal. In a climate model, scientists set the ground rules based on the physics of the Earth system, but it is the model itself that creates the storms, droughts and sea ice.
So to summarise: scientists put the fundamental physical equations of the Earth’s climate into a computer model, which is then able to reproduce – among many other things – the circulation of the oceans, the annual cycle of the seasons, and the flows of carbon between the land surface and the atmosphere.
You can watch the whole of Schmidt’s talk below.
While the above broadly explains what a climate model is, there are many different types. Read on to the question below to explore these in more detail.
The earliest and most basic numerical climate models are Energy Balance Models (EBMs). EBMs do not simulate the climate, but instead consider the balance between the energy entering the Earth’s atmosphere from the sun and the heat released back out to space. The only climate variable they calculate is surface temperature. The simplest EBMs only require a few lines of code and can be run in a spreadsheet.
Many of these models are “zero-dimensional”, meaning they treat the Earth as a whole; essentially, as a single point. Others are 1D, such as those that also factor in the transfer of energy across different latitudes of the Earth’s surface (which is predominantly from the equator to the poles).
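A zero-dimensional EBM really can fit in a few lines. The sketch below balances the sunlight the planet absorbs against the heat it radiates back to space (using the Stefan-Boltzmann law) and solves for the single temperature at which the two match:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
S0 = 1361.0        # incoming solar radiation at Earth's orbit (W m^-2)
ALBEDO = 0.3       # fraction of sunlight reflected straight back to space

# Energy balance: absorbed sunlight = emitted thermal radiation
#   S0 * (1 - albedo) / 4  =  SIGMA * T^4
# (the factor of 4 spreads the intercepted sunlight over the whole sphere)
absorbed = S0 * (1 - ALBEDO) / 4
T = (absorbed / SIGMA) ** 0.25
print(round(T, 1))  # ~255 K, about -18C
```

That answer – around -18C, far colder than the observed average of roughly 15C – is itself instructive: the gap is the warming supplied by the natural greenhouse effect, which this simplest of models deliberately leaves out.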
A step along from EBMs are Radiative Convective Models, which simulate the transfer of energy through the height of the atmosphere – for example, by convection as warm air rises. Radiative Convective Models can calculate the temperature and humidity of different layers of the atmosphere. These models are typically 1D – only considering energy transport up through the atmosphere – but they can also be 2D.
The next level up are General Circulation Models (GCMs), also called Global Climate Models, which simulate the physics of the climate itself. This means they capture the flows of air and water in the atmosphere and/or the oceans, as well as the transfer of heat.
Early GCMs only simulated one aspect of the Earth system – such as in “atmosphere-only” or “ocean-only” models – but they did this in three dimensions, incorporating many kilometres of height in the atmosphere or depth of the oceans in dozens of model layers.
More sophisticated “coupled” models have brought these different aspects together, linking together multiple models to provide a comprehensive representation of the climate system. Coupled atmosphere-ocean general circulation models (or “AOGCMs”) can simulate, for example, the exchange of heat and freshwater between the land and ocean surface and the air above.
The infographic below shows how modellers have gradually incorporated individual model components into global coupled models over recent decades.
Graphic by Rosamund Pearce; based on the work of Dr Gavin Schmidt.
Over time, scientists have gradually added in other aspects of the Earth system to GCMs. These would have once been simulated in standalone models, such as land hydrology, sea ice and land ice.
The most recent subset of GCMs now incorporate biogeochemical cycles – the transfer of chemicals between living things and their environment – and how they interact with the climate system. These “Earth System Models” (ESMs) can simulate the carbon cycle, nitrogen cycle, atmospheric chemistry, ocean ecology and changes in vegetation and land use, which all affect how the climate responds to human-caused greenhouse gas emissions. They have vegetation that responds to temperature and rainfall and, in turn, changes uptake and release of carbon and other greenhouse gases to the atmosphere.
“The GCMs were the models that were used maybe in the 1980s. So these were largely put together by the atmospheric physicists, so it’s all to do with energy and mass and water conservation, and it’s all the physics of moving those around. But they had a relatively limited representation of how the atmosphere then interacts with the ocean and the land surface. Whereas an ESM tries to incorporate those land interactions and those ocean interactions, so you could regard an ESM as a ‘pimped’ version of a GCM.”
There are also Regional Climate Models (“RCMs”), which do a similar job to GCMs, but for a limited area of the Earth. Because they cover a smaller area, RCMs can generally be run more quickly and at a higher resolution than GCMs. A model with a high resolution has smaller grid cells and therefore can produce climate information in greater detail for a specific area.
RCMs are one way of “downscaling” global climate information to a local scale. This means taking information provided by a GCM or coarse-scale observations and applying it to a specific area or region. Downscaling is covered in more detail under a later question.
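The computational cost of resolution can be sketched with a little arithmetic. This is a rough illustration only – it assumes square grid cells and ignores vertical layers and the shorter timesteps that finer grids also require:

```python
EARTH_SURFACE_KM2 = 510_000_000  # approximate surface area of the Earth

def grid_cells(resolution_km):
    """Rough number of surface grid cells needed to cover the Earth
    at a given horizontal resolution (square cells, illustrative)."""
    return EARTH_SURFACE_KM2 / resolution_km ** 2

# Halving the grid spacing quadruples the number of surface cells –
# one reason regional models can afford finer grids than global ones
print(round(grid_cells(100)))  # → 51000
print(round(grid_cells(50)))   # → 204000
```

In practice the cost grows even faster than this, because finer grids also need shorter timesteps to stay numerically stable.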
Integrated Assessment Models: IAMs are computer models that analyse a broad range of data – e.g. physical, economic and social – to produce information that can be used to help decision-making. For climate research, specifically, IAMs are typically used to project future greenhouse gas emissions and climate impacts, and the benefits and costs of policy options that could be implemented to tackle them.
Finally, a subset of climate modelling involves Integrated Assessment Models (IAMs). These add aspects of society to a simple climate model, simulating how population, economic growth and energy use affect – and interact with – the physical climate.
IAMs produce scenarios of how greenhouse gas emissions may vary in future. Scientists can then run these scenarios through ESMs to generate climate change projections – providing information that can be used to inform climate and energy policies around the world.
In climate research, IAMs are typically used to project future greenhouse gas emissions and the benefits and costs of policy options that could be implemented to tackle them. For example, they are used to estimate the social cost of carbon – the monetary value of the impact, both positive and negative, of every additional tonne of CO2 that is emitted.
What are the inputs and outputs for a climate model?
If the previous section looked at what is inside a climate model, this one focuses on what scientists put into a model and get out the other side.
Climate models are run using data on the factors that drive the climate, and projections about how these might change in the future. Climate model results can run to petabytes of data, including readings every few hours across thousands of variables in space and time, from temperature to clouds to ocean salinity.
The main inputs into models are the external factors that change the amount of the sun’s energy that is absorbed by the Earth, or how much is trapped by the atmosphere.
These external factors are called “forcings”. They include changes in the sun’s output and in long-lived greenhouse gases – such as CO2, methane (CH4), nitrous oxide (N2O) and halocarbons – as well as tiny particles called aerosols, which are emitted by fossil fuel burning, forest fires and volcanic eruptions. Aerosols reflect incoming sunlight and influence cloud formation.
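The link between a greenhouse gas concentration and its forcing is often approximated with a simple logarithmic expression – the widely cited formula from Myhre et al. (1998) for CO2 is sketched below in Python:

```python
import math

def co2_forcing(concentration_ppm, baseline_ppm=280.0):
    """Approximate radiative forcing (W/m2) from a CO2 concentration,
    using the widely cited simplified expression F = 5.35 * ln(C/C0),
    with a pre-industrial baseline of around 280ppm."""
    return 5.35 * math.log(concentration_ppm / baseline_ppm)

# Doubling CO2 from the pre-industrial baseline gives roughly 3.7 W/m2
print(round(co2_forcing(2 * 280.0), 2))  # → 3.71
```

Full models compute radiative transfer in far more detail, but this logarithmic shape is why each successive doubling of CO2 adds a similar amount of forcing.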
Typically, all these individual forcings are run through a model either as a best estimate of past conditions or as part of future “emission scenarios”. These are potential pathways for the concentration of greenhouse gases in the atmosphere, based on how technology, energy and land use change over the centuries ahead.
Today, most model projections use one or more of the “Representative Concentration Pathways” (RCPs), which provide plausible descriptions of the future, based on socio-economic scenarios of how global society grows and develops. You can read more about the different pathways in this earlier Carbon Brief article.
Models also use estimates of past forcings to examine how the climate changed over the past 200, 1,000, or even 20,000 years. Past forcings are estimated using evidence of changes in the Earth’s orbit, historical greenhouse gas concentrations, past volcanic eruptions, changes in sunspot counts, and other records of the distant past.
Then there are climate model “control runs”, where radiative forcing is held constant for hundreds or thousands of years. This allows scientists to compare the modelled climate with and without changes in human or natural forcings, and assess how much “unforced” natural variability occurs.
Climate models generate a nearly complete picture of the Earth’s climate, including thousands of different variables across hourly, daily and monthly timeframes.
These outputs include temperatures and humidity of different layers of the atmosphere from the surface to the upper stratosphere, as well as temperatures, salinity and acidity (pH) of the oceans from the surface down to the sea floor.
Models also produce estimates of snowfall, rainfall, snow cover and the extent of glaciers, ice sheets and sea ice. They generate wind speed, strength and direction, as well as climate features, such as the jet stream and ocean currents.
More unusual model outputs include cloud cover and height, along with more technical variables, such as surface upwelling longwave radiation – how much energy is emitted by the surface back up to the atmosphere – or how much sea salt comes off the ocean during evaporation and is accumulated on land.
What types of experiments do scientists run on climate models?
Climate models are used by scientists to answer many different questions, including why the Earth’s climate is changing and how it might change in the future if greenhouse gas emissions continue.
Models can help work out what has caused observed warming in the past, as well as how big a role natural factors play compared to human factors.
Scientists run many different experiments to simulate climates of the past, present and future. They also design tests to look at the performance of specific parts of different climate models. Modellers run experiments on what would happen if, say, we suddenly quadrupled CO2, or if geoengineering approaches were used to cool the climate.
Many different groups run the same experiments on their climate models, producing what is called a model ensemble. These model ensembles allow researchers to examine differences between climate models, as well as better capture the uncertainty in future projections. Experiments that modellers do as part of the Coupled Model Intercomparison Projects (CMIPs) include:
Historical runs
These historical runs are not “fit” to actual observed temperatures or rainfall, but rather emerge from the physics of the model. This means they allow scientists to compare model predictions (“hindcasts”) of the past climate to recorded climate observations. If climate models are able to successfully hindcast past climate variables, such as surface temperature, this gives scientists more confidence in model forecasts of the future.
Historical runs are also useful for determining how large a role human activity plays in climate change (called “attribution”). For example, the chart below compares two model variants against the observed climate – with only natural forcings (blue shading) and model runs with both human and natural forcings (pink shading).
Natural-only runs only include natural factors such as changes in the sun’s output and volcanoes, but they assume greenhouse gases and other human factors remain unchanged at pre-industrial levels. Human-only runs hold natural factors unchanged and only include the effects of human activities, such as increasing atmospheric greenhouse gas concentrations.
By comparing these two scenarios (and a combined “all-factors” run), scientists can assess the relative contributions to observed climate changes from human and natural factors. This helps them to figure out what proportion of modern climate change is due to human activity.
Future warming scenarios
The IPCC’s fifth assessment report focused on four future warming scenarios, known as the Representative Concentration Pathway (RCP) scenarios. These look at how the climate might change from present through to 2100 and beyond.
Many things that drive future emissions, such as population and economic growth, are difficult to predict. Therefore, these scenarios span a wide range of futures, from a business-as-usual world where little or no mitigation actions are taken (RCP6.0 and RCP8.5) to a world in which aggressive mitigation generally limits warming to no more than 2C (RCP2.6). You can read more about the different RCPs here.
These RCP scenarios specify different amounts of radiative forcings. Models use those forcings to examine how the Earth’s system will change under each of the different pathways. The upcoming CMIP6 exercise, associated with the IPCC sixth assessment report, will add four new RCP scenarios to fill in the gaps around the four already in use, including a scenario that meets the 1.5C temperature limit.
Control runs
Control runs are useful to examine how natural variability is expressed in models, in the absence of other changes. They are also used to diagnose “model drift”, where spurious long-term changes occur in the model that are unrelated to either natural variability or changes to external forcing.
If a model is “drifting” it will experience changes beyond the usual year-to-year and decade-to-decade natural variability, even though the factors affecting the climate, such as greenhouse gas concentrations, are unchanged.
Model control runs start the model during a period before modern industrial activity dramatically increased greenhouse gases. They then let the model run for hundreds or thousands of years without changing greenhouse gases, solar activity, or any other external factors that affect the climate. This differs from a natural-only run as both human and natural factors are left unchanged.
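The idea of diagnosing drift from a control run can be sketched with a toy example – the “runs” below are synthetic series, not output from any real model, and the drift rate is invented for illustration:

```python
import random

def drift_per_century(temps):
    """Least-squares linear trend of an annual temperature series,
    expressed in C per century. A trend well away from zero in a
    control run (where forcings are constant) suggests model drift."""
    n = len(temps)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(temps) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, temps))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope * 100  # per year → per century

random.seed(0)
# A stable control run: only year-to-year noise around 14C
stable = [14.0 + random.gauss(0, 0.1) for _ in range(500)]
# A drifting run: the same noise plus a spurious 0.2C/century trend
drifting = [t + 0.002 * i for i, t in enumerate(stable)]

print(round(drift_per_century(stable), 2))    # close to zero
print(round(drift_per_century(drifting), 2))  # roughly 0.2 higher
```

Real drift diagnosis looks at many variables (deep-ocean temperature especially) over much longer runs, but the principle is the same: with forcings held fixed, any persistent trend is a model artefact.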
Atmospheric model intercomparison project (AMIP) runs
Climate models include the atmosphere, land and ocean. AMIP runs effectively “turn off” everything except the atmosphere, using fixed values for the land and ocean based on observations. For example, AMIP runs use observed sea surface temperatures as an input to the model, allowing the land surface temperature and the temperature of the different layers of the atmosphere to respond.
Normally climate models will have their own internal variability – short-term climate cycles in the oceans such as El Niño and La Niña events – that occur at different times than what happens in the real world. AMIP runs allow modellers to match ocean temperatures to observations, so that internal variability in the models occurs at the same time as in the observations and changes over time in both are easier to compare.
Abrupt 4x CO2 runs
Climate model intercomparison projects, such as CMIP5, generally request that all models undertake a set of “diagnostic” scenarios to test performance across various criteria.
One of these tests is an “abrupt” increase in CO2 from pre-industrial levels to four times higher – from 280 parts per million (ppm) to 1,120ppm – holding all other factors that influence the climate constant. (For context, current CO2 concentrations are around 400ppm.) This allows scientists to see how quickly the Earth’s temperature responds to changes in CO2 in their model compared to others.
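A zero-dimensional energy-balance model – vastly simpler than any GCM, and with illustrative parameter values rather than values from any particular model – gives a feel for what an abrupt-forcing experiment measures:

```python
import math

def abrupt_response(forcing, feedback=1.2, heat_capacity=8.0, years=200):
    """Step a zero-dimensional energy-balance model, dT/dt = (F - lambda*T)/C,
    forward in time after an abrupt forcing F (W/m2) is switched on.
    feedback (lambda, W/m2 per C) and heat_capacity (C, W yr/m2 per C)
    are illustrative values, not taken from any real GCM."""
    temp = 0.0
    series = []
    for _ in range(years):
        temp += (forcing - feedback * temp) / heat_capacity
        series.append(temp)
    return series

# Abrupt 4xCO2: F = 5.35 * ln(4), roughly 7.4 W/m2
response = abrupt_response(5.35 * math.log(4))
# Warming approaches the equilibrium value F / lambda
print(round(response[-1], 1))  # → 6.2
```

In a real abrupt-4xCO2 experiment, the speed and shape of this approach to equilibrium is what scientists compare between models.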
One of 42 panels displayed throughout the Gare du Nord metro station in Paris, honouring Syukuro Manabe and his contributions to climate science, to mark the COP21 UN climate change conference in 2015. The equations were used by Manabe in his seminal climate model in the late 1960s. Credit: NOAA/Rory O’Connor.
1% CO2 runs
Another diagnostic test increases CO2 concentrations from pre-industrial levels by 1% per year, until CO2 ultimately quadruples and reaches 1,120ppm. These scenarios also hold all other factors affecting the climate unchanged.
This allows modellers to isolate the effects of gradually increasing CO2 from everything else going on in more complicated scenarios, such as changes in aerosols and other greenhouse gases such as methane.
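A quick calculation shows how long these runs take to reach quadrupling: a 1% annual rise compounds like interest, so concentrations quadruple when 1.01 raised to the number of years equals four:

```python
import math

# CO2 rising 1% per year compounds: C(n) = 280 * 1.01**n.
# Years needed to quadruple to 1,120ppm solve 1.01**n = 4:
years_to_quadruple = math.log(4) / math.log(1.01)
print(round(years_to_quadruple))  # → 139
```

So a 1%-per-year run reaches doubled CO2 after about 70 years and quadrupled CO2 after about 140.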
Palaeoclimate runs
Here, models are run for climates of the past (palaeoclimate). Models have been run for a number of different periods: the past 1,000 years; the Holocene spanning the past 12,000 years; the last glacial maximum 21,000 years ago, during the last ice age; the last interglacial around 127,000 years ago; the mid-Pliocene warm period 3.2m years ago; and the unusual period of rapid warming called the Paleocene-Eocene thermal maximum around 55m years ago.
These models use the best estimates available for factors affecting the Earth’s past climate – including solar output and volcanic activity – as well as longer-term changes in the Earth’s orbit and the location of the continents.
These palaeoclimate model runs can help researchers understand how large past swings in the Earth’s climate occurred, such as those during ice ages, and how sea level and other factors changed during periods of warming and cooling. These past changes offer a guide to the future, if warming continues.
Specialised model tests
As part of CMIP6, research groups around the world are conducting many different experiments. These include looking at the behaviour of aerosols in models, cloud formation and feedbacks, ice sheet responses to warming, monsoon changes, sea level rise, land-use changes, oceans and the effects of volcanoes.
There are more than two dozen scientific institutions around the world that develop climate models, with each centre often building and refining several different models at the same time.
The models they produce are typically – though rather unimaginatively – named after the centres themselves. Hence, for example, the Met Office Hadley Centre has developed the “HadGEM3” family of models. Meanwhile, the NOAA Geophysical Fluid Dynamics Laboratory has produced the “GFDL ESM2M” Earth system model.
That said, models are increasingly collaborative efforts, which is often reflected in their names. For example, the Hadley Centre and the wider Natural Environment Research Council (NERC) community in the UK have jointly developed the “UKESM1” Earth system model. This has the Met Office Hadley Centre’s HadGEM3 model at its core.
The fact that there are numerous modelling centres around the world going through similar processes is a “really important strand of climate research”, says Dr Chris Jones, who leads the Met Office Hadley Centre’s research into vegetation and carbon cycle modelling and their interactions with climate. He tells Carbon Brief:
“There are maybe the order of 10 or 15 kind of big global climate modelling centres who produce simulations and results. And, by comparing what the different models and the different sets of research say, you can judge which things to have confidence in, where they agree, and where we have less confidence, where there is disagreement. That guides the model development process.”
If there was just one model, or one modelling centre, there would be much less of an idea of its strengths and weaknesses, says Jones. And while the different models are related – there is a lot of collaborative research and discussion that goes on between the groups – they do not usually go to the extent of using the same lines of code. He explains:
“When we develop a new [modelling] scheme, we would publish the equations of that scheme in the scientific literature, so it’s peer reviewed. It’s publicly available and other centres can compare that with what they use.”
Below, Carbon Brief has mapped the climate modelling centres that contributed to the fifth Coupled Model Intercomparison Project (CMIP5), which fed into the IPCC’s fifth assessment report. Mouse over the individual centres in the map to find out more about them.
The majority of modelling centres are in North America and Europe. However, it is worth noting that the CMIP5 list is not an exhaustive inventory of modelling centres – particularly as it focuses on institutions with global climate models. This means the list does not include centres that concentrate on regional climate modelling or weather forecasting, says Jones:
“For example, we do a lot of collaborative work with Brazil, who concentrate their GCMs on weather and seasonal forecasting. In the past, they have even used a version of HadGEM2 to submit data to CMIP5. For CMIP6 they hope to run the Brazil Earth system model (‘BESM’).”
The Max Planck Institute for Meteorology (MPI-M) points out that the main purpose of its model licence agreement is to let it know who is using the models and to establish a way of getting in touch with the users. It says:
“[T]he MPI-M software developed must remain controllable and documented. This is the spirit behind the following licence agreement…It is also important to provide feedback to the model developers, to report about errors and to suggest improvements of the code.”
With so many institutions developing and running climate models, there is a risk that each group approaches its modelling in a different way, reducing how comparable their results will be.
This is where the Coupled Model Intercomparison Project (“CMIP”) comes in. CMIP is a framework for climate model experiments, allowing scientists to analyse, validate and improve GCMs in a systematic way.
The “coupled” in the name means that all the climate models in the project are coupled atmosphere-ocean GCMs. The Met Office’s Dr Chris Jones explains the significance of the “intercomparison” part of the name:
“The idea of an intercomparison came from the fact that many years ago different modelling groups would have different models, but they would also set them up slightly differently, and they would run different numerical experiments with them. When you come to compare the results you’re never quite sure if the differences are because the models are different or because they were set up in a different way.”
So, CMIP was designed to be a way to bring into line all the climate model experiments that different modelling centres were doing.
Since its inception in 1995, CMIP has been through several generations and each iteration becomes more sophisticated in the experiments that are being designed. A new generation comes round every 5-6 years.
In its early years, CMIP experiments included, for example, modelling the impact of a 1% annual increase in atmospheric CO2 concentrations (as mentioned above). In later iterations, the experiments incorporated more detailed emissions scenarios, such as the Representative Concentration Pathways (“RCPs”).
Setting the models up in the same way and using the same inputs means that scientists know that the differences in the climate change projections coming out of the models are down to differences in the models themselves. This is the first step in trying to understand what is causing those differences.
The number of researchers publishing papers based on CMIP data “has grown from a few dozen to well over a thousand”, says Prof Veronika Eyring, chair of the CMIP Panel, in a recent interview with Nature Climate Change.
With the model simulations for CMIP5 complete, CMIP6 is now underway, which will involve more than 30 modelling centres around the world, Eyring says.
As well as having a core set of “DECK” (Diagnostic, Evaluation, and Characterisation of Klima) modelling experiments, CMIP6 will also have a set of additional experiments to answer specific scientific questions. These are divided into individual Model Intercomparison Projects, or “MIPs”. So far, 21 MIPs have been endorsed, Eyring says:
“Proposals were submitted to the CMIP Panel and received endorsement if they met 10 community-set criteria, broadly: advancing progress on gaps identified in previous CMIP phases, contributing to the WCRP Grand Challenges, and having at least eight model groups willing to participate.”
You can see the 21 MIPs and the overall experiment design of CMIP6 in the schematic below.
Schematic of the CMIP/CMIP6 experimental design and the 21 CMIP6-Endorsed MIPs. Reproduced with permission from Simpkins (2017).
There is a special issue of the journal Geoscientific Model Development on CMIP6, with 28 published papers covering the overall project and the specific MIPs.
The results of CMIP6 model runs will form the basis of much of the research feeding into the sixth assessment report of the IPCC. However, it is worth noting that CMIP is entirely independent from the IPCC.
How do scientists validate climate models? How do they check them?
Scientists test, or “validate”, their models by comparing them against real-world observations. This might include, for example, comparing the model projections against actual global surface temperatures over the past century.
Climate models can be tested against past changes in the Earth’s climate. These comparisons with the past are called “hindcasts”, as mentioned above.
Scientists do not “tell” their models how the climate has changed in the past – they do not feed in historical temperature readings, for example. Instead, they feed in information on past climate forcings and the models generate a “hindcast” of historical conditions. This can be a useful way to validate models.
Specific events that have a large impact on the climate, such as volcanic eruptions, can also be used to test model performance. The climate responds relatively quickly to volcanic eruptions, so modellers only need to wait a few years to see whether their models accurately capture what happens after big eruptions. Studies show models accurately project changes in temperature and in atmospheric water vapour after major volcanic eruptions.
Climate models are also compared against the average state of the climate, known as the “climatology”. For example, researchers check to see if the average temperature of the Earth in winter and summer is similar in the models and reality. They also compare sea ice extent between models and observations, and may choose to use models that do a better job of representing the current amount of sea ice when trying to project future changes.
Experiments where many different models are run with the same greenhouse gas concentrations and other “forcings”, as in model intercomparison projects, provide a way to look at similarities and differences between models.
For many parts of the climate system, the average of all models can be more accurate than most individual models. Researchers have found that forecasts can show better skill, higher reliability and consistency when several independent models are combined.
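This averaging effect can be demonstrated with synthetic data. The “models” below are simply a common signal plus independent random errors – the idealised case in which the ensemble mean helps most:

```python
import random

random.seed(42)

# A synthetic "true" climate signal
truth = [random.gauss(0, 1) for _ in range(1000)]
# Five hypothetical "models": the truth plus independent errors
models = [[t + random.gauss(0, 1) for t in truth] for _ in range(5)]

def rmse(pred, obs):
    """Root-mean-square error between a prediction and observations."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

# The ensemble mean averages the five models point by point
ensemble_mean = [sum(vals) / len(vals) for vals in zip(*models)]

individual = [rmse(m, truth) for m in models]
print(round(sum(individual) / len(individual), 2))  # typical single-model error
print(round(rmse(ensemble_mean, truth), 2))         # smaller ensemble-mean error
```

With independent errors, averaging five models cuts the random error by roughly a factor of the square root of five. Real model errors are partly shared, so the gain is smaller, but the principle holds.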
One way to check if models are reliable is to compare projected future changes against how things turn out in the real world. This can be hard to do with long-term projections, however, because it would take a long time to assess how well current models perform.
Recently, Carbon Brief found that models produced by scientists since the 1970s have generally done a good job of projecting future warming. The video below shows an example of model hindcasts and forecasts compared to actual surface temperatures.
As mentioned above, scientists do not have a limitless supply of computing power at their disposal, and so it is necessary for models to divide up the Earth into grid cells to make the calculations more manageable.
This means that at each step through time, the model calculates the average climate of each grid cell. However, many processes in the climate system and on the Earth’s surface occur at scales smaller than a single cell.
For example, the height of the land surface will be averaged across a whole grid cell in a model, meaning it potentially overlooks the detail of any physical features such as mountains and valleys. Similarly, clouds can form and dissipate at scales that are much smaller than a grid cell.
To solve this problem, these variables are “parameterised”, meaning their values are defined in the computer code rather than being calculated by the model itself.
The graphic below shows some of the processes that are typically parameterised in models.
Parameterisations may also be used as a simplification where a climate process isn’t well understood. Parameterisations are one of the main sources of uncertainty in climate models.
A list of 20 climate processes and properties that typically need to be parameterised within global climate models. Image courtesy of MetEd, The COMET Program, UCAR.
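As a concrete illustration, a classic diagnostic cloud scheme (in the style of Sundqvist) turns a grid cell’s relative humidity into a cloud fraction. The critical-humidity threshold here is an illustrative, tunable value, not one from any particular model:

```python
import math

def cloud_fraction(relative_humidity, rh_crit=0.8):
    """Sundqvist-style diagnostic cloud fraction for one grid cell.
    Below a critical relative humidity no cloud forms; above it,
    cloud cover grows towards 1 as the cell nears saturation.
    rh_crit is a tunable parameter (illustrative value)."""
    if relative_humidity <= rh_crit:
        return 0.0
    return 1.0 - math.sqrt((1.0 - relative_humidity) / (1.0 - rh_crit))

print(cloud_fraction(0.7))  # → 0.0 (too dry for cloud)
print(cloud_fraction(1.0))  # → 1.0 (saturated cell is fully cloudy)
```

A scheme like this stands in for cloud processes far smaller than the grid cell – and the threshold `rh_crit` is exactly the kind of parameter that modellers later tune.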
In many cases, it is not possible to narrow down parameterised variables into a single value, so the model needs to include an estimation. Scientists run tests with the model to find the value – or range of values – that allows the model to give the best representation of the climate.
This complex process is known variously as model “tuning” or “calibration”. While it is a necessary part of climate modelling, it is not unique to it. In 1922, for example, a Royal Society paper on theoretical statistics identified “parameter estimation” as one of three steps in modelling.
Dr James Screen, assistant professor in climate science at the University of Exeter, describes how scientists might tune their model for the albedo (reflectivity) of sea ice. He tells Carbon Brief:
“In a lot of sea ice models, the albedo of sea ice is a parameter that is set to a particular value. We don’t know the ‘correct’ value of the ice albedo. There is some uncertainty range associated with observations of albedo. So whilst developing their models, modelling centres may experiment with slightly different – but plausible – parameter values in an attempt to model some basic features of the sea ice as closely as possible to our best estimates from observations. For example, they might want to make sure the seasonal cycle looks right or there is roughly the right amount of ice on average. This is tuning.”
If all parameters were 100% certain, then this calibration would not be necessary, Screen notes. But scientists’ knowledge of the climate is not perfect, because the evidence they have from observations is incomplete. Therefore, they need to test their parameter values in order to give sensible model output for key variables.
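A toy version of the albedo tuning Screen describes might look like the following – the “model” here is an invented stand-in, and the observed target and candidate values are all hypothetical:

```python
def toy_ice_extent(albedo):
    """Stand-in for a model run: pretend that simulated September
    sea ice extent (million km2) responds linearly to the sea-ice
    albedo parameter. Entirely invented, for illustration only."""
    return 2.0 + 12.0 * (albedo - 0.5)

observed_extent = 6.5  # hypothetical observed value (million km2)
plausible_albedos = [0.80, 0.82, 0.84, 0.86, 0.88, 0.90]

# Tuning: pick the plausible parameter value whose simulated climate
# sits closest to the observations
best = min(plausible_albedos,
           key=lambda a: abs(toy_ice_extent(a) - observed_extent))
print(best)  # → 0.88
```

Real tuning is far more involved – each candidate value means a full model run, and several targets (seasonal cycle, mean state, radiation balance) are weighed at once – but the logic is the same: search within the plausible range for the value that best matches observations.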
Albedo:Albedo is a measure of how much of the sun’s energy is reflected by a surface. It is derived from the Latin word albus, meaning white. Albedo is measured as a percentage or fraction of the sun’s energy that is reflected away. Snow and ice tend to have a higher albedo than, for example, soil, forests and open water.
As most global models will contain parameterisation schemes, virtually all modelling centres undertake model tuning of some kind. A survey in 2014 (pdf) found that, in most cases, modellers tune their models to ensure that the long-term average state of the climate is accurate – including factors such as absolute temperatures, sea ice concentrations, surface albedo and sea ice extent.
The factor most often tuned for – in 70% of cases – is the radiation balance at the top of the atmosphere. This involves adjusting parameterisations, particularly of clouds – microphysics, convection and cloud fraction – but also of snow, sea ice albedo and vegetation.
This tuning does not involve simply “fitting” historical observations. Rather, if a reasonable choice of parameters leads to model results that differ dramatically from observed climatology, modellers may decide to use a different one. Similarly, if updates to a model lead to a wide divergence from observations, modellers may look for bugs or other factors that explain the difference.
As NASA Goddard Institute for Space Studies director Dr Gavin Schmidt tells Carbon Brief:
“Global mean trends are monitored for sanity, but not (generally) precisely tuned for. There is a lot of discussion on this point in the community, but everyone is clear this needs to be made more transparent.”
What is bias correction?
While climate models simulate the Earth’s climate well overall – including familiar climatic features, such as storms, monsoon rains, jet streams, trade winds and El Niño cycles – they are not perfect. This is particularly the case at the regional and local scales, where simulations can have substantial deviations from the observed climate, known as “biases”.
These biases occur because models are a simplification of the climate system and the large-scale grid cells that global models use can miss the detail of the local climate.
Maraun gives the example of a water engineer:
“Imagine you are a water engineer and have to protect a valley against flash floods from a nearby mountain creek. The protection is supposed to last for the next decades, so you have to account for future changes in rainfall over your river catchment. Climate models, even if they resolve the relevant weather systems, may be biased compared to the real world.”
For the water engineer, who feeds the climate model output into a flood risk model of the valley, such biases may be crucial, says Maraun:
“Assume a situation where you have freezing temperatures in reality, snow is falling and surface run-off from heavy rainfall is very low. But the model simulates positive temperatures, rainfall and a flash flood.”
In other words, taking the large-scale climate model output as is and running it through a flood model could give a misleading impression of flood risk in that specific valley.
To solve this issue – and produce climate projections that the water engineer can use in designing flood defences – scientists apply “bias correction” to climate model output.
“Bias correction – sometimes called ‘calibration’ – is the process of accounting for biases in the climate model simulations to provide projections which are more consistent with the available observations.”
Essentially, scientists compare long-term statistics in the model output with observed climate data. Using statistical techniques, they then correct any biases in the model output to make sure it is consistent with current knowledge of the climate system.
Bias correction is often based on average climate information, Maraun notes, though more sophisticated approaches adjust extremes too.
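In its simplest form, this amounts to a mean-shift correction. Here is a minimal sketch with invented numbers; real methods, such as quantile mapping, also adjust the variance and extremes of the distribution:

```python
def bias_correct(model_hist, observed, model_future):
    """Simplest 'mean shift' bias correction: subtract the model's
    average bias over a historical period from its future output.
    More sophisticated methods (e.g. quantile mapping) also correct
    the variance and the extremes of the distribution."""
    bias = sum(model_hist) / len(model_hist) - sum(observed) / len(observed)
    return [value - bias for value in model_future]

observed_temps = [14.1, 14.3, 14.0, 14.2]  # hypothetical observed summers (C)
model_hist = [15.1, 15.3, 15.0, 15.2]      # the model runs 1.0C too warm
model_future = [16.0, 16.4, 16.2]          # raw future projection

corrected = bias_correct(model_hist, observed_temps, model_future)
print([round(v, 1) for v in corrected])  # → [15.0, 15.4, 15.2]
```

Note that the correction preserves the model’s projected *change* – the warming between historical and future periods – while anchoring its absolute values to the observations.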
The bias correction step in the modelling process is particularly useful when scientists are considering aspects of the climate where thresholds are important, says Hawkins.
An example comes from a 2016 study, co-authored by Hawkins, on how shipping routes could open through Arctic sea ice because of climate change. He explains:
“The viability of Arctic shipping in future depends on the projected thickness of the sea ice, as different types of ship are unable to travel if the ice reaches a critical thickness at any point along the route. If the climate model simulates too much or too little ice for the present day in a particular location then the projections of ship route viability will also be incorrect.
“However, we are able to use observations of ice thickness to correct the spatial biases in the simulated sea ice thickness across the Arctic and produce projections which are more consistent than without a bias correction.”
In other words, by using bias correction to get the simulated sea ice in the model for the present day right, Hawkins and his colleagues can then have more confidence in their projections for the future.
Russian icebreaker at the North Pole. Credit: Christopher Michel via Flickr.
Typically, bias correction is applied only to model output, but in the past it has also been used within runs of models, explains Maraun:
“Until about a decade ago it was quite common to adjust the fluxes between different model components – for example, the ocean and atmosphere – in every model step towards the observed fields by so-called ‘flux corrections’”.
Recent advances in modelling mean flux corrections are largely no longer necessary. However, some researchers have suggested that flux corrections could still be used to help eliminate remaining biases in models, says Maraun:
“For instance, most GCMs simulate too cold a North Atlantic, a problem that has knock-on effects, for example, on the atmospheric circulation and rainfall patterns in Europe.”
So by nudging the model to keep its simulations of the North Atlantic Ocean on track (based on observed data), the idea is that this may produce, for example, more accurate simulations of rainfall for Europe.
However, there are potential pitfalls in using flux corrections, he adds:
“The downside of such approaches is that there is an artificial force in the model that pulls the model towards observations and such a force may even dampen the simulated climate change.”
In other words, if a model is not producing enough rainfall in Europe, it might be for reasons other than the North Atlantic, explains Maraun. For example, it might be because the modelled storm tracks are sending rainstorms to the wrong region.
This reinforces the point that scientists need to be careful not to apply bias correction without understanding the underlying reason for the bias, concludes Maraun:
“Climate researchers need to spend much more efforts to understand the origins of model biases, and researchers doing bias correction need to include this information into their research.”
In a recent perspectives article in Nature Climate Change, Maraun and his co-authors argue that “current bias correction methods might improve the applicability of climate simulations” but that they could not – and should not – be used to overcome more significant limitations with climate models.
How accurate are climate model projections of temperature?
One of the most important outputs of climate models is the projection of global surface temperatures.
In order to evaluate how well their models perform, scientists compare observations of the Earth’s climate with the models’ temperature “forecasts” for the future and “hindcasts” of the past. Scientists can then assess the accuracy of temperature projections by looking at how individual climate models, and the average of all models, compare to observed warming.
Historical temperature changes since the late 1800s are driven by a number of factors, including increasing atmospheric greenhouse gas concentrations, aerosols, changes in solar activity, volcanic eruptions, and changes in land use. Natural variability also plays a role over shorter timescales.
If models do a good job of capturing the climate’s response to these factors in the past, researchers can be more confident that the models will respond accurately to changes in the same factors in the future.
Carbon Brief has explored how climate models compare to observations in more detail in a recent analysis piece, looking at how surface temperature projections in climate models since the 1970s have matched up to reality.
Comparing models and observations can be a somewhat tricky exercise. The values most often used from climate models are for the temperature of the air just above the surface. However, observed temperature records combine the temperature of the air just above the surface over land with the temperature of the surface waters of the ocean.
Comparing global air temperatures from the models to a combination of air temperatures and sea surface temperatures in the observations can create problems. To account for this, researchers have created what they call “blended fields” from climate models, which include sea surface temperatures of the oceans and surface air temperatures over land, in order to match what is actually measured in the observations.
These blended fields from models show slightly less warming than global surface air temperatures, as the air over the ocean warms faster than sea surface temperatures in recent years.
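The idea of a blended field can be sketched as a per-grid-cell weighted combination, using the land fraction of each cell as the weight. This is a simplified illustration with invented values, not the actual procedure used for the CMIP5 models:

```python
import numpy as np

# Per-grid-cell values for one month (all numbers invented).
air_temp  = np.array([15.0, 14.2, 10.5, 9.8])   # surface air temperature (C)
sst       = np.array([14.1, 13.5, 10.0, 9.1])   # sea surface temperature (C)
land_frac = np.array([1.0, 0.6, 0.2, 0.0])      # fraction of each cell that is land

# Blend: air temperature over land, SST over ocean,
# weighted by the land fraction of each cell.
blended = land_frac * air_temp + (1 - land_frac) * sst
```

Because the SST values run slightly cooler than the air just above them, a global average built this way shows a little less warming than one built from air temperatures alone – which is exactly the effect described above.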
Carbon Brief’s figure below shows both the average of air temperature from all CMIP5 models (dashed black line) and the average of blended fields from all CMIP5 models (solid black line). The grey area shows the uncertainty in the model results, known as the 95% confidence interval. Individual coloured lines represent different observational temperature estimates from groups, such as the Met Office Hadley Centre, NOAA and NASA.
The blended fields from models generally match the warming seen in observations fairly well, while the air temperatures from the models show a bit more warming as they include the temperature of the air over the ocean rather than of the sea surface itself. Observations are all within the 95% confidence interval of model runs, suggesting that models do a good job of reflecting the short-term natural variability driven by El Niño and other factors.
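The multi-model statistics behind a figure like this can be sketched simply: average the ensemble, then take the 2.5th and 97.5th percentiles for the 95% range. The five-member toy ensemble below is illustrative; real comparisons draw on the much larger CMIP5 archive:

```python
import numpy as np

# Toy ensemble: global temperature anomalies (C) for one year,
# from five hypothetical model runs.
ensemble = np.array([0.82, 0.95, 0.74, 1.01, 0.88])

multi_model_mean = ensemble.mean()

# 95% confidence interval across the ensemble.
lower, upper = np.percentile(ensemble, [2.5, 97.5])
```

An observed value falling between `lower` and `upper` is consistent with the spread of the model runs, in the sense used above.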
The longer period of model projections from 1880 through 2100 is shown in the figure below. It shows both the longer-term warming since the late 19th century and projections of future warming under a scenario of modest emissions mitigation (called “RCP4.5”), with global temperatures reaching around 2.5C above pre-industrial levels by 2100 (and around 2C above the 1970-2000 baseline shown in the figure).
Same as prior figure, but from 1880 to 2100. Projections through 2100 use RCP4.5. Note that this and the prior graph use a 1970-2000 baseline period. Chart by Carbon Brief using Highcharts.
Projections of the climate from the mid-1800s onwards agree fairly well with observations. There are a few periods, such as the early 1900s, where the Earth was a bit cooler than models projected, or the 1940s, where observations were a bit warmer.
Overall, however, the strong correspondence between modelled and observed temperatures increases scientists’ confidence that models are accurately capturing both the factors driving climate change and the level of short-term natural variability in the Earth’s climate.
For the period since 1998, when observed warming has been a bit lower than model projections, a recent Nature paper explores the reasons for the divergence.
The researchers find that some of the difference is resolved by using blended fields from models. They suggest that the remainder of the divergence can be accounted for by a combination of short-term natural variability (mainly in the Pacific Ocean), small volcanoes and lower-than-expected solar output that was not included in models in their post-2005 projections.
Global average surface temperature is only one of many variables included in climate models, and models can be evaluated against many other climate metrics. There are specific “fingerprints” of human warming in the lower atmosphere, for example, that are seen in both models and observations.
What are the main limitations in climate modelling at the moment?
It is worth reiterating that climate models are not a perfect representation of the Earth’s climate – and nor can they be. As the climate is inherently chaotic, it is impossible to simulate with 100% accuracy, yet models do a pretty good job of getting the climate right.
The accuracy of projections made by models also depends on the quality of the inputs that go into them. For example, scientists do not know whether greenhouse gas emissions will fall, and so instead make estimates based on different scenarios of future socio-economic development. This adds another layer of uncertainty to climate projections.
Similarly, there are potential changes to the Earth system that are so rare, or unprecedented, in Earth’s history that they are extremely difficult to make projections for. One example is the possibility that ice sheets destabilise as they melt, accelerating expected global sea level rise.
Yet, despite models becoming increasingly complex and sophisticated, there are still aspects of the climate system that they struggle to capture as well as scientists would like.
One of the main limitations of the climate models is how well they represent clouds.
Clouds are a constant thorn in the side of climate scientists. They cover around two-thirds of the Earth at any one time, yet individual clouds can form and disappear within minutes; they can both warm and cool the planet, depending on the type of cloud and the time of day; and scientists have no records of what clouds were like in the distant past, making it harder to ascertain if and how they have changed.
A particular aspect of the difficulties in modelling clouds comes down to convection. This is the process whereby warm air at the Earth’s surface rises through the atmosphere, cools, and then the moisture it contains condenses to form clouds.
On hot days, the air warms quickly, which drives convection. This can bring intense, short-duration rainfall, often accompanied by thunder and lightning.
Convectional rainfall can occur on short timescales and in very localised areas. The resolution of global climate models is, therefore, too coarse to capture these rainfall events.
Instead, scientists use “parameterisations” (see above) that represent the average effects of convection over an individual grid cell. This means GCMs do not simulate individual storms and local high rainfall events, explains Dr Lizzie Kendon, senior climate extremes scientist at the Met Office Hadley Centre, to Carbon Brief:
“As a consequence, GCMs are unable to capture precipitation intensities on sub-daily timescales and summertime precipitation extremes. Thus, we would have low confidence in future projections of hourly rainfall or convective extremes from GCMs or coarse resolution RCMs.”
(Carbon Brief will be publishing an article later this week exploring climate model projections of precipitation.)
To help overcome this issue, scientists have been developing very high resolution climate models. These have grid cells that are a few kilometres wide, rather than tens of kilometres. These “convection-permitting” models can simulate larger convective storms without the need for parameterisation.
However, the tradeoff of having greater detail is that the models cannot yet cover the whole globe. Despite the smaller area – and using supercomputers – these models still take a very long time to run, particularly if scientists want to run lots of variations of the model, known as an “ensemble”.
For example, simulations that are part of the Future Climate For Africa IMPALA project (“Improving Model Processes for African Climate”) use convection-permitting models covering all of Africa, but only for one ensemble member, says Kendon. Similarly, the next set of UK Climate Projections, due next year (“UKCP18”), will be run for 10 ensemble members, but for just the UK.
But expanding these convection-permitting models to the global scale is still some way away, notes Kendon:
“It is likely to be many years before we can afford [the computing power for] convection-permitting global climate simulations, especially for multiple ensemble members.”
Related to the issue of clouds in global models is that of “double ITCZ”. The Intertropical Convergence Zone, or ITCZ, is a huge belt of low pressure that encircles the Earth near the equator. It governs the annual rainfall patterns of much of the tropics, making it a hugely important feature of the climate for billions of people.
Illustration of the Intertropical Convergence Zone (ITCZ) and the principle global circulation patterns in the Earth’s atmosphere. Source: Creative Commons
The ITCZ wanders north and south across the tropics each year, roughly tracking the position of the sun through the seasons. Global climate models do recreate the ITCZ in their simulations – which emerges as a result of the interaction between the individual physical processes coded in the model. However, as a Journal of Climate paper by scientists at Caltech in the US explains, there are some areas where climate models struggle to represent the position of the ITCZ correctly:
“[O]ver the eastern Pacific, the ITCZ is located north of the equator most of the year, meandering by a few degrees latitude around [the] six [degree line of latitude]. However, for a brief period in spring, it splits into two ITCZs straddling the equator. Current climate models exaggerate this split into two ITCZs, leading to the well-known double-ITCZ bias of the models.”
The main implication of this is that modellers have lower confidence in projections for how the ITCZ could change as the climate warms. But there are knock-on impacts as well, Dr Baoqiang Xiang of the Geophysical Fluid Dynamics Laboratory (GFDL) tells Carbon Brief:
“For example, most of current climate models predict a weakened trade wind along with the slowdown of the Walker circulation. The existence of [the] double ITCZ problem may lead to an underestimation of this weakened trade wind.”
(Trade winds are near-constant easterly winds that circle the Earth either side of the equator.)
In addition, a 2015 study in Geophysical Research Letters suggests that because the double ITCZ affects cloud and water vapour feedbacks in models, it therefore plays a role in the climate sensitivity.
Climate sensitivity: The amount of warming we can expect when carbon dioxide in the atmosphere reaches double what it was before the industrial revolution. There are two ways to express climate sensitivity: Transient Climate Response (TCR) is the warming at Earth’s surface we can expect at the point of doubling, while Equilibrium Climate Sensitivity (ECS) is the total amount of warming once the Earth has had time to adjust fully to the extra carbon dioxide.
They found that models with a strong double ITCZ have a lower value for equilibrium climate sensitivity (ECS), which indicates that “most models might have underestimated ECS”. If models underestimate ECS, the climate will warm more in response to human-caused emissions than their current projections would suggest.
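The link between forcing, feedbacks and equilibrium warming that underlies ECS can be illustrated with a simple zero-dimensional energy-balance calculation. The forcing value is a standard textbook estimate; the feedback parameter below is a rough, illustrative figure, not a definitive one:

```python
# Simple energy balance: at equilibrium, the radiative forcing from
# doubled CO2 is balanced by extra outgoing radiation, which grows
# with warming at a rate set by the net feedback parameter.
F_2x = 3.7      # radiative forcing from doubling CO2 (W/m2), standard estimate
feedback = 1.2  # net climate feedback parameter (W/m2 per K), illustrative

ecs = F_2x / feedback  # equilibrium climate sensitivity (K)
print(round(ecs, 1))   # prints 3.1
```

A stronger double ITCZ changing the cloud and water vapour feedbacks effectively changes the `feedback` value – which is why, as the study found, it shifts the ECS a model implies.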
The causes of the double ITCZ in models are complex, Xiang tells Carbon Brief, and have been the subject of numerous studies. There are likely to be a number of contributing factors, Xiang says, including the way convection is parameterised in models.
For example, a Proceedings of the National Academy of Sciences paper in 2012 suggested that the issue stems from most models not producing enough thick cloud over the “oft-overcast Southern Ocean”, leading to higher-than-usual temperatures over the Southern Hemisphere as a whole, and also a southward shift in tropical rainfall.
As for the question of when scientists might solve this issue, Xiang says it is a tough one to answer:
“From my point of view, I think we may not be able to completely resolve this issue in the coming decade. However, we have made significant progress with the improved understanding of model physics, increased model resolution, and more reliable observations.”
Finally, another common issue concerns the position of jet streams in climate models. Jet streams are meandering rivers of high-speed winds flowing high up in the atmosphere. They funnel weather systems west to east across the Earth.
As with the ITCZ, climate models recreate jet streams as a result of the fundamental physical equations contained in their code.
However, jet streams often appear to be too “zonal” in models – in other words, they are too strong and too straight, explains Dr Tim Woollings, a lecturer in physical climate science at the University of Oxford and former leader of the joint Met Office-Universities Process Evaluation Group for blocking and storm tracks. He tells Carbon Brief:
“In the real world, the jet veers north a little as it crosses the Atlantic (and a bit the Pacific). Because models underestimate this, the jet is often too far equatorward on average.”
As a result, models do not always get it right on the paths that low-pressure weather patterns take – known as “storm tracks”. Storms are often too sluggish in models, says Woollings: they do not get strong enough, and they peter out too quickly.
There are ways to improve this, says Woollings, but some are more straightforward than others. In general, increasing the resolution of the model can help, Woollings says:
“For example, as we increase resolution, the peaks of the mountains get a little higher and this contributes to deflecting the jets a little north. More complicated things also happen; if we can get better, more active storms in the model, that can have a knock-on effect on the jet stream, which is partly driven by the storms.”
(Mountain peaks get higher as model resolution increases because the greater detail allows the model to “see” more of the mountain as it narrows towards the top.)
Another option is improving how the model represents the physics of the atmosphere in its equations, adds Woollings, using “new, clever schemes [to approximate] the fluid mechanics in the computer code”.
The process of developing a climate model is a long-term task, which does not end once a model has been published. Most modelling centres will be updating and improving their models on a continuous cycle, with a development process where scientists spend a few years building the next version of their models.
Climate modeller at work in the Met Office, Exeter, UK. Credit: Met Office.
Once ready, the new model version incorporating all the improvements can be released, says Dr Chris Jones from the Met Office Hadley Centre:
“It’s a bit like motor companies build the next model of a particular vehicle so they’ve made the same one for years, but then all of a sudden a new one comes out that they’ve been developing. We do the same with our climate models.”
At the beginning of each cycle, the climate being reproduced by the model is compared to a range of observations to identify the biggest issues, explains Dr Tim Woollings. He tells Carbon Brief:
“Once these are identified, attention usually turns to assessing the physical processes known to affect those areas and attempts are made to improve the representation of these processes [in the model].”
How this is done varies from case to case, says Woollings, but will generally end up with some new improved code:
“This might be whole lines of code, to handle a process in a slightly different way, or it could sometimes just be changing an existing parameter to a better value. This may well be motivated by new research, or the experience of others [modelling centres].”
Sometimes during this process, scientists find that some issues compensate others, he adds:
“For example, Process A was found to be too strong, but this seemed to be compensated by Process B being too weak. In these cases, Process A will generally be fixed, even if it makes the model worse in the short term. Then attention turns to fixing Process B. At the end of the day, the model represents the physics of both processes better and we have a better model overall.”
At the Met Office Hadley Centre, the development process involves multiple teams, or “Process Evaluation Groups”, looking to improve a different element of the model, explains Woollings:
“The Process Evaluation Groups are essentially taskforces which look after certain aspects of the model. They monitor the biases in their area as the model develops, and test new methods to reduce these. These groups meet regularly to discuss their area, and often contain members from the academic community as well as Met Office scientists.”
The improvements that each group is working on are then brought together into the new model. Once complete, the model can start to be run in earnest, says Jones:
“At the end of a two- or three-year process, we have a new-generation model that we believe is better than the last one, and then we can start to use that to kind of go back to the scientific questions we’ve looked at before and see if we can answer them better.”
How do scientists produce climate model information for specific regions?
One of the main limitations of global climate models is that the grid cells they are made up of are typically around 100km in longitude and latitude in the mid-latitudes. When you consider that the UK, for example, is only a little over 400km wide, that means it is represented in a GCM by a handful of grid boxes.
This coarse resolution is a particular problem for small island states, Professor Michael Taylor, a climate scientist at the University of the West Indies, tells Carbon Brief:
“If you think about the eastern Caribbean islands, a single eastern Caribbean island falls within a grid box, so is represented as water within these global climate models.”
“Even the larger Caribbean islands are represented as one or, at most, two grid boxes – so you get information for just one or two grid boxes – this poses a limitation for the small islands of the Caribbean region and small islands in general. And so you don’t end up with refined, finer scale, sub-country scale information for the small islands.”
Scientists overcome this problem by “downscaling” global climate information to a local or regional scale. In essence, this means taking information provided by a GCM or coarse-scale observations and applying it to a specific place or region.
Tobago Cays and Mayreau Island, St. Vincent and The Grenadines. Credit: robertharding/Alamy Stock Photo.
For small island states, this process allows scientists to get useful data for specific islands, or even areas within islands, explains Taylor:
“The whole process of downscaling then is trying to take the information that you can get from the large scale and somehow relate it to the local scale, or the island scale, or even the sub-island scale.”
There are two main categories for methods of downscaling. The first is “dynamical downscaling”. This is essentially running models that are similar to GCMs, but for specific regions. Because these Regional Climate Models (RCMs) cover a smaller area, they can have higher resolution than GCMs and still run in a reasonable time. That said, notes Dr Dann Mitchell, a lecturer in the School of Geographical Sciences at the University of Bristol, RCMs may be slower than their global counterparts:
“An RCM with 25km grid cells covering Europe would take around 5-10 times longer to run than a GCM at ~150 km resolution.”
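Mitchell’s ballpark figure can be roughly reproduced with back-of-envelope arithmetic: computing cost scales with the number of grid cells, and finer grids also need proportionally shorter timesteps. The European domain size below is an assumption made for illustration:

```python
# Back-of-envelope cost comparison of a regional versus a global model.
# The domain size and the rule that finer grid spacing needs a
# proportionally shorter timestep are simplifying assumptions.
earth_area = 510e6        # km2, surface area of the Earth
rcm_domain = 4000 * 4000  # km2, an assumed ~4,000km x 4,000km European domain

gcm_cells = earth_area / 150**2  # global model with 150km grid cells
rcm_cells = rcm_domain / 25**2   # regional model with 25km grid cells

timestep_factor = 150 / 25       # finer grid -> proportionally shorter timestep

cost_ratio = (rcm_cells * timestep_factor) / gcm_cells
print(round(cost_ratio, 1))      # ~6.8, within Mitchell's 5-10x range
```

The exact ratio depends heavily on the chosen domain and on model details, but the scaling argument shows why a small high-resolution domain can cost more than a coarse global run.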
The UK Climate Projections 2009 (UKCP09), for example, is a set of climate projections specifically for the UK, produced from a regional climate model – the Met Office Hadley Centre’s HadRM3 model.
HadRM3 uses grid cells of 25km by 25km, dividing the UK into around 440 squares. This was an improvement on UKCP09’s predecessor (“UKCIP02”), which produced projections at a spatial resolution of 50km. The maps below show the greater detail that the 25km grid (six maps to the right) affords compared with the 50km grid (two maps on the far left).
RCMs such as HadRM3 can add a better – though still limited – representation of local factors, such as the influence of lakes, mountain ranges and a sea breeze.
Despite RCMs being limited to a specific area, they still need to factor in the wider climate that influences it. Scientists do this by feeding in information from GCMs or observations. Taylor explains how this applies to his research in the Caribbean:
“For dynamical downscaling, you first have to define the domain that you are going to run the model over – in our case, we define a kind of Caribbean/intra-Americas domain – so we limit the modelling to that domain. But, of course, you feed into the boundaries of that domain the output of the large-scale models, so it’s the larger scale model information that drives then the finer-scale model. And that’s the dynamical downscaling – you’re essentially doing the modelling at a finer scale, but over a limited domain, fed in with information at the boundaries.”
It is also possible to “nest”, or embed, RCMs within a GCM, which means scientists can run more than one model at the same time and get multiple levels of output simultaneously.
The second main category of downscaling is “statistical downscaling”. This involves using observed data to establish a statistical relationship between the global and local climate. Using this relationship, scientists then derive local changes based on the large scale projections coming from GCMs or observations.
One example of statistical downscaling is a weather generator. A weather generator produces synthetic timeseries of daily and/or hourly data for a particular location. It uses a combination of observed local weather data and projections of future climate to give an indication of what future weather conditions could be like on short timescales. (Weather generators can also produce timeseries of the weather in the current climate.)
It can be used for planning purposes – for example, in a flood risk assessment to simulate whether existing flood defences will cope with likely future levels of heavy rainfall.
In general, these statistical models can be run quickly, allowing scientists to carry out many simulations in the time it takes to complete a single GCM run.
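At its simplest, the statistical relationship can be a linear regression linking a large-scale variable to a local one, which is then applied to GCM output. The numbers below are invented for illustration; real applications use long observational records and usually several predictors:

```python
import numpy as np

# Observed pairs: regional-average temperature from a coarse grid cell (C)
# versus temperature at a local weather station (C). All values invented.
regional = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
local    = np.array([11.5, 13.2, 15.4, 17.1, 19.3])

# Fit local = slope * regional + intercept from the observed relationship.
slope, intercept = np.polyfit(regional, local, 1)

# Apply the fitted relationship to a GCM projection for the coarse cell.
gcm_projection = 20.0
local_estimate = slope * gcm_projection + intercept
```

As Mitchell cautions below, this approach assumes the relationship fitted from today’s observations still holds in a warmer climate – the key weakness of statistical downscaling.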
It is worth noting that downscaled information still depends heavily on the quality of the information it is based on, such as the observed data or the GCM output feeding in. Downscaling only provides more location-specific data; it does not make up for any uncertainties that stem from the data it relies on.
Statistical downscaling, in particular, is reliant on the observed data used to derive the statistical relationship. Downscaling also assumes that relationships in the current climate will still hold true in a warmer world, notes Mitchell. He tells Carbon Brief:
“[Statistical downscaling] can be fine for well-observed periods of time, or well-observed locations of interest, but, in general, if you push the local system too far, the statistical relationship will break down. For that reason, statistical downscaling is poorly constrained for future climate projections.”
Dynamical downscaling is more robust, says Mitchell, though only if an RCM captures the relevant processes well and the data driving them is reliable:
“Often for climate modelling, the implementation of the weather and climate processes in the dynamical model is not too dissimilar from the coarser global driving model, so the dynamical downscaling only provides limited improvability of the data. However, if done well, dynamical downscaling can be useful for localised understanding of weather and climate, but it requires a tremendous amount of model validation and in some cases model development to represent processes that can be captured at the new finer scales.”