Long-term climate variability is the range of temperatures and weather patterns experienced by the Earth over a scale of thousands of years. New research suggests it could fall as the world warms.
A study using data taken from fossils and ice cores finds that long-term temperature variability decreased four-fold from the Last Glacial Maximum (LGM) around 21,000 years ago to the start of the Holocene around 11,500 years ago. Within this period, natural processes caused the planet to warm by around 3-8C.
If future global emissions are not curbed, human-driven global warming could cause further large declines in long-term temperature variability, the lead author tells Carbon Brief, which may have far-reaching effects on the world’s seasons and weather.
However, it is still unclear how a decline in long-term variability could affect the frequency of extreme weather events, she adds. This is because the chances of an extreme event happening could be influenced by both short- and long-term climate variability, as well as global temperature rise.
Digging up the past
The new study, published in Nature, is the first to make a global assessment of how long-term temperature variability changed from the LGM to the Holocene.
During the LGM, the world’s last major ice age, snow covered much of Asia, Europe and North America. Yet, within a few thousand years, global temperatures rose by around 3-8C, causing the ice to thaw and the world to enter its current geological period, the Holocene.
The cause of this temperature rise is still disputed by scientists, but research suggests the natural release of large stores of CO2 from the world’s oceans may have played a role.
To work out how long-term climate variability changed over the period, the researchers analysed data taken from ancient ice cores, marine sediments and animal and plant fossils stretching back thousands of years.
Scientists are able to analyse some of these samples – which are known as proxy records – by looking at the ratios between different chemical isotopes.
Combining data derived from different parts of the world and time periods allows scientists to create a picture of past temperature change, explains Dr Kira Rehfeld, a research fellow at the British Antarctic Survey and the Alfred-Wegener Institute for Polar and Marine Research (AWI) in Potsdam, Germany. She tells Carbon Brief:
“We set out and started collecting more and more records that we could use to get a more general picture of changing climate variability for temperature. It’s taken us three and a half years to find enough records and to develop the methodology to be able to analyse them.”
The researchers then compared data taken from the LGM and the Holocene to help them work out how global temperatures could have changed over large time scales. Rehfeld says:
“We don’t look at the variability in terms of just temperature rise, we look at the ratio of the variability. So we divide the variability of the LGM by the variability of the Holocene. That way we can compare records that have very different origins.”
The research finds that, from the LGM to the Holocene, long-term temperature variability fell by a factor of four.
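The ratio method Rehfeld describes can be sketched with a toy calculation. The synthetic "records" and the plain variance estimator below are illustrative assumptions only; the study's actual analysis works with irregularly sampled proxy data and a purpose-built estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for proxy temperature records (illustrative
# only): the glacial segment is given twice the standard deviation
# of the interglacial one.
lgm_record = rng.normal(0.0, 2.0, size=500)
holocene_record = rng.normal(0.0, 1.0, size=500)

# Comparing variability as a ratio of variances lets records with
# very different origins and calibrations be compared on equal terms.
ratio = np.var(lgm_record) / np.var(holocene_record)
print(f"LGM/Holocene variability ratio: {ratio:.1f}")
```

Because only the ratio is used, any constant calibration factor in a given record cancels out, which is what allows ice cores, sediments and fossils to be pooled in a single analysis.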
However, some parts of the world experienced larger changes in temperature than others, the study notes.
This is shown on the chart below, where dark blue shows areas that experienced a large amount of temperature change from the LGM to the Holocene, whereas light blue shows areas that experienced less change.
On the chart, symbols are used to show the location of ice cores (circle), marine sediments (diamond), lacustrine – or lake – sediment (triangle) and tree fossil data (square). Colours are used to show samples from the Holocene and LGM (red), the Holocene (orange) and the LGM (purple).
Global temperature change from the Last Glacial Maximum to the Holocene. Dark blue indicates high temperature change while light blue shows low temperature change. Symbols show the location of ice cores (circle), marine sediments (diamond), lacustrine sediment (triangle) and tree fossil data (square). Colours show samples from the Holocene and LGM (red), the Holocene (orange) and the LGM (purple). Source: Rehfeld et al. (2018)
The findings show that the world’s poles experienced a larger change in temperature than the equator over the time period. These changes led to an overall decline in long-term temperature variability, the research finds.
The difference in warming between the poles and the equator could be down to a process known as “polar amplification”, Rehfeld says.
Polar amplification is the phenomenon whereby any change in the Earth’s energy balance – from sunlight or greenhouse gases, for example – tends to have a larger effect on temperatures at the poles than at the equator.
This is thought to be because as warming causes sea ice near the poles to melt, energy from the sun that would have been reflected away by the ice is instead absorbed by the ocean. Because of this, surface temperatures near the poles start to rise at an accelerated rate.
The findings reinforce the prediction that future climate change driven by humans will cause a larger increase in temperature at the poles than at the equator, Rehfeld says:
“The temperature difference between the poles and the equator has decreased as the Earth warms due to polar amplification. This relates to a change in overall long timescale temperature variability.
“If you take that and extrapolate that into the future, warming could be larger at the poles. The temperature difference is then further reduced, which would translate into a reduction of overall temperature variability.”
Carbon Brief previously reported on how the effect of climate change on polar amplification could cause the amount of wind available for power generation to fall in the northern hemisphere.
Although long-term variability is expected to fall, this does not mean that short-term variability will also be reduced, Rehfeld says:
“The question we’re asking is what would a warmer world than today look like? If we can translate our changes in the temperature gradient, then that would mean, theoretically, that long timescale variability in the future will be reduced. But that doesn’t mean that short timescale variability will be reduced.”
Short-term climate variability is a term typically used to describe the natural range of temperatures and weather patterns experienced by the Earth within shorter periods.
For example, after an extreme weather event, scientists often carry out single-event attribution studies to determine how the likelihood of such an event could have been influenced by climate change and short-term climate variability.
It is still not clear how a reduction in long-term variability will affect the frequency and severity of extreme weather events, Rehfeld says:
“There seems to be a correlation. This change in long timescale climate variability could have influences on extreme events and seasonal variability.
“Based on what we know about how extreme events work, if we have a broader distribution of temperatures then we should have more extreme events. However, what we perceive as extreme events, like floods or heatwaves, is not reflected in our datasets.”
In other words, scientific theory suggests that declines in long-term climate variability could lead to fewer extreme events. However, the timescale used in the study was too broad to reflect short-term events, such as floods and heatwaves.
The findings are “interesting”, but could hold “limited relevance” to understanding future climate change, which is occurring at a much faster rate than the warming observed from the LGM to the Holocene, says Prof Amanda Maycock, a research fellow from the University of Leeds who was not involved in the new study. She tells Carbon Brief:
“Current surface temperature changes and associated changes in climate variability and extremes are occurring much more rapidly than the multi-centennial timescales considered in the study.”
The datasets collated in the study could be used to help climate models better simulate long-term changes in climate variability, says Dr Lauren Gregorie, an academic research fellow at the University of Leeds, who was also not involved in the study. She tells Carbon Brief:
“What I find particularly interesting is that while models do simulate a reduction in variability, they tend to underestimate that change compared to the records [used in the study]. There is a great opportunity to use our knowledge of past climate change to test and improve climate models. Unfortunately, there’s currently very little funding to do this kind of work.”
The climate data for 2017 is now in. In this article, Carbon Brief explains why last year proved to be so remarkable across the oceans, atmosphere, cryosphere and surface temperature of the planet.
A number of records for the Earth’s climate were set in 2017:
It was the warmest year on record for ocean heat content, which increased markedly between 2016 and 2017.
It was the second or third warmest year on record for surface temperature – depending on the dataset used – and the warmest year without the influence of an El Niño event.
It saw record lows in sea ice extent and volume in the Arctic both at the beginning and end of the year, though the minimum extent reached in September was only the eighth lowest on record.
It also saw record-low Antarctic sea ice for much of the year, though scientists are still working to determine the role of human activity in the region’s sea ice changes.
Warmest year on record in the oceans
More than 90% of the heat trapped by increasing greenhouse gas concentrations ends up going into the Earth’s oceans. While surface temperatures fluctuate a bit from year to year due to natural variability, ocean heat content increases much more smoothly and is, in many ways, a more reliable indicator of the warming of the Earth, albeit one with a shorter historical record.
The figures below show ocean heat content for each year in the region of the ocean between the surface and 2,000 meters in depth (comprising the bulk of the world’s oceans), as well as a map of 2017 anomalies.
The upper figure shows changes in ocean heat content since 1958, while the lower map shows ocean heat content in 2017 relative to the average between 1981 and 2010, with red areas indicating higher heat content than over the past few decades and blue areas indicating lower.
Change in global ocean heat content between the surface and 2,000 meters of depth from 1958 to 2017 (top) and distribution of ocean heat content anomalies in 2017 (bottom). Figure from Cheng and Zhu (2018), using data from IAP-CAS.
Ocean heat content in 2017 was significantly higher than in 2015, the next warmest year. While 2016 was the warmest year at the surface, it was only the third warmest year for ocean heat content, as the El Niño event that boosted 2016 surface temperatures redistributed heat out of the ocean and into the atmosphere.
Warmest surface temperatures without an El Niño
Global surface temperatures in 2017 were the second or third warmest on record since 1850, when global temperatures can first be calculated with reasonable accuracy. Unlike in the other warmest years – 2015 and 2016 – there was no El Niño event in 2017 (or in late 2016) contributing to increased temperatures; mild El Niño conditions in early 2017 were offset by mild La Niña conditions later in the year.
The figure below shows global surface temperature records from the principal research groups around the world since 1970. These are created by combining ship- and buoy-based measurements of sea surface temperatures with surface air temperature readings from weather stations on land. Temperatures are shown as anomalies relative to a 1970 to 2000 average. [Click the figure legend to show or hide different temperature records.]
Short-term variability in the record is mostly due to the influence of El Niño and La Niña events, which have a temporary warming or cooling impact on the climate. Other dips, such as the one in the early 1990s, are associated with large volcanic eruptions. The longer-term warming of the climate is driven almost entirely by atmospheric increases in CO2 and other greenhouse gases emitted by human activity.
The record warm temperatures experienced over the past three years are not due to any adjustments made to the underlying temperature records. The figure above includes a “raw records” line calculated by Carbon Brief using data not subject to any adjustments or corrections for changes in measurement techniques. Since 1970, the raw data and the adjusted temperature records produced by different groups largely agree.
Global surface temperature records can be calculated back to 1850, though some groups choose to start their records in 1880 when more data was available. Prior to 1850, records exist for some specific regions, but are not sufficiently widespread to calculate global temperatures with any reasonable accuracy. Global temperature records since 1850 are shown in the figure below, again shown as the difference from a baseline of 1970-2000.
Same as prior figure, but with data extending back to 1850 (or as far back as each individual record is available). Chart by Carbon Brief using Highcharts.
Global surface temperatures in 2017 were 1-1.2C warmer than temperatures in the late 19th century (between 1880 and 1900), depending on the temperature record chosen.
It is striking how warm 2017 was, despite the end of the massive El Niño event that pushed up 2015 and 2016 temperatures. The past three years are well above any prior years’ temperatures, by a margin of more than 0.15C.
This is shown in the figure below from Berkeley Earth. Each shaded curve represents the annual average temperature for that year, and the further that curve is to the right, the warmer it was.
The width of each year’s curve reflects the uncertainty in the annual temperature values (caused by factors such as changes in measurement techniques and the fact that some parts of the world have more sparse station coverage).
Global average surface temperatures for each year with their respective uncertainties (width of the curves) from Berkeley Earth. Note that warming is shown here relative to the temperature of the 1951-1980 period, but the relative position of the years would be the same using a 1970-2000 baseline. Figure produced by Dr Robert Rohde.
While El Niño and La Niña events have a sizable short-term impact on global temperatures, their influence tends not to extend for more than six months or so after the event has ended. With the large El Niño event of 2015 and 2016 fading by the summer of 2016, it had little direct influence on 2017 temperatures.
In the figure below, Dr Gavin Schmidt, director of the NASA Goddard Institute for Space Studies, uses a simple statistical model to estimate what the global temperature record (black line) would be like in the absence of El Niño or La Niña influences (red line).
Although El Niño bumped up the temperatures of 2015 modestly and 2016 quite a bit, it had almost zero effect on 2017 temperatures. When the influence of El Niño is removed from the record, according to Schmidt’s analysis, 2017 would be the warmest year on record.
Global average surface temperatures from NASA’s GISTemp (black) and with the influence of El Niño and La Niña (collectively referred to as ENSO) removed (red). Figure produced by Dr Gavin Schmidt.
However, Dr Tim Osborn, director of the Climatic Research Unit at the University of East Anglia, cautions that these results are somewhat sensitive to the statistical method and El Niño index used.
He suggests that, while 2017 is probably the warmest when ENSO is taken out, it is not necessarily as clear a winner over 2016 and 2015 if different methods are used. It is clear, though, that 2017 is the warmest non-El Niño year by any measure.
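A toy version of such an ENSO-removal regression might look like the following. The synthetic data, the one-year lag and the linear-trend model are all illustrative assumptions, not Schmidt's actual method, which fits real temperature data against an observed ENSO index.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic illustration of removing the ENSO signal from a global
# temperature series. Real inputs would be an observed ENSO index
# and a surface temperature record.
years = np.arange(1950, 2018)
enso = rng.normal(0.0, 1.0, size=years.size)   # stand-in ENSO index
lagged = np.concatenate([[0.0], enso[:-1]])    # ENSO leads temperature by ~1 year

trend = 0.015 * (years - years[0])             # long-term warming signal
temp = trend + 0.1 * lagged + rng.normal(0.0, 0.02, years.size)

# Regress temperature on the lagged index (plus a linear trend and
# an intercept), then subtract the fitted ENSO contribution to get
# an "ENSO-removed" series.
design = np.column_stack([lagged, years - years.mean(), np.ones(years.size)])
coef, *_ = np.linalg.lstsq(design, temp, rcond=None)
temp_no_enso = temp - coef[0] * lagged
```

As Osborn's caution suggests, the result depends on choices like the lag and the index used; varying those in a sketch like this shows how the "warmest year with ENSO removed" ranking can shift.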
A paper recently published in Geophysical Research Letters by researchers at the University of Arizona suggests that global temperatures may not return to pre-2015 levels any time soon. They suggest that extra heat had been absorbed by the tropical Pacific Ocean since the late 1990s and that the recent El Niño event acted as a trigger for that heat to be released. The cycle of extra heat uptake by the oceans may be over for at least a decade.
Near-record warmth in satellite records
In addition to surface measurements over the world’s land and oceans, satellite microwave sounding units have been providing estimates of global lower atmospheric temperatures since 1979. These measurements, while subject to some large uncertainties, also show 2017 as a near-record warm year.
The record produced by Remote Sensing Systems (RSS) shows 2017 as the second warmest year after 2016, while the record from the University of Alabama, Huntsville (UAH) shows it as the third warmest after 2016 and 1998. The two records are shown in the figure below – RSS in red and UAH in blue.
Global average lower troposphere temperatures from RSS version 4 (red) and UAH version 6 (blue) relative to a 1979-2000 baseline (as the satellite records begin in 1979). Chart by Carbon Brief using Highcharts.
These satellites measure the temperature of the lower troposphere and capture average temperature changes around 5km above the surface. This region tends to be influenced more strongly by El Niño and La Niña events than the surface and satellite records show correspondingly larger warming or cooling spikes during these events.
This is why, for example, 1998 shows up as one of the warmest years in the satellite records, but not in surface records.
Observations tracking close to climate modelling projections
Climate models provide projections of both long-term and shorter-term changes to the Earth’s climate. While climate models show their own El Niño- and La Niña-like behaviour, it does not necessarily occur at the same time in models as it does in the real world.
However, temperatures in recent years – both during the El Niño event and, more importantly, now that the El Niño event is over – are tracking rather close to the average projection of the climate models included in the latest report from the Intergovernmental Panel on Climate Change (the CMIP5 models).
These models used historical records of greenhouse gases and other factors through to 2005. Model estimates of temperatures prior to 2005 are a “hindcast” using known past climate influences, while temperatures projected after 2005 are a “forecast” based on an estimate of how things might change.
The figure below shows the range of individual model forecasts between 1970 and 2020 in grey shading, with the average projection across all the models shown in black. Individual observational temperature records are represented by coloured lines.
Annual global average surface temperatures from CMIP5 models and observations between 1970 and 2020. Models use RCP4.5 forcings after 2005. They include sea surface temperatures over oceans and surface air temperatures over land to match what is measured by observations. Anomalies plotted with respect to a 1970-2000 baseline. Chart by Carbon Brief using Highcharts.
While global temperatures were running a bit below climate models between 2005 and 2014, the last few years have been pretty close to the model average.
Low sea ice at both poles
In addition to near-record temperatures, 2017 also saw record-low sea ice during parts of the year, both in the Arctic and Antarctic.
The figure below shows the average Arctic sea ice extent for each week of the year for every year between 1978 and 2017. Prior to 1978, satellite measurements of sea ice extent are not available and the data is much less reliable.
The figure shows a clear and steady decline in Arctic sea ice since the late 1970s, with darker colours (earlier years) at the top and lighter colours (more recent years) much lower. A typical summer now has nearly half as much sea ice in the Arctic as it had in the 1970s and 1980s.
Sea ice extent only provides part of the picture, as some sea ice is much thicker or older than the rest. The Pan-Arctic Ice Ocean Modeling and Assimilation System (PIOMAS) project provides estimates of sea ice volume since 1979, shown in the figure below.
Arctic sea ice volume anomalies from 1979 through 2017 from PIOMAS.
According to PIOMAS, Arctic sea ice volume in 2017 was around 12,000 cubic kilometers lower than in 1979. The project found that 2017 tied 2012 for the lowest measured Arctic sea ice volume on record, though 2012 remains the year with the lowest summer minimum volume.
While the long-term decline in Arctic sea ice is clear, the Antarctic is much more complicated. Weekly Antarctic sea ice extent from 1978 through to 2017 is shown in the figure below.
Unlike in the Arctic, the Antarctic has no clear long-term trend in sea ice extent. In the figure, early years (darker lines) and recent years (lighter lines) are intermixed. In fact, 2015 and early 2016 set records for the most sea ice extent observed.
In 2017, however, Antarctic sea ice hit record lows for much of the year. Even in recent months it has been the second lowest recorded after late 2016. It is unclear what role, if any, climate change is playing in Antarctic sea ice changes, though it is an area of very active research.
Finally, Antarctic and Arctic sea ice extents are combined to estimate global sea ice extent in the figure below.
Global sea ice set a clear record low in the first half of 2017, driven in large part by record low Antarctic sea ice cover. There has been a long-term downward trend in summer global sea ice extent, though the trend is less clear in the winter, reflecting the fact that the Arctic shows a clearer long-term trend than the Antarctic.
Carbon Brief produced a raw global temperature record using unadjusted ICOADS sea surface temperature measurements gridded by the UK Hadley Centre and raw land temperature measurements assembled by NOAA in version 4 of the Global Historical Climatology Network (GHCN).
Raw land temperatures were calculated by assigning each station to a 5×5 latitude/longitude grid box, converting station temperatures into anomalies relative to a 1971-2000 baseline period, averaging all the anomalies within each grid box for each month, and averaging all grid boxes for each month weighted by the land area within each grid box.
Raw combined land/ocean temperatures were estimated by averaging raw land and ocean temperatures weighted by the percent of the globe covered by each.
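The land-averaging steps described above can be sketched as follows. The station values are invented, and cosine-latitude weighting is used as a stand-in for the land-area weighting the article describes, since grid-box area shrinks towards the poles.

```python
import math

# Toy station data: (latitude, longitude, temperature anomaly in C).
# Values are made up; real inputs would be GHCN station anomalies
# already referenced to the 1971-2000 baseline.
stations = [(51.5, -0.1, 0.42), (52.2, 0.1, 0.38), (-33.9, 151.2, 0.55)]

# Step 1: assign each station to a 5x5 degree grid box and average
# all anomalies that fall in the same box.
boxes = {}
for lat, lon, anom in stations:
    key = (int(lat // 5), int(lon // 5))
    boxes.setdefault(key, []).append(anom)

# Step 2: average the boxes, weighting each by the cosine of its
# central latitude (an area proxy, standing in for the land-area
# weights used in the actual calculation).
total = weight_sum = 0.0
for (lat_idx, _), anoms in boxes.items():
    lat_centre = lat_idx * 5 + 2.5
    w = math.cos(math.radians(lat_centre))
    total += w * sum(anoms) / len(anoms)
    weight_sum += w

global_mean = total / weight_sum
print(f"Raw land anomaly: {global_mean:.2f}C")
```

Averaging within boxes first stops densely sampled regions (such as Europe or the US) from dominating the global mean, which is the point of gridding the raw station data.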
Scientists have presented a new, narrower estimate of the “climate sensitivity” – a measure of how much the climate could warm in response to the release of greenhouse gases.
The latest assessment report from the Intergovernmental Panel on Climate Change (IPCC) estimates that climate sensitivity is close to 3C, with a “likely” range of 1.5 to 4.5C.
The new study, published in Nature, refines this estimate to 2.8C, with a corresponding range of 2.2 to 3.4C. If correct, the new estimates could reduce the uncertainty surrounding climate sensitivity by 60%.
The narrower range suggests that global temperature rise is “going to shoot over 1.5C” above pre-industrial levels, the lead author tells Carbon Brief, but “we might be able to avoid 2C”. Meeting either limit will likely require negative emissions technologies that can remove CO2 from the atmosphere, he says.
The new estimate is another “brick in the wall” of scientists’ understanding of climate sensitivity, another scientist tells Carbon Brief, and “the best-informed views will be reached by multiple lines of evidence”.
Climate sensitivity is the amount of warming that can be expected in response to the concentration of CO2 in the atmosphere reaching double the level observed in pre-industrial times.
The research makes a new estimate of the “equilibrium” climate sensitivity (ECS) – that is, the amount of warming expected to occur once the full impact of the extra greenhouse gases has played out. This measure includes the impact of warming on long-term climate feedback loops, which can take decades, or even centuries, to materialise.
The value of ECS is one of the big climate change questions that scientists are still trying to address.
It is important because understanding how sensitive the Earth is to CO2 could help us to estimate how much the planet could warm in response to greenhouse gases, explains Prof Peter Cox, lead author of the new paper and a climate scientist at the University of Exeter. He tells Carbon Brief:
“The issue about the equilibrium climate sensitivity is the range that has been given in successive IPCC reports – 1.5 to 4C – is a range that is essentially ‘climate change we could probably adapt to’ at the 1.5C end and ‘climate change we probably can’t adapt to’ at the 4C end. So that uncertainty has a huge impact on impeding the focused effort to mitigate climate change and adapt.”
The new findings indicate that the value of ECS could be close to 2.8C, says Cox:
“We get a value with a ‘likely’ range, which means there’s a 66% probability that it’s in that range of 2.2 to 3.4C with a central estimate of 2.8C. That’s not so far from the central estimate of the IPCC which is 3C, but the range is much reduced, from 1.5 to 4C, to 2.2 to 3.4C. What that means is we can rule out very low climate sensitivities and we can rule out very high climate sensitivities.”
Capturing a signal
There are a number of techniques that scientists can use to work out what ECS could be.
One method is to look at how Earth has responded to natural greenhouse gas changes in its geological past to try to work out how it might respond to future global warming.
Another method used by scientists involves matching global surface temperatures with the global warming trend over the past century to try to work out sensitivity from how the planet is responding. (This is what is known as the “energy budget model” approach.)
The new study uses a similar method to the energy budget model approach. However, instead of matching the global temperature record to global warming, the new research attempts to match temperature records to natural, long-term fluctuations in temperature.
Looking at natural variability rather than the warming trend allowed the scientists to exclude a range of uncertainties associated with human-caused climate change, Cox explains:
“Normally the way this [research] is done is by looking at the historical record warming, which makes sense. We’ve seen 1C of warming, roughly speaking, and so you may think that must tell you how sensitive the climate is. But it doesn’t. The main reason it doesn’t is that we don’t know how much energy or heat we’ve put in the system in terms of radiative forcing – greenhouse gases.”
To understand how historical temperature fluctuations have changed over the past century, the researchers first removed the global warming trend from a set of observational temperature data.
They then compared this data to results from a series of 22 global climate models. Some models had lower climate sensitivity, while others had higher climate sensitivity.
The results are shown on the chart below. On the chart, black dots show natural fluctuations in temperature from 1940 to 2020. Each line represents the results from one model, with magenta lines showing results from higher sensitivity models and green showing the results from models with lower climate sensitivity.
Natural temperature variability (black dots) compared to simulations of variability from climate models with higher climate sensitivity (magenta) and lower climate sensitivity (green). Each line represents the results from one model. Source: Cox et al. (2018)
The chart indicates that higher sensitivity models generally predict more warming than has been observed over the past 50 years, while lower sensitivity models either closely match the observed trend or estimate a lower amount of warming.
Together, these results allowed the researchers to produce their narrower range.
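The detrending step at the heart of this comparison can be sketched like so. The synthetic series and the quadratic fit below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic annual global-mean temperature anomalies: a slow warming
# trend plus year-to-year noise (a stand-in for the observations).
years = np.arange(1880, 2017)
series = 0.008 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# Remove the long-term warming signal with a low-order polynomial
# fit, leaving the natural fluctuations that can then be compared
# against each model's simulated variability.
fit = np.polyval(np.polyfit(years, series, 2), years)
fluctuations = series - fit

print(f"Residual year-to-year variability: {np.std(fluctuations):.2f}C (1 s.d.)")
```

Working with the residual fluctuations, rather than the warming trend itself, is what lets the approach sidestep the uncertainty in historical radiative forcing that Cox describes.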
Understanding ECS could help scientists to work out how much the climate is likely to warm in the future, Cox says, which, in turn, could allow policymakers to estimate how easy it will be to meet the goals of the Paris Agreement.
Climate sensitivity is the amount of warming that will occur after CO2 concentrations become twice as high as they were in pre-industrial times. Pre-industrial CO2 concentration levels were about 280 parts per million (ppm) and levels are currently at around 404ppm.
This means that, if humans stopped releasing CO2 today, the world should expect to experience more than half of the warming dictated by the ECS. Cox explains:
“That means that if you’ve got an ECS of 4C, then you’ve pretty much already missed the 2C target of Paris. So the ECS value has a big impact on the feasibility of Paris.”
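The arithmetic behind "more than half" can be checked directly, because CO2's radiative forcing grows with the logarithm of its concentration. The calculation below is a simplification that ignores non-CO2 forcings and the details of ocean heat uptake.

```python
import math

# Concentrations quoted in the article (parts per million).
preindustrial = 280.0
current = 404.0

# CO2 forcing scales with the log of concentration, so the share of
# a full doubling already realised is a ratio of logarithms.
fraction_of_doubling = math.log(current / preindustrial) / math.log(2.0)

# Equilibrium warming implied by that share for the study's central
# ECS estimate of 2.8C (illustrative; neglects non-CO2 gases and
# aerosols, which partly offset each other).
ecs = 2.8
committed = fraction_of_doubling * ecs
print(f"{fraction_of_doubling:.2f} of a doubling -> ~{committed:.1f}C at equilibrium")
```

The fraction works out at just over 0.5, which is why today's concentrations already commit the world to more than half of the warming dictated by the ECS.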
If the results are correct and the climate sensitivity is 2.8C, then it is likely that the world will fail to limit warming to 1.5C above pre-industrial levels, which is the aspirational goal of the Paris Agreement, Cox adds:
“Our numbers suggest that we’re going to shoot over 1.5C. We might be able to avoid 2C, it will take a huge effort to do so. I think, to achieve 1.5C, you definitely have to think of negative emissions technologies and, if you want 2C, you need to think about it, too, even if it’s only a short-term stop gap.”
Negative emissions technologies are a group of techniques – many of which still remain hypothetical – that aim to remove CO2 from the air in an attempt to tackle climate change.
The study’s results “reduce the probability of very high climate sensitivity”, which should “reassure” those taking steps to meet the goals of the Paris Agreement, says Prof Gabi Hegerl FRS, a climate system scientist from the University of Edinburgh, who was not involved in the research. She tells Carbon Brief:
“It also emphasises that climate change won’t be small, so reducing climate change will continue to require very sharp reductions of emissions leading towards ceasing emissions.”
Reducing uncertainty surrounding climate sensitivity should help policymakers to refocus their efforts on tackling climate change, says Cox:
“If you can reduce the uncertainty, which I think we can, then you can focus your mind on what needs to be done. We can rule out very low values, where you might say, ‘don’t worry about it, we’ll adapt’ and you can rule out very high values that might lead you to a sort of hopelessness where you think, ‘it’s too late’. We are still in that zone where action is urgent, but not too late. But it is very urgent.”
‘Brick in the wall’
The new paper adds to the extensive research around the potential value for ECS.
Despite debate among scientists about the best way to estimate climate sensitivity, each new research paper can be seen as a “brick in the wall” of our understanding, says Prof Andrew Dessler, an atmospheric scientist from Texas A&M University, who was not involved in the research. He tells Carbon Brief:
“I don’t think any single paper will by itself redefine what we think about ECS. Rather, the best-informed views will be reached by multiple lines of evidence, with care taken in relating the inferred ECS from different methods.”
For example, the paper does not discuss how natural events, such as El Niño, could impact temperature fluctuations, he tells Carbon Brief:
“The approach mixes up natural variability due to El Niño, decadal variations, volcanic eruptions and air pollutants, and we know that models have different biases with respect to each of these. There are also theoretical problems with applying their statistical approach in this way, even though it seems to work. So it is not clear whether to put more weight on this study, or the previous ones suggesting even higher sensitivity.”
El Niño: Every five years or so, a change in the winds causes a shift to warmer than normal sea surface temperatures in the equatorial Pacific Ocean – known as El Niño. Together with its cooler counterpart, La Niña, this is known as the El Niño Southern Oscillation (ENSO) and is responsible for most of the fluctuations in temperature and rainfall patterns we see from one year to the next.
In addition, the research may have made “significant” errors in its attempts to reduce uncertainty surrounding climate sensitivity, says Dr Patrick Brown, a climate scientist from the Carnegie Institution for Science in Stanford, California.
Last month, Brown was the lead author of a Nature paper which found that ECS could be higher than previous estimates have suggested – their central estimate was 3.7C. Brown tells Carbon Brief:
“They appear to be comparing the IPCC ECS ‘likely’ range of 1.5 to 4.5C to their constrained ECS model range. This is not an appropriate comparison because the 16 models that they use do not span the entire uncertainty range of ECS.
“For example, no model that they investigate has an ECS below 2.2C. Thus their claim that they reduced uncertainty in ECS by 60% comes partly from the coincidence of which models happened to be included in their study.”
“By contrast, Cox et al started from climate-model values that are at the upper end of the IPCC range and used evidence to effectively rule out catastrophically high values.”
Forster adds that the methods used in the present study are “enviably simple” and will leave climate scientists asking, “why didn’t I think of that?” He says:
“In my view, Cox and colleagues’ estimate and the estimates produced by analysing the historical energy budget carry the most weight, because they are based on simpler physical theories of climate forcing and response, and do not directly require the use of a climate model that correctly represents clouds.”
(Improving the representation of clouds in climate models should be a major priority for future research, scientists recently told Carbon Brief.)
In the first article of a week-long series focused on climate modelling, Carbon Brief explains in detail how scientists use computers to understand our changing climate…
The use of computer models runs right through the heart of climate science.
From helping scientists unravel cycles of ice ages hundreds of thousands of years ago to making projections for this century or the next, models are an essential tool for understanding the Earth’s climate.
But what is a climate model? What does it look like? What does it actually do? These are all questions that anyone outside the world of climate science might reasonably ask.
Carbon Brief has spoken to a range of climate scientists in order to answer these questions and more. What follows is an in-depth Q&A on climate models and how scientists use them. You can use the links below to navigate to a specific question.
A global climate model typically contains enough computer code to fill 18,000 pages of printed text; it will have taken hundreds of scientists many years to build and improve; and it can require a supercomputer the size of a tennis court to run.
The models themselves come in different forms – from those that just cover one particular region of the world or part of the climate system, to those that simulate the atmosphere, oceans, ice and land for the whole planet.
The output from these models drives forward climate science, helping scientists understand how human activity is affecting the Earth’s climate. These advances have underpinned climate policy decisions on national and international scales for the past five decades.
In many ways, climate modelling is just an extension of weather forecasting, but focusing on changes over decades rather than hours. In fact, the UK’s Met Office Hadley Centre uses the same “Unified Model” as the basis for both tasks.
The vast computing power required for simulating the weather and climate means today’s models are run using massive supercomputers.
The Met Office Hadley Centre’s three new Cray XC40 supercomputers, for example, are together capable of 14,000 trillion calculations a second. The timelapse video below shows the third of these supercomputers being installed in 2017.
Fundamental physical principles
So, what exactly goes into a climate model? At their most basic level, climate models use equations to represent the processes and interactions that drive the Earth’s climate. These cover the atmosphere, oceans, land and ice-covered regions of the planet.
The models are based on the same laws and equations that underpin scientists’ understanding of the physical, chemical and biological mechanisms going on in the Earth system.
For example, scientists want climate models to abide by fundamental physical principles, such as the first law of thermodynamics (also known as the law of conservation of energy), which states that in a closed system, energy cannot be lost or created, only changed from one form to another.
Then there are the equations that describe the dynamics of what goes on in the climate system, such as the Clausius-Clapeyron equation, which characterises the relationship between the temperature of the air and its maximum water vapour pressure.
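For illustration, a widely used approximation to the Clausius-Clapeyron relation is the August-Roche-Magnus formula. The sketch below is not the scheme any particular model uses – the coefficients are one common published choice – but it captures the relationship the text describes: warmer air can hold more water vapour.

```python
import math

def saturation_vapour_pressure(t_celsius):
    """Approximate saturation vapour pressure (hPa) over liquid water.

    Uses the August-Roche-Magnus approximation to the Clausius-Clapeyron
    relation; the coefficients here are one common published choice.
    """
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

# Warmer air can hold more water vapour -- roughly 7% more per degree C.
for t in (0, 10, 20, 30):
    print(f"{t:2d}C -> {saturation_vapour_pressure(t):6.2f} hPa")
```

This roughly 7%-per-degree scaling is one reason scientists expect heavy rainfall to intensify in a warming climate.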
The most important of these are the Navier-Stokes equations of fluid motion, which capture the speed, pressure, temperature and density of the gases in the atmosphere and the water in the ocean.
The Navier-Stokes equations for “incompressible” flow in three dimensions (x, y and z). (Although the air in our atmosphere is technically compressible, it is relatively slow-moving and is, therefore, treated as incompressible in order to simplify the equations.) Note: this set of equations is simpler than the ones a climate model will use because they need to calculate flows across a rotating sphere.
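The equations referred to in the caption above do not reproduce here, but in their standard textbook form – assuming constant density ρ and kinematic viscosity ν – the incompressible Navier-Stokes equations read:

```latex
% Mass continuity plus momentum, for velocity u = (u, v, w),
% pressure p, density rho, kinematic viscosity nu and gravity g.
\nabla \cdot \mathbf{u} = 0
\qquad
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\nabla p
  + \nu \nabla^{2}\mathbf{u}
  + \mathbf{g}
```

The first equation enforces conservation of mass; the second balances acceleration against pressure gradients, viscous friction and gravity.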
Scientists translate each of these physical principles into equations that make up line after line of computer code – often running to more than a million lines for a global climate model.
The code in global climate models is typically written in the programming language Fortran. Developed by IBM in the 1950s, Fortran was the first “high-level” programming language. This means that rather than being written in a machine language – typically a stream of numbers – the code is written much like a human language.
You can see this in the example below, which shows a small section of code from one of the Met Office Hadley Centre models. The code contains commands such as “IF”, “THEN” and “DO”. When the model is run, it is first translated (automatically) into machine code that the computer understands.
A section of code from HadGEM2-ES (as used for CMIP5) in Fortran programming language. The code is from within the plant physiology section that starts to look at how the different vegetation types absorb light and moisture. Credit: Dr Chris Jones, Met Office Hadley Centre
There are now many other programming languages available to climate scientists, such as C, Python, R, Matlab and IDL. However, the last four of these are applications that are themselves written in a more fundamental language (such as Fortran) and, therefore, are relatively slow to run. Fortran and C are generally used today for running a global model quickly on a supercomputer.
Throughout the code in a climate model are equations that govern the underlying physics of the climate system, from how sea ice forms and melts on Arctic waters to the exchange of gases and moisture between the land surface and the air above it.
The figure below shows how more and more climate processes have been incorporated into global models over the decades, from the mid-1970s through to the fourth assessment report (“AR4”) of the Intergovernmental Panel on Climate Change (IPCC), published in 2007.
Illustration of the processes added to global climate models over the decades, from the mid-1970s, through the first four IPCC assessment reports: first (“FAR”) published in 1990, second (“SAR”) in 1995, third (“TAR”) in 2001 and fourth (“AR4”) in 2007. (Note, there is also a fifth report, which was completed in 2014). Source: IPCC AR4, Fig 1.2
So, how does a model go about calculating all these equations?
Because of the complexity of the climate system and limitation of computing power, a model cannot possibly calculate all of these processes for every cubic metre of the climate system. Instead, a climate model divides up the Earth into a series of boxes or “grid cells”. A global model can have dozens of layers across the height and depth of the atmosphere and oceans.
The image below shows a 3D representation of what this looks like. The model then calculates the state of the climate system in each cell – factoring in temperature, air pressure, humidity and wind speed.
Illustration of grid cells used by climate models and the climatic processes that the model will calculate for each cell (bottom corner). Source: NOAA GFDL
For processes that happen on scales that are smaller than the grid cell, such as convection, the model uses “parameterisations” to fill in these gaps. These are essentially approximations that simplify each process and allow them to be included in the model. (Parameterisation is covered in the question on model tuning below.)
The size of the grid cells in a model is known as its “spatial resolution”. A relatively-coarse global climate model typically has grid cells that are around 100km in longitude and latitude in the mid-latitudes. Because the Earth is a sphere, the cells for a grid based on longitude and latitude are larger at the equator and smaller at the poles. However, it is increasingly common for scientists to use alternative gridding techniques – such as cubed-sphere and icosahedral – which don’t have this problem.
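The shrinking of longitude-latitude cells towards the poles follows directly from the geometry of a sphere – the east-west width of a cell scales with the cosine of latitude. A quick sketch (using a standard mean Earth radius of 6,371km):

```python
import math

EARTH_RADIUS_KM = 6371.0

def cell_width_km(lat_deg, dlon_deg=1.0):
    """East-west width of a longitude-latitude grid cell.

    The width shrinks with the cosine of latitude, which is why plain
    lat-lon grids end up with tiny cells near the poles.
    """
    return math.radians(dlon_deg) * EARTH_RADIUS_KM * math.cos(math.radians(lat_deg))

for lat in (0, 45, 80):
    print(f"1 degree of longitude at {lat:2d}N spans {cell_width_km(lat):6.1f} km")
```

At the equator a degree of longitude spans about 111km, but at 80 degrees north it spans under 20km – which is why cubed-sphere and icosahedral grids, with their more uniform cells, are increasingly popular.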
A high-resolution model will have more, smaller boxes. The higher the resolution, the more specific climate information a model can produce for a particular region – but this comes at a cost of taking longer to run because the model has more calculations to make.
The figure below shows how the spatial resolution of models improved between the first and fourth IPCC assessment reports. You can see how the detail in the topography of the land surface emerges as the resolution is improved.
Increasing spatial resolution of climate models used through the first four IPCC assessment reports: first (“FAR”) published in 1990, second (“SAR”) in 1995, third (“TAR”) in 2001 and fourth (“AR4”) in 2007. (Note, there is also a fifth report, which was completed in 2014). Source: IPCC AR4, Fig 1.2
A similar compromise has to be made for the “time step” of how often a model calculates the state of the climate system. In the real world, time is continuous, yet a model needs to chop time up into bite-sized chunks to make the calculations manageable.
Many models march through these chunks of time using a “leapfrog” scheme. As Williams explains:
“The role of the leapfrog in models is to march the weather forward in time, to allow predictions about the future to be made. In the same way that a child in the playground leapfrogs over another child to get from behind to in front, the model leapfrogs over the present to get from the past to the future.”
In other words, the model takes the climate information it has from the previous and present time steps to extrapolate forwards to the next one, and so on through time.
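A minimal sketch of the leapfrog idea, applied here to a toy oscillating system rather than a real weather model: each new state is computed from the state two steps back, jumping over the present. (The function and variable names are illustrative, not from any climate model.)

```python
import math

def leapfrog(f, y0, h, n_steps):
    """March y' = f(y) forward with the leapfrog scheme:
    y[n+1] = y[n-1] + 2h * f(y[n]).

    The first step is bootstrapped with a single Euler step, because
    leapfrog needs two past states before it can jump.
    """
    prev = list(y0)
    curr = [a + h * b for a, b in zip(y0, f(y0))]  # Euler start
    for _ in range(n_steps - 1):
        nxt = [p + 2.0 * h * d for p, d in zip(prev, f(curr))]
        prev, curr = curr, nxt
    return curr

# Toy "climate": a harmonic oscillator u'' = -u, written as a system.
osc = lambda y: (y[1], -y[0])
h, n = 0.01, 100
u, v = leapfrog(osc, (1.0, 0.0), h, n)
print(u, math.cos(n * h))  # close to the exact solution cos(t) at t = 1
```

With a small enough time step, the numerical solution tracks the exact one closely – the convergence behaviour Williams describes below.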
As with the size of grid cells, a smaller time step means the model can produce more detailed climate information. But it also means the model has more calculations to do in every run.
For example, calculating the state of the climate system for every minute of an entire century would require over 50m calculations for every grid cell – whereas only calculating it for each day would take 36,500. That’s quite a range – so how do scientists decide what time step to use?
The answer comes down to finding a balance, Williams tells Carbon Brief:
“Mathematically speaking, the correct approach would be to keep decreasing the time step until the simulations are converged and the results stop changing. However, we normally lack the computational resources to run the models with a time step this small. Therefore, we are forced to tolerate a larger time step than we would ideally like.”
For the atmosphere component of climate models, a time step of around 30 minutes “seems to be a reasonable compromise” between accuracy and computer processing time, says Williams:
“Any smaller and the improved accuracy would not be sufficient to justify the extra computational burden. Any larger and the model would run very quickly, but the simulation quality would be poor.”
Bringing all these pieces together, a climate model can produce a representation of the whole climate system at 30-minute intervals over many decades or even centuries.
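The step counts quoted above can be checked with a few lines of arithmetic, showing where a 30-minute time step sits between the two extremes:

```python
# Number of time steps needed to cover one century, per grid cell.
MINUTES_PER_CENTURY = 100 * 365.25 * 24 * 60   # over 50 million
DAYS_PER_CENTURY = 100 * 365                    # 36,500

print(f"per-minute steps: {MINUTES_PER_CENTURY:,.0f}")
print(f"per-day steps:    {DAYS_PER_CENTURY:,}")
print(f"30-minute steps:  {MINUTES_PER_CENTURY / 30:,.0f}")
```

The 30-minute compromise still means well over a million calculations per grid cell per century – multiplied across hundreds of thousands of cells and dozens of variables.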
As Dr Gavin Schmidt, director of the NASA Goddard Institute for Space Studies, describes in his TED talk in 2014, the interactions of small-scale processes in a model mean it creates a simulation of our climate – everything from the evaporation of moisture from the Earth’s surface and formation of clouds, to where the wind carries them and where the rain eventually falls.
Schmidt calls these “emergent properties” in his talk – features of the climate that aren’t specifically coded in the model, but are simulated by the model as a result of all the individual processes that are built in.
It is akin to the manager of a football team. He or she picks the team, chooses the formation and settles on the tactics, but once the team is out on the pitch, the manager cannot dictate if and when the team scores or concedes a goal. In a climate model, scientists set the ground rules based on the physics of the Earth system, but it is the model itself that creates the storms, droughts and sea ice.
So to summarise: scientists put the fundamental physical equations of the Earth’s climate into a computer model, which is then able to reproduce – among many other things – the circulation of the oceans, the annual cycle of the seasons, and the flows of carbon between the land surface and the atmosphere.
You can watch the whole of Schmidt’s talk below.
While the above broadly explains what a climate model is, there are many different types. Read on to the question below to explore these in more detail.
The earliest and most basic numerical climate models are Energy Balance Models (EBMs). EBMs do not simulate the climate, but instead consider the balance between the energy entering the Earth’s atmosphere from the sun and the heat released back out to space. The only climate variable they calculate is surface temperature. The simplest EBMs only require a few lines of code and can be run in a spreadsheet.
Many of these models are “zero-dimensional”, meaning they treat the Earth as a whole; essentially, as a single point. Others are 1D, such as those that also factor in the transfer of energy across different latitudes of the Earth’s surface (which is predominantly from the equator to the poles).
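As the text notes, the simplest zero-dimensional EBM fits in a few lines. The sketch below balances absorbed sunlight against thermal radiation emitted according to the Stefan-Boltzmann law; the solar constant and albedo values are standard round figures:

```python
# A minimal zero-dimensional energy balance model: absorbed sunlight
# equals emitted thermal radiation, sigma * T^4 (Stefan-Boltzmann law).
SOLAR_CONSTANT = 1361.0   # W/m2 arriving at the top of the atmosphere
ALBEDO = 0.3              # fraction of sunlight reflected back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m2/K4

# Divide by 4 because sunlight is intercepted over a disc but the
# Earth radiates over the whole sphere (area ratio 1:4).
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4
temperature_k = (absorbed / SIGMA) ** 0.25

print(f"{temperature_k:.0f} K")  # about 255 K (-18C)
```

This gives the “effective” temperature with no greenhouse effect, about 255K; the roughly 33C gap to the observed global average of about 288K is the natural greenhouse effect.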
A step along from EBMs are Radiative Convective Models, which simulate the transfer of energy through the height of the atmosphere – for example, by convection as warm air rises. Radiative Convective Models can calculate the temperature and humidity of different layers of the atmosphere. These models are typically 1D – only considering energy transport up through the atmosphere – but they can also be 2D.
The next level up are General Circulation Models (GCMs), also called Global Climate Models, which simulate the physics of the climate itself. This means they capture the flows of air and water in the atmosphere and/or the oceans, as well as the transfer of heat.
Early GCMs only simulated one aspect of the Earth system – such as in “atmosphere-only” or “ocean-only” models – but they did this in three dimensions, incorporating many kilometres of height in the atmosphere or depth of the oceans in dozens of model layers.
More sophisticated “coupled” models have brought these different aspects together, linking together multiple models to provide a comprehensive representation of the climate system. Coupled atmosphere-ocean general circulation models (or “AOGCMs”) can simulate, for example, the exchange of heat and freshwater between the land and ocean surface and the air above.
The infographic below shows how modellers have gradually incorporated individual model components into global coupled models over recent decades.
Graphic by Rosamund Pearce; based on the work of Dr Gavin Schmidt.
Over time, scientists have gradually added in other aspects of the Earth system to GCMs. These would have once been simulated in standalone models, such as land hydrology, sea ice and land ice.
The most recent subset of GCMs now incorporate biogeochemical cycles – the transfer of chemicals between living things and their environment – and how they interact with the climate system. These “Earth System Models” (ESMs) can simulate the carbon cycle, nitrogen cycle, atmospheric chemistry, ocean ecology and changes in vegetation and land use, which all affect how the climate responds to human-caused greenhouse gas emissions. They have vegetation that responds to temperature and rainfall and, in turn, changes uptake and release of carbon and other greenhouse gases to the atmosphere.
“The GCMs were the models that were used maybe in the 1980s. So these were largely put together by the atmospheric physicists, so it’s all to do with energy and mass and water conservation, and it’s all the physics of moving those around. But they had a relatively limited representation of how the atmosphere then interacts with the ocean and the land surface. Whereas an ESM tries to incorporate those land interactions and those ocean interactions, so you could regard an ESM as a ‘pimped’ version of a GCM.”
There are also Regional Climate Models (“RCMs”) which do a similar job as GCMs, but for a limited area of the Earth. Because they cover a smaller area, RCMs can generally be run more quickly and at a higher resolution than GCMs. A model with a high resolution has smaller grid cells and therefore can produce climate information in greater detail for a specific area.
RCMs are one way of “downscaling” global climate information to a local scale. This means taking information provided by a GCM or coarse-scale observations and applying it to a specific area or region. Downscaling is covered in more detail under a later question.
Integrated Assessment Models: IAMs are computer models that analyse a broad range of data – e.g. physical, economic and social – to produce information that can be used to help decision-making. For climate research, specifically, IAMs are typically used to project future greenhouse gas emissions and climate impacts, and the benefits and costs of policy options that could be implemented to tackle them.
Finally, a subset of climate modelling involves Integrated Assessment Models (IAMs). These add aspects of society to a simple climate model, simulating how population, economic growth and energy use affect – and interact with – the physical climate.
IAMs produce scenarios of how greenhouse gas emissions may vary in future. Scientists can then run these scenarios through ESMs to generate climate change projections – providing information that can be used to inform climate and energy policies around the world.
In climate research, IAMs are typically used to project future greenhouse gas emissions and the benefits and costs of policy options that could be implemented to tackle them. For example, they are used to estimate the social cost of carbon – the monetary value of the impact, both positive and negative, of every additional tonne of CO2 that is emitted.
What are the inputs and outputs for a climate model?
If the previous section looked at what is inside a climate model, this one focuses on what scientists put into a model and get out the other side.
Climate models are run using data on the factors that drive the climate, and projections about how these might change in the future. Climate model results can run to petabytes of data, including readings every few hours across thousands of variables in space and time, from temperature to clouds to ocean salinity.
The main inputs into models are the external factors that change the amount of the sun’s energy that is absorbed by the Earth, or how much is trapped by the atmosphere.
These external factors are called “forcings”. They include changes in the sun’s output, long-lived greenhouse gases – such as CO2, methane (CH4), nitrous oxide (N2O) and halocarbons – as well as tiny particles called aerosols that are emitted when burning fossil fuels, and from forest fires and volcanic eruptions. Aerosols reflect incoming sunlight and influence cloud formation.
Typically, all these individual forcings are run through a model either as a best estimate of past conditions or as part of future “emission scenarios”. These are potential pathways for the concentration of greenhouse gases in the atmosphere, based on how technology, energy and land use change over the centuries ahead.
Today, most model projections use one or more of the “Representative Concentration Pathways” (RCPs), which provide plausible descriptions of the future, based on socio-economic scenarios of how global society grows and develops. You can read more about the different pathways in this earlier Carbon Brief article.
Models also use estimates of past forcings to examine how the climate changed over the past 200, 1,000, or even 20,000 years. Past forcings are estimated using evidence of changes in the Earth’s orbit, historical greenhouse gas concentrations, past volcanic eruptions, changes in sunspot counts, and other records of the distant past.
Then there are climate model “control runs”, where radiative forcing is held constant for hundreds or thousands of years. This allows scientists to compare the modelled climate with and without changes in human or natural forcings, and assess how much “unforced” natural variability occurs.
Climate models generate a nearly complete picture of the Earth’s climate, including thousands of different variables across hourly, daily and monthly timeframes.
These outputs include temperatures and humidity of different layers of the atmosphere from the surface to the upper stratosphere, as well as temperatures, salinity and acidity (pH) of the oceans from the surface down to the sea floor.
Models also produce estimates of snowfall, rainfall, snow cover and the extent of glaciers, ice sheets and sea ice. They generate wind speed, strength and direction, as well as climate features, such as the jet stream and ocean currents.
More unusual model outputs include cloud cover and height, along with more technical variables, such as surface upwelling longwave radiation – how much energy is emitted by the surface back up to the atmosphere – or how much sea salt comes off the ocean during evaporation and is accumulated on land.
What types of experiments do scientists run on climate models?
Climate models are used by scientists to answer many different questions, including why the Earth’s climate is changing and how it might change in the future if greenhouse gas emissions continue.
Models can help work out what has caused observed warming in the past, as well as how big a role natural factors play compared to human factors.
Scientists run many different experiments to simulate climates of the past, present and future. They also design tests to look at the performance of specific parts of different climate models. Modellers run experiments on what would happen if, say, we suddenly quadrupled CO2, or if geoengineering approaches were used to cool the climate.
Many different groups run the same experiments on their climate models, producing what is called a model ensemble. These model ensembles allow researchers to examine differences between climate models, as well as better capture the uncertainty in future projections. Experiments that modellers do as part of the Coupled Model Intercomparison Projects (CMIPs) include:
Historical runs
These historical runs are not “fit” to actual observed temperatures or rainfall, but rather emerge from the physics of the model. This means they allow scientists to compare model predictions (“hindcasts”) of the past climate to recorded climate observations. If climate models are able to successfully hindcast past climate variables, such as surface temperature, this gives scientists more confidence in model forecasts of the future.
Historical runs are also useful for determining how large a role human activity plays in climate change (called “attribution”). For example, the chart below compares two model variants against the observed climate – with only natural forcings (blue shading) and model runs with both human and natural forcings (pink shading).
Natural-only runs only include natural factors such as changes in the sun’s output and volcanoes, but they assume greenhouse gases and other human factors remain unchanged at pre-industrial levels. Human-only runs hold natural factors unchanged and only include the effects of human activities, such as increasing atmospheric greenhouse gas concentrations.
By comparing these two scenarios (and a combined “all-factors” run), scientists can assess the relative contributions to observed climate changes from human and natural factors. This helps them to figure out what proportion of modern climate change is due to human activity.
Future warming scenarios
The IPCC’s fifth assessment report focused on four future warming scenarios, known as the Representative Concentration Pathway (RCP) scenarios. These look at how the climate might change from present through to 2100 and beyond.
Many things that drive future emissions, such as population and economic growth, are difficult to predict. Therefore, these scenarios span a wide range of futures, from a business-as-usual world where little or no mitigation actions are taken (RCP6.0 and RCP8.5) to a world in which aggressive mitigation generally limits warming to no more than 2C (RCP2.6). You can read more about the different RCPs here.
These RCP scenarios specify different amounts of radiative forcings. Models use those forcings to examine how the Earth’s system will change under each of the different pathways. The upcoming CMIP6 exercise, associated with the IPCC sixth assessment report, will add four new RCP scenarios to fill in the gaps around the four already in use, including a scenario that meets the 1.5C temperature limit.
Control runs
Control runs are useful to examine how natural variability is expressed in models, in the absence of other changes. They are also used to diagnose “model drift”, where spurious long-term changes occur in the model that are unrelated to either natural variability or changes to external forcing.
If a model is “drifting” it will experience changes beyond the usual year-to-year and decade-to-decade natural variability, even though the factors affecting the climate, such as greenhouse gas concentrations, are unchanged.
Model control runs start the model during a period before modern industrial activity dramatically increased greenhouse gases. They then let the model run for hundreds or thousands of years without changing greenhouse gases, solar activity, or any other external factors that affect the climate. This differs from a natural-only run as both human and natural factors are left unchanged.
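A simple way to diagnose drift is to fit a linear trend to a control run’s global temperature series: with all forcings held fixed, a persistent slope points to drift rather than natural wiggles. This is an illustrative sketch with made-up numbers, not any modelling centre’s actual diagnostic:

```python
import math

def drift_per_century(years, temps):
    """Least-squares linear trend of a temperature series, in degrees
    per 100 years. A well-behaved control run should be near zero."""
    n = len(years)
    my = sum(years) / n
    mt = sum(temps) / n
    slope = (sum((y - my) * (t - mt) for y, t in zip(years, temps))
             / sum((y - my) ** 2 for y in years))
    return 100.0 * slope

# Synthetic control run: a flat climate plus small year-to-year wiggles.
years = list(range(500))
temps = [14.0 + 0.1 * math.sin(0.5 * y) for y in years]
print(f"{drift_per_century(years, temps):+.3f} C per century")
```

A series that trends steadily upward with forcings fixed, by contrast, would return a clearly non-zero value and flag the model for investigation.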
Atmospheric model intercomparison project (AMIP) runs
Climate models include the atmosphere, land and ocean. AMIP runs effectively “turn off” everything except the atmosphere, using fixed values for the land and ocean based on observations. For example, AMIP runs use observed sea surface temperatures as an input to the model, allowing the land surface temperature and the temperature of the different layers of the atmosphere to respond.
Normally climate models will have their own internal variability – short-term climate cycles in the oceans such as El Niño and La Niña events – that occur at different times than what happens in the real world. AMIP runs allow modellers to match ocean temperatures to observations, so that internal variability in the models occurs at the same time as in the observations and changes over time in both are easier to compare.
Abrupt 4x CO2 runs
Climate models comparison projects, such as CMIP5, generally request that all models undertake a set of “diagnostic” scenarios to test performance across various criteria.
One of these tests is an “abrupt” increase in CO2 from pre-industrial levels to four times higher – from 280 parts per million (ppm) to 1,120ppm – holding all other factors that influence the climate constant. (For context, current CO2 concentrations are around 400ppm.) This allows scientists to see how quickly the Earth’s temperature responds to changes in CO2 in their model compared to others.
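The extra energy such a jump traps can be estimated with the widely used logarithmic approximation for CO2 radiative forcing (from Myhre et al., 1998) – an approximation, not what a full climate model computes internally:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing from a change in CO2 concentration,
    in W/m2, using the logarithmic fit of Myhre et al. (1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"{co2_forcing(1120):.1f} W/m2")  # abrupt 4xCO2: about 7.4 W/m2
print(f"{co2_forcing(400):.1f} W/m2")   # present-day-ish levels: about 1.9 W/m2
```

The logarithm is why each doubling of CO2 adds roughly the same forcing (about 3.7W/m2), rather than the forcing growing in proportion to concentration.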
One of 42 panels displayed throughout the Gare du Nord metro station in Paris, honouring Syukuro Manabe and his contributions to climate science, to mark the COP21 UN climate change conference in 2015. The equations were used by Manabe in his seminal climate model in the late 1960s. Credit: NOAA/Rory O’Connor.
1% CO2 runs
Another diagnostic test increases CO2 concentrations from pre-industrial levels by 1% per year, until CO2 ultimately quadruples and reaches 1,120ppm. These scenarios also hold all other factors affecting the climate unchanged.
This allows modellers to isolate the effects of gradually increasing CO2 from everything else going on in more complicated scenarios, such as changes in aerosols and other greenhouse gases such as methane.
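A 1% annual increase compounds like interest, so the time taken to quadruple falls out of a short calculation:

```python
import math

# Quadrupling from 280ppm at 1% per year takes ln(4)/ln(1.01) years.
years_to_quadruple = math.log(4) / math.log(1.01)
print(f"{years_to_quadruple:.0f} years")  # about 139 years

# The same result by compounding year on year:
conc = 280.0
for year in range(140):
    conc *= 1.01
print(f"{conc:.0f} ppm after 140 years")  # just past 1,120ppm
```

So a 1% run passes through a doubling of CO2 at around year 70 and a quadrupling at around year 140 – convenient milestones for comparing how quickly different models warm.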
Palaeoclimate runs
Here, models are run for climates of the past (palaeoclimate). Models have been run for a number of different periods: the past 1,000 years; the Holocene spanning the past 12,000 years; the last glacial maximum 21,000 years ago, during the last ice age; the last interglacial around 127,000 years ago; the mid-Pliocene warm period 3.2m years ago; and the unusual period of rapid warming called the Paleocene-Eocene thermal maximum around 55m years ago.
These models use the best estimates available for factors affecting the Earth’s past climate – including solar output and volcanic activity – as well as longer-term changes in the Earth’s orbit and the location of the continents.
These palaeoclimate model runs can help researchers understand how large past swings in the Earth’s climate occurred, such as those during ice ages, and how sea level and other factors changed during periods of warming and cooling. These past changes offer a guide to the future, if warming continues.
Specialised model tests
As part of CMIP6, research groups around the world are conducting many different experiments. These include looking at the behaviour of aerosols in models, cloud formation and feedbacks, ice sheet responses to warming, monsoon changes, sea level rise, land-use changes, oceans and the effects of volcanoes.
There are more than two dozen scientific institutions around the world that develop climate models, with each centre often building and refining several different models at the same time.
The models they produce are typically – though rather unimaginatively – named after the centres themselves. Hence, for example, the Met Office Hadley Centre has developed the “HadGEM3” family of models. Meanwhile, the NOAA Geophysical Fluid Dynamics Laboratory has produced the “GFDL ESM2M” Earth system model.
That said, models are increasingly collaborative efforts, which is often reflected in their names. For example, the Hadley Centre and the wider Natural Environment Research Council (NERC) community in the UK have jointly developed the “UKESM1” Earth system model. This has the Met Office Hadley Centre’s HadGEM3 model at its core.
The fact that there are numerous modelling centres around the world going through similar processes is a “really important strand of climate research”, says Dr Chris Jones, who leads the Met Office Hadley Centre’s research into vegetation and carbon cycle modelling and their interactions with climate. He tells Carbon Brief:
“There are maybe the order of 10 or 15 kind of big global climate modelling centres who produce simulations and results. And, by comparing what the different models and the different sets of research say, you can judge which things to have confidence in, where they agree, and where we have less confidence, where there is disagreement. That guides the model development process.”
If there was just one model, or one modelling centre, there would be much less of an idea of its strengths and weaknesses, says Jones. And while the different models are related – there is a lot of collaborative research and discussion that goes on between the groups – they do not usually go to the extent of using the same lines of code. He explains:
“When we develop a new [modelling] scheme, we would publish the equations of that scheme in the scientific literature, so it’s peer reviewed. It’s publicly available and other centres can compare that with what they use.”
Below, Carbon Brief has mapped the climate modelling centres that contributed to the fifth Coupled Model Intercomparison Project (CMIP5), which fed into the IPCC’s fifth assessment report. Mouse over the individual centres in the map to find out more about them.
The majority of modelling centres are in North America and Europe. However, it is worth noting that the CMIP5 list is not an exhaustive inventory of modelling centres – particularly as it focuses on institutions with global climate models. This means the list does not include centres that concentrate on regional climate modelling or weather forecasting, says Jones:
“For example, we do a lot of collaborative work with Brazil, who concentrate their GCMs on weather and seasonal forecasting. In the past, they have even used a version of HadGEM2 to submit data to CMIP5. For CMIP6 they hope to run the Brazil Earth system model (‘BESM’).”
The Max Planck Institute for Meteorology (MPI-M), for example, asks users to sign a licence agreement before releasing its model code. The institute points out that the main purpose of the agreement is to let it know who is using the models and to establish a way of getting in touch with the users. It says:
“[T]he MPI-M software developed must remain controllable and documented. This is the spirit behind the following licence agreement…It is also important to provide feedback to the model developers, to report about errors and to suggest improvements of the code.”
With so many institutions developing and running climate models, there is a risk that each group approaches its modelling in a different way, reducing how comparable their results will be.
This is where the Coupled Model Intercomparison Project (“CMIP”) comes in. CMIP is a framework for climate model experiments, allowing scientists to analyse, validate and improve GCMs in a systematic way.
The “coupled” in the name means that all the climate models in the project are coupled atmosphere-ocean GCMs. The Met Office’s Dr Chris Jones explains the significance of the “intercomparison” part of the name:
“The idea of an intercomparison came from the fact that many years ago different modelling groups would have different models, but they would also set them up slightly differently, and they would run different numerical experiments with them. When you come to compare the results you’re never quite sure if the differences are because the models are different or because they were set up in a different way.”
So, CMIP was designed to be a way to bring into line all the climate model experiments that different modelling centres were doing.
Since its inception in 1995, CMIP has been through several generations, with each iteration designing more sophisticated experiments. A new generation comes round every 5-6 years.
In its early years, CMIP experiments included, for example, modelling the impact of a 1% annual increase in atmospheric CO2 concentrations (as mentioned above). In later iterations, the experiments incorporated more detailed emissions scenarios, such as the Representative Concentration Pathways (“RCPs”).
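The idealised 1% experiment compounds like interest, so CO2 doubles after roughly 70 years. A quick sketch (purely illustrative – this is not model code) shows why:

```python
# Illustrative sketch (not from any modelling centre's code): the idealised
# CMIP experiment raises atmospheric CO2 by 1% per year, compounding, so
# concentrations double after roughly 70 years.
def years_to_double(rate=0.01):
    """Return the number of whole years until CO2 first doubles."""
    co2, years = 1.0, 0
    while co2 < 2.0:
        co2 *= 1 + rate
        years += 1
    return years

print(years_to_double())  # 70 (since ln 2 / ln 1.01 is about 69.7)
```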
Setting the models up in the same way and using the same inputs means that scientists know that any differences in the climate change projections coming out of the models are down to differences in the models themselves. This is the first step in trying to understand what is causing those differences.
The number of researchers publishing papers based on CMIP data “has grown from a few dozen to well over a thousand”, says Prof Veronika Eyring, chair of the CMIP Panel, in a recent interview with Nature Climate Change.
With the model simulations for CMIP5 complete, CMIP6 is now underway, which will involve more than 30 modelling centres around the world, Eyring says.
As well as having a core set of “DECK” (Diagnostic, Evaluation, and Characterisation of Klima) modelling experiments, CMIP6 will also have a set of additional experiments to answer specific scientific questions. These are divided into individual Model Intercomparison Projects, or “MIPs”. So far, 21 MIPs have been endorsed, Eyring says:
“Proposals were submitted to the CMIP Panel and received endorsement if they met 10 community-set criteria, broadly: advancing progress on gaps identified in previous CMIP phases, contributing to the WCRP Grand Challenges, and having at least eight model groups willing to participate.”
You can see the 21 MIPs and the overall experiment design of CMIP6 in the schematic below.
Schematic of the CMIP/CMIP6 experimental design and the 21 CMIP6-Endorsed MIPs. Reproduced with permission from Simpkins (2017).
There is a special issue of the journal Geoscientific Model Development on CMIP6, with 28 published papers covering the overall project and the specific MIPs.
The results of CMIP6 model runs will form the basis of much of the research feeding into the sixth assessment report of the IPCC. However, it is worth noting that CMIP is entirely independent from the IPCC.
How do scientists validate climate models? How do they check them?
Scientists test, or “validate”, their models by comparing them against real-world observations. This might include, for example, comparing the model projections against actual global surface temperatures over the past century.
Climate models can be tested against past changes in the Earth’s climate. These comparisons with the past are called “hindcasts”, as mentioned above.
Scientists do not “tell” their models how the climate has changed in the past – they do not feed in historical temperature readings, for example. Instead, they feed in information on past climate forcings and the models generate a “hindcast” of historical conditions. This can be a useful way to validate models.
Specific events that have a large impact on the climate, such as volcanic eruptions, can also be used to test model performance. The climate responds relatively quickly to volcanic eruptions, so modellers need to wait only a few years to see whether models accurately capture what happens after big eruptions. Studies show models accurately project the changes in temperature and atmospheric water vapour that follow major eruptions.
Climate models are also compared against the average state of the climate, known as the “climatology”. For example, researchers check to see if the average temperature of the Earth in winter and summer is similar in the models and reality. They also compare sea ice extent between models and observations, and may choose to use models that do a better job of representing the current amount of sea ice when trying to project future changes.
Experiments where many different models are run with the same greenhouse gas concentrations and other “forcings”, as in model intercomparison projects, provide a way to look at similarities and differences between models.
For many parts of the climate system, the average of all models can be more accurate than most individual models. Researchers have found that forecasts can show better skill, higher reliability and consistency when several independent models are combined.
One way to check if models are reliable is to compare projected future changes against how things turn out in the real world. This can be hard to do with long-term projections, however, because it would take a long time to assess how well current models perform.
Recently, Carbon Brief found that models produced by scientists since the 1970s have generally done a good job of projecting future warming. The video below shows an example of model hindcasts and forecasts compared to actual surface temperatures.
As mentioned above, scientists do not have a limitless supply of computing power at their disposal, and so it is necessary for models to divide up the Earth into grid cells to make the calculations more manageable.
This means that at each timestep, the model calculates the average climate of each grid cell. However, many processes in the climate system and on the Earth’s surface occur at scales smaller than a single cell.
For example, the height of the land surface will be averaged across a whole grid cell in a model, meaning it potentially overlooks the detail of any physical features such as mountains and valleys. Similarly, clouds can form and dissipate at scales that are much smaller than a grid cell.
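The loss of sub-grid detail is easy to see with some invented terrain heights: once a grid cell is reduced to one average value, the mountains and valleys inside it disappear.

```python
# Toy sketch of why grid-cell averaging loses sub-grid detail: the model
# only sees one height per cell, so a mountain and a valley inside the
# same cell collapse into flat, middling terrain. Heights are invented.
cell_heights_m = [200, 1800, 2600, 400]  # sample points inside one grid cell
cell_average = sum(cell_heights_m) / len(cell_heights_m)
print(cell_average)  # 1250.0 -- the peaks and valleys have vanished
```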
To solve this problem, these variables are “parameterised”, meaning their values are defined in the computer code rather than being calculated by the model itself.
The graphic below shows some of the processes that are typically parameterised in models.
Parameterisations may also be used as a simplification where a climate process isn’t well understood. They are one of the main sources of uncertainty in climate models.
A list of 20 climate processes and properties that typically need to be parameterised within global climate models. Image courtesy of MetEd, The COMET Program, UCAR.
In many cases, it is not possible to narrow a parameterised variable down to a single value, so the model needs to include an estimate. Scientists run tests with the model to find the value – or range of values – that allows the model to give the best representation of the climate.
This complex process is known variously as model “tuning” or “calibration”. While it is a necessary part of climate modelling, it is not a process that is specific to it. In 1922, for example, a Royal Society paper on theoretical statistics identified “parameter estimation” as one of three steps in modelling.
Dr James Screen, assistant professor in climate science at the University of Exeter, describes how scientists might tune their model for the albedo (reflectivity) of sea ice. He tells Carbon Brief:
“In a lot of sea ice models, the albedo of sea ice is a parameter that is set to a particular value. We don’t know the ‘correct’ value of the ice albedo. There is some uncertainty range associated with observations of albedo. So whilst developing their models, modelling centres may experiment with slightly different – but plausible – parameter values in an attempt to model some basic features of the sea ice as closely as possible to our best estimates from observations. For example, they might want to make sure the seasonal cycle looks right or there is roughly the right amount of ice on average. This is tuning.”
If all parameters were 100% certain, then this calibration would not be necessary, Screen notes. But scientists’ knowledge of the climate is not perfect, because the evidence they have from observations is incomplete. Therefore, they need to test their parameter values in order to give sensible model output for key variables.
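The kind of tuning Screen describes can be sketched in a few lines. Everything below is invented for illustration – the "model" is a hypothetical one-parameter function, not a real sea ice scheme – but it shows the basic loop: try plausible parameter values and keep the one whose output best matches observations.

```python
# Minimal tuning sketch with made-up numbers: the "model" here is a toy
# function of one uncertain parameter (sea ice albedo), and we pick the
# plausible value whose output best matches an observed target.
def toy_model(albedo):
    # Hypothetical relationship: higher albedo means more sunlight is
    # reflected, so less energy is absorbed by the ice (W/m2, invented scale).
    incoming = 300.0
    return incoming * (1 - albedo)

observed_absorbed = 120.0                    # pretend observational best estimate
candidates = [0.50, 0.55, 0.60, 0.65, 0.70]  # plausible albedo range

best = min(candidates, key=lambda a: abs(toy_model(a) - observed_absorbed))
print(best)  # 0.6 -- toy_model(0.6) gives 120.0, matching the target
```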
Albedo: Albedo is a measure of how much of the sun’s energy is reflected by a surface. It is derived from the Latin word albus, meaning white. Albedo is measured as a percentage or fraction of the sun’s energy that is reflected away. Snow and ice tend to have a higher albedo than, for example, soil, forests and open water.
As most global models will contain parameterisation schemes, virtually all modelling centres undertake model tuning of some kind. A survey in 2014 (pdf) found that, in most cases, modellers tune their models to ensure that the long-term average state of the climate is accurate – including factors such as absolute temperatures, sea ice concentrations, surface albedo and sea ice extent.
The factor most often tuned for – in 70% of cases – is the radiation balance at the top of the atmosphere. This involves adjusting parameterisations, particularly of clouds – their microphysics, convection and cloud fraction – but also of snow, sea ice albedo and vegetation.
This tuning does not involve simply “fitting” historical observations. Rather, if a reasonable choice of parameters leads to model results that differ dramatically from the observed climatology, modellers may decide to use a different choice. Similarly, if an update to a model leads to a wide divergence from observations, modellers may look for bugs or other factors that explain the difference.
As NASA Goddard Institute for Space Studies director Dr Gavin Schmidt tells Carbon Brief:
“Global mean trends are monitored for sanity, but not (generally) precisely tuned for. There is a lot of discussion on this point in the community, but everyone is clear this needs to be made more transparent.”
What is bias correction?
While climate models simulate the Earth’s climate well overall – including familiar climatic features, such as storms, monsoon rains, jet streams, trade winds and El Niño cycles – they are not perfect. This is particularly the case at the regional and local scales, where simulations can have substantial deviations from the observed climate, known as “biases”.
These biases occur because models are a simplification of the climate system and the large-scale grid cells that global models use can miss the detail of the local climate.
Dr Douglas Maraun, a climate scientist at the University of Graz, gives an example:
“Imagine you are a water engineer and have to protect a valley against flash floods from a nearby mountain creek. The protection is supposed to last for the next decades, so you have to account for future changes in rainfall over your river catchment. Climate models, even if they resolve the relevant weather systems, may be biased compared to the real world.”
For the water engineer, who runs the climate model output as an input for a flood risk model of the valley, such biases may be crucial, says Maraun:
“Assume a situation where you have freezing temperatures in reality, snow is falling and surface run-off from heavy rainfall is very low. But the model simulates positive temperatures, rainfall and a flash flood.”
In other words, taking the large-scale climate model output as is and running it through a flood model could give a misleading impression of flood risk in that specific valley.
To solve this issue – and produce climate projections that the water engineer can use in designing flood defences – scientists apply “bias correction” to climate model output. Maraun explains:
“Bias correction – sometimes called ‘calibration’ – is the process of accounting for biases in the climate model simulations to provide projections which are more consistent with the available observations.”
Essentially, scientists compare long-term statistics in the model output with observed climate data. Using statistical techniques, they then correct any biases in the model output to make sure it is consistent with current knowledge of the climate system.
Bias correction is often based on average climate information, Maraun notes, though more sophisticated approaches adjust extremes too.
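The simplest version of this idea – a mean-shift or "delta" adjustment – can be sketched as follows. All the numbers are invented, and real applications typically use more elaborate statistical methods, such as quantile mapping, that also adjust extremes.

```python
# A sketch of the simplest form of bias correction, a mean-shift or
# "delta" adjustment (real applications use more elaborate methods,
# such as quantile mapping). All numbers are invented.
obs_hist   = [2.0, 3.0, 4.0, 5.0]   # observed temperatures, historical period
model_hist = [4.0, 5.0, 6.0, 7.0]   # model output for the same period
model_fut  = [6.0, 7.0, 8.0, 9.0]   # raw model projection for the future

# The model runs 2C too warm on average over the historical period...
bias = sum(model_hist) / len(model_hist) - sum(obs_hist) / len(obs_hist)

# ...so subtract that offset from the projection before using it.
corrected = [t - bias for t in model_fut]
print(bias)       # 2.0
print(corrected)  # [4.0, 5.0, 6.0, 7.0]
```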
The bias correction step in the modelling process is particularly useful when scientists are considering aspects of the climate where thresholds are important, says Hawkins.
An example comes from a 2016 study, co-authored by Hawkins, on how shipping routes could open through Arctic sea ice because of climate change. He explains:
“The viability of Arctic shipping in future depends on the projected thickness of the sea ice, as different types of ship are unable to travel if the ice reaches a critical thickness at any point along the route. If the climate model simulates too much or too little ice for the present day in a particular location then the projections of ship route viability will also be incorrect.
“However, we are able to use observations of ice thickness to correct the spatial biases in the simulated sea ice thickness across the Arctic and produce projections which are more consistent than without a bias correction.”
In other words, by using bias correction to get the simulated sea ice in the model for the present day right, Hawkins and his colleagues can then have more confidence in their projections for the future.
Russian icebreaker at the North Pole. Credit: Christopher Michel via Flickr.
Typically, bias correction is applied only to model output, but in the past it has also been used within runs of models, explains Maraun:
“Until about a decade ago it was quite common to adjust the fluxes between different model components – for example, the ocean and atmosphere – in every model step towards the observed fields by so-called ‘flux corrections’”.
Recent advances in modelling mean flux corrections are largely no longer necessary. However, some researchers have suggested that flux corrections could still be used to help eliminate remaining biases in models, says Maraun:
“For instance, most GCMs simulate too cold a North Atlantic, a problem that has knock-on effects, for example, on the atmospheric circulation and rainfall patterns in Europe.”
So by nudging the model to keep its simulations of the North Atlantic Ocean on track (based on observed data), the idea is that this may produce, for example, more accurate simulations of rainfall for Europe.
However, there are potential pitfalls in using flux corrections, he adds:
“The downside of such approaches is that there is an artificial force in the model that pulls the model towards observations and such a force may even dampen the simulated climate change.”
In other words, if a model is not producing enough rainfall in Europe, it might be for reasons other than the North Atlantic, explains Maraun. For example, it might be because the modelled storm tracks are sending rainstorms to the wrong region.
This reinforces the point that scientists need to be careful not to apply bias correction without understanding the underlying reason for the bias, concludes Maraun:
“Climate researchers need to spend much more efforts to understand the origins of model biases, and researchers doing bias correction need to include this information into their research.”
In a recent perspectives article in Nature Climate Change, Maraun and his co-authors argue that “current bias correction methods might improve the applicability of climate simulations” but that they could not – and should not – be used to overcome more significant limitations with climate models.
How accurate are climate model projections of temperature?
One of the most important outputs of climate models is the projection of global surface temperatures.
In order to evaluate how well their models perform, scientists compare observations of the Earth’s climate with the models’ future temperature forecasts and historical temperature “hindcasts”. They can then assess the accuracy of the projections by looking at how individual climate models, and the average of all models, compare with observed warming.
Historical temperature changes since the late 1800s are driven by a number of factors, including increasing atmospheric greenhouse gas concentrations, aerosols, changes in solar activity, volcanic eruptions, and changes in land use. Natural variability also plays a role over shorter timescales.
If models do a good job of capturing how the climate responded to these factors in the past, researchers can be more confident that they will respond accurately to changes in the same factors in the future.
Carbon Brief has explored how climate models compare to observations in more detail in a recent analysis piece, looking at how surface temperature projections in climate models since the 1970s have matched up to reality.
Comparing models and observations can be a somewhat tricky exercise. The most often used values from climate models are for the temperature of the air just above the surface. However, observed temperature records are a combination of the temperature of the air just above the surface, over land, and the temperature of the surface waters of the ocean.
Comparing global air temperatures from the models to a combination of air temperatures and sea surface temperatures in the observations can create problems. To account for this, researchers have created what they call “blended fields” from climate models, which include sea surface temperatures of the oceans and surface air temperatures over land, in order to match what is actually measured in the observations.
These blended fields from models show slightly less warming than global surface air temperatures, as the air over the ocean warms faster than sea surface temperatures in recent years.
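A blended field boils down to an area-weighted combination of the two temperature types. The sketch below uses invented anomaly values purely to illustrate the arithmetic and why the blend runs slightly cooler than air temperature alone:

```python
# Hedged sketch of the "blended field" idea with invented numbers: combine
# air temperature over land with sea surface temperature (SST) over the
# ocean, weighted by the fraction of the globe each covers, to mimic what
# the observational records actually measure.
land_frac = 0.29          # roughly 29% of Earth's surface is land
land_air_anomaly = 1.2    # land surface air temperature anomaly (C, invented)
sst_anomaly = 0.6         # sea surface temperature anomaly (C, invented)
ocean_air_anomaly = 0.7   # air just above the ocean warms faster than the SST

blended = land_frac * land_air_anomaly + (1 - land_frac) * sst_anomaly
air_only = land_frac * land_air_anomaly + (1 - land_frac) * ocean_air_anomaly

print(round(blended, 3))   # 0.774
print(round(air_only, 3))  # 0.845 -- air-only shows slightly more warming
```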
Carbon Brief’s figure below shows both the average of air temperature from all CMIP5 models (dashed black line) and the average of blended fields from all CMIP5 models (solid black line). The grey area shows the uncertainty in the model results, known as the 95% confidence interval. Individual coloured lines represent different observational temperature estimates from groups, such as the Met Office Hadley Centre, NOAA and NASA.
The blended fields from models generally match the warming seen in observations fairly well, while the air temperatures from the models show a bit more warming as they include the temperature of the air over the ocean rather than of the sea surface itself. Observations are all within the 95% confidence interval of model runs, suggesting that models do a good job of reflecting the short-term natural variability driven by El Niño and other factors.
The longer period of model projections from 1880 through 2100 is shown in the figure below. It shows both the longer-term warming since the late 19th century and projections of future warming under a moderate emissions mitigation scenario (called “RCP4.5”), with global temperatures reaching around 2.5C above pre-industrial levels by 2100 (and around 2C above the 1970-2000 baseline shown in the figure).
Same as prior figure, but from 1880 to 2100. Projections through 2100 use RCP4.5. Note that this and the prior graph use a 1970-2000 baseline period. Chart by Carbon Brief using Highcharts.
Projections of the climate from the mid-1800s onwards agree fairly well with observations. There are a few periods, such as the early 1900s, where the Earth was a bit cooler than models projected, or the 1940s, where observations were a bit warmer.
Overall, however, the strong correspondence between modelled and observed temperatures increases scientists’ confidence that models are accurately capturing both the factors driving climate change and the level of short-term natural variability in the Earth’s climate.
For the period since 1998, when observations have been a bit lower than model projections, a recent Nature paper explores the reasons why this happened.
The researchers find that some of the difference is resolved by using blended fields from models. They suggest that the remainder of the divergence can be accounted for by a combination of short-term natural variability (mainly in the Pacific Ocean), small volcanoes and lower-than-expected solar output that was not included in models in their post-2005 projections.
Global average surface temperature is only one of many variables included in climate models, and models can be evaluated against many other climate metrics. There are specific “fingerprints” of human warming in the lower atmosphere, for example, that are seen in both models and observations.
What are the main limitations in climate modelling at the moment?
It is worth reiterating that climate models are not a perfect representation of the Earth’s climate – and nor can they be. As the climate is inherently chaotic, it is impossible to simulate with 100% accuracy, yet models do a pretty good job at getting the climate right.
The accuracy of projections made by models is also dependent on the quality of the forecasts that go into them. For example, scientists do not know if greenhouse gas emissions will fall, and so make estimates based on different scenarios of future socio-economic development. This adds another layer of uncertainty to climate projections.
Similarly, there are potential future changes that have little precedent in Earth’s recent history, making them extremely difficult to project. One example is the possibility that ice sheets destabilise as they melt, accelerating expected global sea level rise.
Yet, despite models becoming increasingly complex and sophisticated, there are still aspects of the climate system that they struggle to capture as well as scientists would like.
One of the main limitations of climate models is how well they represent clouds.
Clouds are a constant thorn in the side of climate scientists. They cover around two-thirds of the Earth at any one time, yet individual clouds can form and disappear within minutes; they can both warm and cool the planet, depending on the type of cloud and the time of day; and scientists have no records of what clouds were like in the distant past, making it harder to ascertain if and how they have changed.
A particular aspect of the difficulties in modelling clouds comes down to convection. This is the process whereby warm air at the Earth’s surface rises through the atmosphere, cools, and then the moisture it contains condenses to form clouds.
On hot days, the air warms quickly, which drives convection. This can bring intense, short-duration rainfall, often accompanied by thunder and lightning.
Convectional rainfall can occur on short timescales and in very specific areas. Global climate models, therefore, have a resolution that is too coarse to capture these rainfall events.
Instead, scientists use “parameterisations” (see above) that represent the average effects of convection over an individual grid cell. This means GCMs do not simulate individual storms and local high rainfall events, explains Dr Lizzie Kendon, senior climate extremes scientist at the Met Office Hadley Centre. She tells Carbon Brief:
“As a consequence, GCMs are unable to capture precipitation intensities on sub-daily timescales and summertime precipitation extremes. Thus, we would have low confidence in future projections of hourly rainfall or convective extremes from GCMs or coarse resolution RCMs.”
(Carbon Brief will be publishing an article later this week exploring climate model projections of precipitation.)
To help overcome this issue, scientists have been developing very high resolution climate models. These have grid cells that are a few kilometres wide, rather than tens of kilometres. These “convection-permitting” models can simulate larger convective storms without the need for parameterisation.
However, the tradeoff of having greater detail is that the models cannot yet cover the whole globe. Despite the smaller area – and using supercomputers – these models still take a very long time to run, particularly if scientists want to run lots of variations of the model, known as an “ensemble”.
For example, simulations that are part of the Future Climate For Africa IMPALA project (“Improving Model Processes for African Climate”) use convection-permitting models covering all of Africa, but only for one ensemble member, says Kendon. Similarly, the next set of UK Climate Projections, due next year (“UKCP18”), will be run for 10 ensemble members, but for just the UK.
But expanding these convection-permitting models to the global scale is still some way away, notes Kendon:
“It is likely to be many years before we can afford [the computing power for] convection-permitting global climate simulations, especially for multiple ensemble members.”
Related to the issue of clouds in global models is that of “double ITCZ”. The Intertropical Convergence Zone, or ITCZ, is a huge belt of low pressure that encircles the Earth near the equator. It governs the annual rainfall patterns of much of the tropics, making it a hugely important feature of the climate for billions of people.
Illustration of the Intertropical Convergence Zone (ITCZ) and the principal global circulation patterns in the Earth’s atmosphere. Source: Creative Commons
The ITCZ wanders north and south across the tropics each year, roughly tracking the position of the sun through the seasons. Global climate models do recreate the ITCZ in their simulations – which emerges as a result of the interaction between the individual physical processes coded in the model. However, as a Journal of Climate paper by scientists at Caltech in the US explains, there are some areas where climate models struggle to represent the position of the ITCZ correctly:
“[O]ver the eastern Pacific, the ITCZ is located north of the equator most of the year, meandering by a few degrees latitude around [the] six [degree line of latitude]. However, for a brief period in spring, it splits into two ITCZs straddling the equator. Current climate models exaggerate this split into two ITCZs, leading to the well-known double-ITCZ bias of the models.”
The main implication of this is that modellers have lower confidence in projections for how the ITCZ could change as the climate warms. But there are knock-on impacts as well, Dr Baoqiang Xiang, a climate modeller at the NOAA Geophysical Fluid Dynamics Laboratory, tells Carbon Brief:
“For example, most of current climate models predict a weakened trade wind along with the slowdown of the Walker circulation. The existence of [the] double ITCZ problem may lead to an underestimation of this weakened trade wind.”
(Trade winds are near-constant easterly winds that circle the Earth either side of the equator.)
In addition, a 2015 study in Geophysical Research Letters suggests that because the double ITCZ affects cloud and water vapour feedbacks in models, it therefore plays a role in the climate sensitivity.
Climate sensitivity: The amount of warming we can expect when carbon dioxide in the atmosphere reaches double what it was before the industrial revolution. There are two ways to express climate sensitivity: Transient Climate Response (TCR) is the warming at Earth’s surface we can expect at the point of doubling, while Equilibrium Climate Sensitivity (ECS) is the total amount of warming once the Earth has had time to adjust fully to the extra carbon dioxide.
They found that models with a strong double ITCZ have a lower value for equilibrium climate sensitivity (ECS), which indicates that “most models might have underestimated ECS”. If models underestimate ECS, the climate will warm more in response to human-caused emissions than their current projections would suggest.
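The stakes of getting ECS right can be seen with the standard back-of-envelope relationship in which equilibrium warming scales with the logarithm of the CO2 increase. The ECS values below are examples for illustration, not results from the study:

```python
import math

# Illustrative back-of-envelope use of climate sensitivity, using the
# standard approximation that equilibrium warming scales with the log of
# the CO2 increase. ECS values here are examples, not study results.
def equilibrium_warming(ecs, co2, co2_preindustrial=280.0):
    """Warming (C) once the climate fully adjusts to a CO2 level (ppm)."""
    return ecs * math.log(co2 / co2_preindustrial) / math.log(2.0)

co2 = 560.0  # a doubling of pre-industrial CO2
print(equilibrium_warming(3.0, co2))  # 3.0 -- by definition, at doubling
print(equilibrium_warming(4.0, co2))  # 4.0 -- a higher ECS means more warming
```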
The causes of the double ITCZ in models are complex, Xiang tells Carbon Brief, and have been the subject of numerous studies. There are likely to be a number of contributing factors, Xiang says, including the way convection is parameterised in models.
For example, a Proceedings of the National Academy of Sciences paper in 2012 suggested that the issue stems from most models not producing enough thick cloud over the “oft-overcast Southern Ocean”, leading to higher-than-usual temperatures over the Southern Hemisphere as a whole, and also a southward shift in tropical rainfall.
As for the question of when scientists might solve this issue, Xiang says it is a tough one to answer:
“From my point of view, I think we may not be able to completely resolve this issue in the coming decade. However, we have made significant progress with the improved understanding of model physics, increased model resolution, and more reliable observations.”
Finally, another common issue in climate models concerns the position of jet streams. Jet streams are meandering rivers of high-speed wind flowing high in the atmosphere. They can funnel weather systems west to east across the Earth.
As with the ITCZ, climate models recreate jet streams as a result of the fundamental physical equations contained in their code.
However, jet streams often appear to be too “zonal” in models – in other words, they are too strong and too straight, explains Dr Tim Woollings, a lecturer in physical climate science at the University of Oxford and former leader of the joint Met Office-Universities Process Evaluation Group for blocking and storm tracks. He tells Carbon Brief:
“In the real world, the jet veers north a little as it crosses the Atlantic (and a bit the Pacific). Because models underestimate this, the jet is often too far equatorward on average.”
As a result, models do not always get it right on the paths that low-pressure weather systems take – known as “storm tracks”. Storms are often too sluggish in models, says Woollings: they do not grow strong enough and they peter out too quickly.
There are ways to improve this, says Woollings, but some are more straightforward than others. In general, increasing the resolution of the model can help, Woollings says:
“For example, as we increase resolution, the peaks of the mountains get a little higher and this contributes to deflecting the jets a little north. More complicated things also happen; if we can get better, more active storms in the model, that can have a knock-on effect on the jet stream, which is partly driven by the storms.”
(Mountain peaks get higher as model resolution increases because the greater detail allows the model to “see” more of the mountain as it narrows towards the top.)
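The parenthetical above can be sketched numerically: sampling an idealised mountain profile on a coarse grid misses the narrow summit, while a finer grid “sees” much more of it. The triangular profile and the grid spacings here are hypothetical, chosen only to illustrate the effect:

```python
def max_height_on_grid(spacing_km, half_width_km=50.0, peak_m=3000.0):
    """Sample an idealised triangular mountain (summit at x=0, base
    half-width 50km) at grid-cell centres spaced `spacing_km` apart,
    and return the highest elevation the grid actually records."""
    centres = [spacing_km / 2 + i * spacing_km
               for i in range(int(200 / spacing_km) + 1)]
    return max(max(0.0, peak_m * (1 - x / half_width_km))
               for x in centres)

coarse = max_height_on_grid(100.0)  # ~GCM grid spacing
fine = max_height_on_grid(25.0)     # ~RCM grid spacing
print(coarse, fine)
```

With 100km cells the nearest cell centre sits at the mountain’s edge and records no elevation at all, while 25km cells capture most of the peak — which is how higher resolution raises model topography and helps deflect the jet.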
Another option is improving how the model represents the physics of the atmosphere in its equations, adds Woollings, using “new, clever schemes [to approximate] the fluid mechanics in the computer code”.
The process of developing a climate model is a long-term task, which does not end once a model has been published. Most modelling centres update and improve their models on a continuous cycle, with scientists spending a few years building the next version of their model.
Climate modeller at work in the Met Office, Exeter, UK. Credit: Met Office.
Once ready, the new model version incorporating all the improvements can be released, says Dr Chris Jones from the Met Office Hadley Centre:
“It’s a bit like motor companies build the next model of a particular vehicle so they’ve made the same one for years, but then all of a sudden a new one comes out that they’ve been developing. We do the same with our climate models.”
At the beginning of each cycle, the climate being reproduced by the model is compared to a range of observations to identify the biggest issues, explains Dr Tim Woollings. He tells Carbon Brief:
“Once these are identified, attention usually turns to assessing the physical processes known to affect those areas and attempts are made to improve the representation of these processes [in the model].”
How this is done varies from case to case, says Woollings, but will generally end up with some new improved code:
“This might be whole lines of code, to handle a process in a slightly different way, or it could sometimes just be changing an existing parameter to a better value. This may well be motivated by new research, or the experience of others [modelling centres].”
Sometimes during this process, scientists find that some issues compensate for others, he adds:
“For example, Process A was found to be too strong, but this seemed to be compensated by Process B being too weak. In these cases, Process A will generally be fixed, even if it makes the model worse in the short term. Then attention turns to fixing Process B. At the end of the day, the model represents the physics of both processes better and we have a better model overall.”
At the Met Office Hadley Centre, the development process involves multiple teams, or “Process Evaluation Groups”, looking to improve a different element of the model, explains Woollings:
“The Process Evaluation Groups are essentially taskforces which look after certain aspects of the model. They monitor the biases in their area as the model develops, and test new methods to reduce these. These groups meet regularly to discuss their area, and often contain members from the academic community as well as Met Office scientists.”
The improvements that each group is working on are then brought together into the new model. Once complete, the model can start to be run in earnest, says Jones:
“At the end of a two- or three-year process, we have a new-generation model that we believe is better than the last one, and then we can start to use that to kind of go back to the scientific questions we’ve looked at before and see if we can answer them better.”
How do scientists produce climate model information for specific regions?
One of the main limitations of global climate models is that the grid cells they are made up of are typically around 100km in longitude and latitude in the mid-latitudes. When you consider that the UK, for example, is only a little over 400km wide, that means it is represented in a GCM by a handful of grid boxes.
“If you think about the eastern Caribbean islands, a single eastern Caribbean island falls within a grid box, so is represented as water within these global climate models.”
“Even the larger Caribbean islands are represented as one or, at most, two grid boxes – so you get information for just one or two grid boxes – this poses a limitation for the small islands of the Caribbean region and small islands in general. And so you don’t end up with refined, finer scale, sub-country scale information for the small islands.”
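The coarse-resolution problem described in the quotes above is easy to quantify with back-of-the-envelope arithmetic. The region widths below are illustrative round numbers:

```python
import math

def boxes_across(extent_km, cell_km=100.0):
    """Rough count of GCM grid boxes needed to span a region of a
    given width, at a typical mid-latitude cell size of ~100km."""
    return math.ceil(extent_km / cell_km)

# The UK, a little over 400km wide, spans only a handful of boxes:
print(boxes_across(400))  # 4
# A small eastern Caribbean island (~30km across) sits entirely
# inside a single box -- and may be treated as ocean:
print(boxes_across(30))   # 1
```

This is why sub-country, let alone sub-island, detail cannot come directly from a GCM and has to be produced by the downscaling methods described below.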
Scientists overcome this problem by “downscaling” global climate information to a local or regional scale. In essence, this means taking information provided by a GCM or coarse-scale observations and applying it to a specific place or region.
Tobago Cays and Mayreau Island, St. Vincent and The Grenadines. Credit: robertharding/Alamy Stock Photo.
For small island states, this process allows scientists to get useful data for specific islands, or even areas within islands, explains Taylor:
“The whole process of downscaling then is trying to take the information that you can get from the large scale and somehow relate it to the local scale, or the island scale, or even the sub-island scale.”
There are two main categories of downscaling methods. The first is “dynamical downscaling”. This is essentially running models that are similar to GCMs, but for specific regions. Because these Regional Climate Models (RCMs) cover a smaller area, they can have higher resolution than GCMs and still run in a reasonable time. That said, notes Dr Dann Mitchell, a lecturer in the School of Geographical Sciences at the University of Bristol, RCMs may be slower than their global counterparts:
“An RCM with 25km grid cells covering Europe would take around 5-10 times longer to run than a GCM at ~150 km resolution.”
The UK Climate Projections 2009 (UKCP09), for example, is a set of climate projections specifically for the UK, produced from a regional climate model – the Met Office Hadley Centre’s HadRM3 model.
HadRM3 uses grid cells of 25km by 25km, dividing the UK into 440 squares. This was an improvement over UKCP09’s predecessor (“UKCIP02”), which produced projections at a spatial resolution of 50km. The maps below show the greater detail that the 25km grid (six maps on the right) affords compared with the 50km grid (two maps on the far left).
RCMs such as HadRM3 can add a better – though still limited – representation of local factors, such as the influence of lakes, mountain ranges and a sea breeze.
Despite RCMs being limited to a specific area, they still need to factor in the wider climate that influences it. Scientists do this by feeding in information from GCMs or observations. Taylor explains how this applies to his research in the Caribbean:
“For dynamical downscaling, you first have to define the domain that you are going to run the model over – in our case, we define a kind of Caribbean/intra-Americas domain – so we limit the modelling to that domain. But, of course, you feed into the boundaries of that domain the output of the large-scale models, so it’s the larger scale model information that drives then the finer-scale model. And that’s the dynamical downscaling – you’re essentially doing the modelling at a finer scale, but over a limited domain, fed in with information at the boundaries.”
It is also possible to “nest”, or embed, RCMs within a GCM, which means scientists can run more than one model at the same time and get multiple levels of output simultaneously.
The second main category of downscaling is “statistical downscaling”. This involves using observed data to establish a statistical relationship between the global and local climate. Using this relationship, scientists then derive local changes from the large-scale projections coming from GCMs or observations.
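A minimal sketch of the statistical-downscaling idea: fit a linear relationship between an observed large-scale variable and a local one, then apply it to a projected GCM value. The numbers are invented for illustration; real methods use many predictors and careful validation:

```python
# Observed pairs of (coarse GCM-grid temperature, local station
# temperature) in C -- hypothetical training data.
large_scale_obs = [14.0, 15.0, 16.0, 17.0, 18.0]
local_obs = [12.5, 13.8, 15.1, 16.4, 17.7]

# Ordinary least-squares fit: local = a * large_scale + b.
n = len(large_scale_obs)
mean_x = sum(large_scale_obs) / n
mean_y = sum(local_obs) / n
a = (sum((x - mean_x) * (y - mean_y)
         for x, y in zip(large_scale_obs, local_obs))
     / sum((x - mean_x) ** 2 for x in large_scale_obs))
b = mean_y - a * mean_x

def downscale(gcm_value):
    """Translate a coarse GCM value to a local estimate using the
    fitted relationship (valid only where that relationship holds)."""
    return a * gcm_value + b

print(downscale(19.0))  # local estimate for a projected GCM value
```

The caveat in the `downscale` docstring matters: the fit is only trustworthy within conditions resembling the training data, a limitation discussed further below.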
One example of statistical downscaling is a weather generator. A weather generator produces synthetic timeseries of daily and/or hourly data for a particular location. It uses a combination of observed local weather data and projections of future climate to give an indication of what future weather conditions could be like on short timescales. (Weather generators can also produce timeseries of the weather in the current climate.)
It can be used for planning purposes – for example, in a flood risk assessment to simulate whether existing flood defences will cope with likely future levels of heavy rainfall.
In general, these statistical models can be run quickly, allowing scientists to carry out many simulations in the time it takes to complete a single GCM run.
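A toy version of a weather generator can convey the idea: a first-order Markov chain decides whether each day is wet or dry, and wet-day rainfall is drawn from a skewed distribution. The transition probabilities and mean here are invented; a real generator calibrates them from local observations and perturbs them with climate projections:

```python
import random

def generate_rainfall(days, p_wet_after_dry=0.3, p_wet_after_wet=0.6,
                      mean_wet_mm=8.0, seed=0):
    """Synthetic daily rainfall series (mm): wet/dry occurrence follows
    a two-state Markov chain; wet-day totals are exponentially
    distributed, a common simple choice for rainfall amounts."""
    rng = random.Random(seed)
    series, wet = [], False
    for _ in range(days):
        p = p_wet_after_wet if wet else p_wet_after_dry
        wet = rng.random() < p
        series.append(rng.expovariate(1.0 / mean_wet_mm) if wet else 0.0)
    return series

rain = generate_rainfall(365)
print(sum(1 for r in rain if r > 0), "wet days")
```

Because each run is cheap, thousands of synthetic years can be generated to estimate, say, how often daily rainfall exceeds a flood-defence threshold — the kind of planning use described above.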
It is worth noting that downscaled information still depends heavily on the quality of the information that it is based on, such as the observed data or the GCM data feeding in. Downscaling only provides more location-specific data; it does not make up for any uncertainties that stem from the data it relies on.
Statistical downscaling, in particular, is reliant on the observed data used to derive the statistical relationship. Downscaling also assumes that relationships in the current climate will still hold true in a warmer world, notes Mitchell. He tells Carbon Brief:
“[Statistical downscaling] can be fine for well-observed periods of time, or well-observed locations of interest, but, in general, if you push the local system too far, the statistical relationship will break down. For that reason, statistical downscaling is poorly constrained for future climate projections.”
Dynamical downscaling is more robust, says Mitchell, though only if an RCM captures the relevant processes well and the data driving them is reliable:
“Often for climate modelling, the implementation of the weather and climate processes in the dynamical model is not too dissimilar from the coarser global driving model, so the dynamical downscaling only provides limited improvability of the data. However, if done well, dynamical downscaling can be useful for localised understanding of weather and climate, but it requires a tremendous amount of model validation and in some cases model development to represent processes that can be captured at the new finer scales.”