This month’s open thread. It’s coming to the end of the year and that means updates to the annual time series of observations and models relatively soon. Suggestions for what you’d like to see assessed are welcome… or any other climate science related topic.
Lots of interesting methane papers this week. In Nature Geoscience, Shakhova et al (2013) have published a substantial new study of the methane cycle on the Siberian continental margin of the Arctic Ocean. This paper will get a lot of attention, because it follows by a few months a paper from last summer, Whiteman et al (2013), which claimed a strong (and expensive) potential impact from Arctic methane on near-term climate evolution. That economic modeling study was based on an Arctic methane release scenario proposed in an earlier paper by Shakhova et al (2010). In PNAS, Miller et al (2013) find that the United States may be emitting 50-70% more methane than we thought. So where does this leave us?
Because methane is mostly well-mixed in the atmosphere, emissions from the Arctic or from the US must be seen within the context of the global sources of methane to the atmosphere. Estimates of methane emissions from the Arctic have risen, from land (Walter et al 2006) and now also from the continental shelf off Siberia. Call it 20-30 Tg CH4 per year from both sources. The US is apparently emitting more than we thought we were, maybe 30 Tg CH4 per year. But these fluxes are relatively small compared to the global emission rate of about 600 Tg CH4 per year. The Arctic and US anthropogenic sources are each about 5% of the total. Changes in the atmospheric concentration scale more-or-less with changes in the chronic emission flux, so unless these sources suddenly increase by an order of magnitude or more, they won’t dominate the atmospheric concentration of methane, or its climate impact.
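To see how small these shares are, here is a back-of-envelope check in Python (the flux values are the rough figures quoted above, not precise inventory numbers):

```python
# Rough shares of the global methane budget, in Tg CH4 per year.
# These are the approximate figures quoted in the text above.
global_source = 600.0  # total global emission rate
arctic = 25.0          # Arctic land + Siberian shelf (midpoint of 20-30)
us_anthro = 30.0       # US anthropogenic emissions

for name, flux in [("Arctic", arctic), ("US anthropogenic", us_anthro)]:
    print(f"{name}: {flux / global_source:.1%} of the global source")
# Arctic: 4.2% of the global source
# US anthropogenic: 5.0% of the global source
```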
American Methane Emissions Higher Than Previously Thought
Miller et al (2013) combine measurements of methane concentrations in various locations through time with model reconstructions of wind fields, and “invert” the information to estimate how much methane was released to the air as it blew over the land. This is a well-established methodology, pushed to constrain US anthropogenic emissions by including measurements from aircraft and communications towers in addition to the ever-invaluable NOAA flask sample network, and incorporating socioeconomic and industrial data. The US appears to be emitting 50-70% more methane than the EPA thought we were, based on “bottom up” accounting (adding up all the known sources).
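Conceptually, such an inversion is a linear estimation problem: a transport model provides a sensitivity matrix H that maps candidate surface fluxes onto concentrations at the measurement sites, and the flux vector is chosen to best match the observations. A toy sketch of that idea (synthetic numbers, not the actual Miller et al setup, which also uses prior information and error covariances):

```python
import numpy as np

# Toy "top-down" flux inversion: c = H @ e + noise; solve for emissions e.
rng = np.random.default_rng(0)
n_obs, n_regions = 200, 5

H = rng.uniform(0.0, 1.0, (n_obs, n_regions))      # transport sensitivities
e_true = np.array([3.0, 1.5, 0.5, 2.0, 1.0])       # "true" regional fluxes
c_obs = H @ e_true + rng.normal(0.0, 0.1, n_obs)   # noisy observations

e_est, *_ = np.linalg.lstsq(H, c_obs, rcond=None)  # least-squares estimate
print(np.round(e_est, 2))  # recovers values close to e_true
```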
Is this bad news for global warming?
Not really, because the one real hard fact that we know about atmospheric methane is that its concentration isn’t rising very quickly. Methane is a short-lived gas in the atmosphere, so to make it rise, the emission flux has to continually increase. This is in contrast to CO2, which accumulates in the atmosphere / ocean system, meaning that steady (non-rising) emissions still lead to a rising atmospheric concentration. There is enough uncertainty in the methane budget that tweaks of a few percent here and there don’t upset the apple cart. Since the methane concentration isn’t rising all that much, its sources, uncertain as they are, must be mostly balanced by sinks, which are also uncertain. If anything, the paper is good news for people concerned about global warming, because it gives us something to fix.
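The difference can be made concrete with a one-box model: for a short-lived gas, dC/dt = E − C/τ, so constant emissions E give a constant steady-state concentration C = E·τ, while a gas with no fast sink keeps accumulating. A minimal sketch with illustrative numbers (the CO2 line is a deliberate caricature that ignores ocean uptake):

```python
# One-box model: short-lived CH4 plateaus under constant emissions,
# while a non-decaying gas accumulates. Illustrative units only.
dt, years = 1.0, 200
E = 1.0      # constant emission rate per year
tau = 10.0   # roughly decadal atmospheric lifetime of methane

ch4 = co2 = 0.0
for _ in range(int(years / dt)):
    ch4 += dt * (E - ch4 / tau)  # source balanced by a fast sink
    co2 += dt * E                # caricature: no removal on this time scale

print(f"CH4 settles near E*tau = {ch4:.1f}; the accumulating gas: {co2:.0f}")
```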
Methane from the Siberian continental shelf
The Siberian continental shelf is huge, comprising about 20% of the global area of continental shelf. Sea level dropped during the last glacial maximum, but there was no ice sheet in Siberia, so the surface was exposed to the really cold atmosphere, and the ground froze to a depth of ~1.5 km. When sea level rose, the permafrost layer came under attack by the relatively warm ocean water. The submerged permafrost has been melting for millennia, but warming of the waters on the continental shelf could accelerate the melting. In equilibrium there should be no permafrost underneath the ocean, because the ocean is unfrozen, and the sediment gets warmer with depth below that (the geothermal temperature gradient).
Ingredients of Shakhova et al (2013)
- There are lots of bubbles containing mostly methane coming up from the shallow sea floor in the East Siberian Arctic shelf. Bubbles like this have been seen elsewhere, off Spitzbergen for example (Shakhova et al, 2013). Most of the seep sites in the Siberian margin are relatively low flow but a few of them are much larger.
- The bubbles mostly dissolve in the water column, but when the methane flux gets really high the bubbles rise faster and are more likely to reach the atmosphere. When methane dissolves in the water column, some of it escapes to the atmosphere by degassing at the sea surface before it gets oxidized to CO2. Storms seem to pull methane out of the water column, enhancing what oceanographers call “gas exchange” by making waves with whitecaps. Melting sea ice will also increase methane escape to the atmosphere by gas exchange. However, the concentration of methane in the water column is low enough that even with storms the gas exchange flux seems like it must be negligible compared with the bubble flux. In their calculation of the methane flux to the atmosphere, Shakhova et al focused on bubbles.
- Sediments that got flooded by rising sea level thousands of years ago are warmer than sediments still exposed to the colder atmosphere, down to a depth of ~50 meters. This information is not directly applied to the question of incremental melting by warming waters in the short-term future.
- The study derives an estimate of a total methane emission rate from the East Siberian Arctic shelf area based on the statistics of a very large number of observed bubble seeps.
Is the methane flux from the Arctic accelerating?
Shakhova et al (2013) argue that bottom water temperatures are increasing more than had been recognized, in particular in near-coastal (shallow) waters. Sea ice cover has certainly been decreasing. These factors will no doubt lead to an increase in methane flux to the atmosphere, but the question is how strong this increase will be and how fast. I’m not aware of any direct observation of an increase in the methane emissions themselves. The strength of this response is essentially what the dispute about the Arctic methane bomb (below) hinges on.
What about the extremely high methane concentrations measured in Arctic airmasses?
Shakhova et al (2013) show shipboard measurements of methane concentrations in the air above the ESAS that are almost twice as high as the global average (which is already twice as high as preindustrial). Aircraft measurements published last year also showed plumes of high methane concentration over the Arctic ocean (Kort et al 2012), especially in the surface boundary layer. It’s not easy to interpret boundary-layer methane concentrations quantitatively, however, because the concentration in that layer depends on the thickness of the boundary layer and how isolated it is from the air above it. Certainly high methane concentrations indicate emission fluxes, but it’s not straightforward to know how significant that flux is in the global budget.
The more easily interpretable measurement is the time-averaged difference between Northern and Southern hemisphere methane concentrations. If Arctic methane were driving a substantial increase in the global atmospheric methane concentration, it would be detectable in this time-mean interhemispheric gradient. Northern hemisphere concentrations are a bit higher than they are in the Southern hemisphere (here), but the magnitude of the difference is small enough to support the conclusion from the methane budget that tropical wetlands, which don’t generate much interhemispheric gradient, are a dominant natural source (Kirschke et al 2013).
What about methane hydrates?
There are three possible sources of the methane in the bubbles rising out of the Siberian margin continental shelf:
- Decomposition (fermentation) of thawing organic carbon deposited with loess (windblown glacial flour) when the sediment was exposed to the atmosphere by low sea level during the last glacial time. Organic carbon deposits (called Yedoma) are the best-documented carbon reservoir in play in the Arctic.
- Methane gas that has been trapped by ice, now escaping. Shakhova et al (2013) figure that flaws in the permafrost called taliks, resulting from geologic faults or long-running rivers, might allow gas to escape through what would otherwise be impermeable ice. If there were a gas pocket of 50 Gt, it could conceivably escape quickly if a seal were breached, but given that global gas reserves come to ~250 Gt, a 50 Gt gas bubble near the surface would be very large and obvious. There could be 50 Gt of small, disseminated bubbles distributed throughout the sediment column of the ESAS, but in that case I’m not sure where the short time scale for getting the gas to move comes from. I would think the gas would dribble out over the millennia as the permafrost melts.
- Decomposition (melting) of methane hydrates, a peculiar form of water ice cages that form in the presence of, and trap, methane.
Methane hydrate seems menacing as a source of gas that can spring aggressively from the solid phase like pop rocks (carbonated candies). But hydrate doesn’t just explode as soon as it crosses a temperature boundary. It takes heat to convert hydrate into fluid + gas, what is called latent heat, just like regular water ice. There could be a lot of hydrate in Arctic sediments (it’s not very well known how much there is), but there is also a lot of carbon as organic matter frozen in the permafrost. Their time scales for mobilization are not really all that different, so I personally don’t see hydrates as scarier than frozen organic matter. I think it just seems scarier.
The other thing about hydrate is that at any given temperature, a minimum pressure is required for hydrate to be stable. If there is pure gas phase present, the dissolved methane concentration in the pore water, from Henry’s law, scales with pressure. At 0 degrees C, you need a pressure equivalent to ~250 meters of water depth to get enough dissolved methane for hydrate to form.
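As a rough check on that depth threshold: the absolute pressure at depth h is P = P_atm + ρgh, so 250 meters of seawater corresponds to roughly 26 atmospheres (a quick sketch with round numbers):

```python
# Hydrostatic pressure at 250 m water depth, in atmospheres.
rho, g = 1025.0, 9.81   # seawater density (kg/m3), gravity (m/s2)
P_atm = 101325.0        # atmospheric pressure (Pa)

h = 250.0
P = P_atm + rho * g * h
print(f"{P / P_atm:.0f} atm")  # ~26 atm
```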
The scariest parts of the Siberian margin are the shallow parts, because this is where methane bubbles from the sea floor might reach the surface, and this is where the warming trend is observed most strongly. But methane hydrate can only form hundreds of meters below the sea floor in that setting, so thermodynamically, hydrate is not expected to be found at or near the sea floor. (Methane hydrate can be found close to the sediment surface in deeper water depth settings, as for example in the Gulf of Mexico or the Nankai trough). The implication is that it will take centuries or longer before heat diffusion through that sediment column can reach and destabilize methane hydrates.
Is there any way nature might evade this thermodynamic imperative?
If hydrate exists in near-surface sediments of the Siberian margin, it would be called “metastable”. Metastability in nature is common when forming a new phase for which a “seed” or starting crystal is needed, like cloud droplets freezing in the upper atmosphere. But for decomposition to form water and gas one would not generally expect a barrier to just melting when energy is available. Chuvilin et al (2011) monitored melting hydrate in the laboratory and observed some quirkiness.
But these experiments spanned 100 hours, while the sediment column has been warming for thousands of years, so the experiments do not really address the question. And even if there were some impervious-to-melting hydrate, why would it suddenly melt, all at once, in a few years? Actual samples of hydrate collected from shallow sediments on the Siberian shelf would be much more convincing.
What about that Arctic methane bomb?
Shakhova et al (2013) did not find or claim to have found a 50 Gt C reservoir of methane ready to erupt in a few years. That claim, which is the basis of the Whiteman et al (2013) $60 trillion Arctic methane bomb paper, remains as unsubstantiated as ever. The Siberian Arctic, and the Americans, each emit a few percent of global emissions. Significant, but not bombs, more like large firecrackers.
- N. Shakhova, I. Semiletov, I. Leifer, V. Sergienko, A. Salyuk, D. Kosmach, D. Chernykh, C. Stubbs, D. Nicolsky, V. Tumskoy, and Ö. Gustafsson, "Ebullition and storm-induced methane release from the East Siberian Arctic Shelf", Nature Geoscience, 2013. http://dx.doi.org/10.1038/ngeo2007
- G. Whiteman, C. Hope, and P. Wadhams, "Climate science: Vast costs of Arctic change", Nature, vol. 499, pp. 401-403, 2013. http://dx.doi.org/10.1038/499401a
- N.E. Shakhova, V.A. Alekseev, and I.P. Semiletov, "Predicted methane emission on the East Siberian shelf", Doklady Earth Sciences, vol. 430, pp. 190-193, 2010. http://dx.doi.org/10.1134/S1028334X10020091
- S.M. Miller, S.C. Wofsy, A.M. Michalak, E.A. Kort, A.E. Andrews, S.C. Biraud, E.J. Dlugokencky, J. Eluszkiewicz, M.L. Fischer, G. Janssens-Maenhout, B.R. Miller, J.B. Miller, S.A. Montzka, T. Nehrkorn, and C. Sweeney, "Anthropogenic emissions of methane in the United States", Proceedings of the National Academy of Sciences, 2013. http://dx.doi.org/10.1073/pnas.1314392110
- E.A. Kort, S.C. Wofsy, B.C. Daube, M. Diao, J.W. Elkins, R.S. Gao, E.J. Hintsa, D.F. Hurst, R. Jimenez, F.L. Moore, J.R. Spackman, and M.A. Zondlo, "Atmospheric observations of Arctic Ocean methane emissions up to 82° north", Nature Geoscience, vol. 5, pp. 318-321, 2012. http://dx.doi.org/10.1038/ngeo1452
- S. Kirschke, P. Bousquet, P. Ciais, M. Saunois, J.G. Canadell, E.J. Dlugokencky, P. Bergamaschi, D. Bergmann, D.R. Blake, L. Bruhwiler, P. Cameron-Smith, S. Castaldi, F. Chevallier, L. Feng, A. Fraser, M. Heimann, E.L. Hodson, S. Houweling, B. Josse, P.J. Fraser, P.B. Krummel, J. Lamarque, R.L. Langenfelds, C. Le Quéré, V. Naik, S. O'Doherty, P.I. Palmer, I. Pison, D. Plummer, B. Poulter, R.G. Prinn, M. Rigby, B. Ringeval, M. Santini, M. Schmidt, D.T. Shindell, I.J. Simpson, R. Spahni, L.P. Steele, S.A. Strode, K. Sudo, S. Szopa, G.R. van der Werf, A. Voulgarakis, M. van Weele, R.F. Weiss, J.E. Williams, and G. Zeng, "Three decades of global methane sources and sinks", Nature Geoscience, vol. 6, pp. 813-823, 2013. http://dx.doi.org/10.1038/ngeo1955
In the long run, sea-level rise will be one of the most serious consequences of global warming. But how fast will sea levels rise? Model simulations are still associated with considerable uncertainty – the processes that contribute to the rise are too complex and varied. A just-published survey of 90 sea-level experts from 18 countries now reveals what amount of sea-level rise the wider expert community expects. With successful, strong mitigation measures, the experts expect a likely rise of 40-60 cm in this century and 60-100 cm by the year 2300. With unmitigated warming, however, the likely range is 70-120 cm by 2100 and two to three meters by the year 2300.
Complex problems often cannot simply be answered with computer models. Experts form their views on a topic from the totality of their expertise – which includes knowledge of observational findings and model results, as well as their understanding of the methodological strengths and weaknesses of the various studies. Such expertise results from years of study of a topic, through one’s own research, through following the scientific literature and through the ongoing critical discussion process with colleagues at conferences.
For many topics it would be interesting for the public to know what the expert community thinks. If I had a dangerous disease, I would give a lot to learn what the best specialists from around the world think about it. Mostly, however, this expertise is not transparent to outsiders. The media only offer a rather selective window into experts’ minds.
More transparency can be achieved through systematic surveys of experts. The International Council of Scientific Academies (InterAcademy Council, IAC) in its review of IPCC procedures recommended in 2010: “Where practical, formal expert elicitation procedures should be used to obtain subjective probabilities for key results”. We took this advice and last November conducted a broad expert survey on future sea-level rise, in the context of a research project funded by NOAA. The lead author is Ben Horton (Rutgers University); the other authors are Simon Engelhart (University of Rhode Island) and Andrew Kemp (Tufts University).
The credibility of such surveys stands and falls with the selection of experts (see Gavin’s article A new survey of scientists). It is important to identify relevant experts using objective criteria. For us, formal criteria such as professorships were not relevant; our objective was to reach active sea-level researchers. To this end we used the scientific publication database Web of Science of Thomson Reuters and let it generate a list of the 500 researchers who had published the most papers for the search term “sea level” in the last five years in the peer-reviewed literature. It turned out that at least 6 publications were required for a scientist to make it onto this list. For 360 of those experts we were able to find email addresses. We asked them for their estimates of the sea-level rise from 2000 to 2100 and 2300, both the “likely” rise (17th to 83rd percentile) and the range of the 5th to the 95th percentile (the 95th percentile is the increase which with 95 % probability will not be exceeded, according to the expert). 90 experts from 18 countries provided their responses.
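To illustrate how such elicited percentiles get summarized across respondents, here is a toy sketch (hypothetical responses, not the actual survey data):

```python
import numpy as np

# Each row: one (hypothetical) expert's 17th, 83rd and 95th percentile
# answers, in cm of rise from 2000 to 2100 under high emissions.
answers = np.array([
    [70.0, 120.0, 150.0],
    [60.0, 100.0, 140.0],
    [80.0, 150.0, 250.0],
    [50.0,  90.0, 120.0],
])

print("median across experts:", np.median(answers, axis=0))
print("mean across experts:  ", answers.mean(axis=0))
```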
Sea-level: a bit of context
For context, the following figure from the current IPCC report summarizes the sea-level evolution:
Figure 1: Sea level rise according to the IPCC report of 2013. Shown is the past history of sea level since the year 1700 from proxy data (sediments, purple) and multiple records from tide gauge measurements. Light blue are the satellite data (from 1993). The two future scenarios mentioned in the text (RCP8.5 and RCP2.6) are shown in red and blue, with their “likely” uncertainty range according to the IPCC (meaning a 66 % probability to remain inside this range). Source: IPCC AR5 Fig. 13.27.
A more detailed discussion of the IPCC sea level numbers can be found here. The red and blue future scenarios correspond (to good approximation) to the two climate scenarios on which we surveyed the experts: blue a scenario with effective climate mitigation, red a scenario with a further unabated growth of emissions into the 22nd Century.
The survey results
The following graph shows what the surveyed experts expect for these two scenarios up to the year 2100:
Figure 2: Sea level rise over the period 2000-2100 for two warming scenarios (red RCP8.5, blue RCP2.6). The ranges show the average numbers given across all the experts. The inner (darker) range shows the 17th to 83rd percentile values, the outer range the 5th to 95th percentiles. For comparison we see the NOAA projections of December 2012 (dashed lines) and the new IPCC projections (bars on the right). Since this graph shows the increase from the year 2000, about 25 cm should be added for a direct numerical comparison with the previous graph.
The experts gave a median likely rise of 40-60 cm for the blue climate scenario and 70-120 cm for the red scenario. Most of the experts thus expect a higher rise than the IPCC – about two-thirds (65%) gave a higher value than the IPCC for the upper limit of the red ‘likely’ range, even though the IPCC has increased its projections by ~60% since its last report of 2007. In expert circles the IPCC reports are widely considered to be conservative; this is empirical confirmation.
The following table shows all the median values:
Highly relevant for coastal protection is a “high-end” value that with high probability will not be exceeded – let’s say the 95th percentile in the table above, below which the rise will remain with 95 percent probability. For the red scenario, about half of the experts (51%) gave 1.5 meters or higher for this, a quarter (27%) even 2 meters or higher. This is for the increase from 2000 to 2100. In the longer term, for the increase from 2000 to the year 2300, the majority of experts (58%) give this high-end value as 4 meters or higher.
These numbers reflect the fact that experts (including myself) have become more pessimistic about sea-level rise in recent years, in the light of new data and insights mainly concerning the dynamic response of the ice sheets.
Experts quoted in the media are often chosen according to media needs – quite popular is the presentation of topics as a controversy with one expert pro and one against. In this way the experts are portrayed as divided into “two camps”, regardless of whether this reflects the reality. This “two-camps theory” is then used as a justification to cite (in the name of supposed balance) counter-arguments by “climate skeptics” with doubtful expertise. Especially in the US this “false balance” phenomenon is widespread.
In the distribution of expert estimates we find no evidence in support of the two-camps theory, as shown in the following graph.
Figure 3. Distribution of the experts’ answers to the upper limit of the ‘likely’ range for the RCP8.5 scenario by the year 2100. (These numbers can be compared to the value of 98 cm given in the IPCC report.)
There is no split into two groups that could be termed “alarmists” and “skeptics” – this idea can thus be regarded as empirically falsified. That is consistent with other surveys, such as that of continental ice experts by Bamber & Aspinall (Nature Climate Change 2013). Instead, we see in the distribution of responses a broad scientific “mainstream” with a normal spread (the large hump of three bars centered on 100 cm, in which I also find myself), complemented with a long tail of about a dozen “pessimists” who are worried about a much larger sea-level rise. Let’s hope these outliers are wrong. At least I don’t see a plausible physical mechanism for such a rapid rise.
A study on the regional differences in sea-level rise: A scaling approach to project regional sea level rise and its uncertainties
A study on impacts on cities: Future flood losses in major coastal cities
And the Washington Post: How high will sea levels rise? Let’s ask the experts.
Do different climate models give different results? And if so, why? The answer to these questions will increase our understanding of the climate models, and potentially the physical phenomena and processes present in the climate system.
We now have many different climate models, many different methods, and get a range of different results. They provide what we call ‘multi-model‘ and ‘multi-method‘ ensembles. But how do we make sense out of all this information?
And, do we really need all these different models? Global climate models tend to give roughly similar estimates for the climate sensitivity, but there is nevertheless a spread between the different model estimates. The models often diverge more radically if we zoom down to a region.
Furthermore, a single model may give different answers for the future temperature over North America, depending on which day is used to describe the weather at the starting point of the model simulation (Deser et al., 2012).
So the question is whether the differences in model set-up affect the range of the results, and whether a mix of models is superior to many simulations with a single model in terms of accounting for the unknowns of climate modelling.
The fuzziness associated with the spread between the model results is often referred to by the catch-all phrase ‘uncertainty‘, referring to (unpredictable) chaotic internal variations, vaguely known forcing estimates, and climate model limitations.
Whereas climate scientists find ‘uncertainty’ difficult, it plays a central role in statistics (Katz et al., 2013). Statisticians are experts at drawing knowledge from large volumes of information and incomplete data samples, and they have methods for ‘distilling’ the data (using a phrase coined by Bruce Hewitson). Some interesting methods are regression analysis and factorial design.
It is necessary to bring more statisticians on board to participate in climate research. Hence the motivation for a Statistics and Climate workshop with a high proportion of statisticians among the participants (supported by the SARMA network, Met Norway, the Norwegian Computing Center, and the Bjerknes Centre).
Bringing together people from different fields can be challenging, and we sometimes realise that we speak about ‘uncertainty’ or ‘models’, but mean different things. Is ‘uncertainty’ a probability distribution, model error, gaps in observations, inaccuracy, or imprecision?
In statistics, a ‘model’ may be a probability distribution or an equation whose coefficients are estimated from the data (‘best-fit’). We can also define ‘weather’ as a time series describing when and how much, and ‘climate’ as a probability distribution saying something about how typical such an event is (illustration below).
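A minimal sketch of that distinction, with synthetic data standing in for observations: the time series is the ‘weather’, while the fitted distribution is a statement about the ‘climate’:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
weather = rng.normal(loc=10.0, scale=5.0, size=3650)  # 10 yr of daily temps

# 'Climate' as a fitted probability distribution: how typical is an event?
mu, sigma = stats.norm.fit(weather)
print(f"P(T > 20 C) on a random day: {stats.norm.sf(20.0, mu, sigma):.3f}")
```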
During the workshop there were discussions about what is meant by ‘prediction‘ – is it the same as a ‘forecast‘? It is difficult to collaborate before we speak the same language and understand each other.
Sometimes it also may be useful to take a step back and re-examine concepts that we take for granted. It is interesting that the exact meaning of ‘storm‘ and ‘extreme‘ were topics of discussion at the workshop.
Our understanding of physics is needed to identify key scientific questions, but the statisticians have the expertise to design tests based on data and statistics. For instance, we can ask whether the model set-up has a systematic effect on the results of the simulation – as in the text above.
Another aspect is the question of proper sampling. It may be tempting to pick the ‘best’ model for a region, even though the same model performs poorly elsewhere. From a statistical point of view, however, we know that selective sampling will give spurious results, also referred to as a bias.
The Economist recently printed an article with the title ‘How science goes wrong‘, explaining how a bias arises when mostly positive results are reported in the medical literature. This is another form of selective sampling, and for the climate models, excluding a particular model can only be justified if there are physical reasons to do so.
Another contribution from statisticians in climate research is to bring in their experience with ‘infographics’ (Spiegelhalter et al., 2011) and ways to convey complex messages through illustrations. This and the ability to make sense of data and model results are valuable for climate services.
We also need reliable data, but there is a concern about the quality (homogeneity) of some of the surface temperature records (see the International Surface Temperature Initiative, ISTI). Resources are also needed for ‘data rescue‘, but it is difficult to find funding for such activity because it is often not regarded as ‘science’.
In addition to high-quality data, we need a common data structure for creating a platform for collaboration that includes observations and different kinds of products (e.g. empirical orthogonal functions), both in terms of data files on disks (e.g. netCDF and the ‘CF’ convention) and in the computer memory.
Standard conventions can reduce the risk of misrepresenting data and make the analysis more transparent. Advanced data structures also make better use of advanced facilities, e.g. the ‘S3’ method in R.
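As an example of what the CF convention buys you in practice, here is a minimal sketch using the netCDF4 Python library (the file and variable names are hypothetical):

```python
from netCDF4 import Dataset

# Hypothetical CF-compliant file; names are illustrative, not a real dataset.
with Dataset("tas_monthly.nc") as nc:
    tas = nc.variables["tas"]
    # CF metadata makes the variable self-describing:
    print(tas.getncattr("standard_name"))  # e.g. "air_temperature"
    print(tas.getncattr("units"))          # e.g. "K"
    data = tas[:]  # returns a masked array honouring _FillValue
```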
- C. Deser, R. Knutti, S. Solomon, and A.S. Phillips, "Communication of the role of natural variability in future North American climate", Nature Climate Change, vol. 2, pp. 775-779, 2012. http://dx.doi.org/10.1038/nclimate1562
- R.W. Katz, P.F. Craigmile, P. Guttorp, M. Haran, B. Sansó, and M.L. Stein, "Uncertainty analysis in climate change assessments", Nature Climate Change, vol. 3, pp. 769-771, 2013. http://dx.doi.org/10.1038/nclimate1980
- D. Spiegelhalter, M. Pearson, and I. Short, "Visualizing Uncertainty About the Future", Science, vol. 333, pp. 1393-1400, 2011. http://dx.doi.org/10.1126/science.1191181
A new study by British and Canadian researchers shows that the global temperature rise of the past 15 years has been greatly underestimated. The reason is the data gaps in the weather station network, especially in the Arctic. If you fill these data gaps using satellite measurements, the warming trend is more than doubled in the widely used HadCRUT4 data, and the much-discussed “warming pause” has virtually disappeared.
Obtaining the globally averaged temperature from weather station data has a well-known problem: there are some gaps in the data, especially in the polar regions and in parts of Africa. As long as the regions not covered warm up like the rest of the world, that does not change the global temperature curve.
But errors in global temperature trends arise if these areas evolve differently from the global mean. That’s been the case over the last 15 years in the Arctic, which has warmed exceptionally fast, as shown by satellite and reanalysis data and by the massive sea ice loss there. This problem was analysed for the first time by Rasmus in 2008 at RealClimate, and it was later confirmed by other authors in the scientific literature.
The “Arctic hole” is the main reason for the difference between the NASA GISS data and the other two data sets of near-surface temperature, HadCRUT and NOAA. I have always preferred the GISS data because NASA fills the data gaps by interpolation from the edges, which is certainly better than not filling them at all.
A new gap filler
Now Kevin Cowtan (University of York) and Robert Way (University of Ottawa) have developed a new method to fill the data gaps using satellite data.
It sounds obvious and simple, but it’s not. Firstly, the satellites cannot measure the near-surface temperatures directly, only temperatures over a certain altitude range in the troposphere. And secondly, there are a few question marks about the long-term stability of these measurements (temporal drift).
Cowtan and Way circumvent both problems by using an established geostatistical interpolation method called kriging – but they do not apply it to the temperature data itself (which would be similar to what GISS does), but to the difference between satellite and ground data. So they produce a hybrid temperature field. This consists of the surface data where they exist. But in the data gaps, it consists of satellite data that have been converted to near-surface temperatures, where the difference between the two is determined by a kriging interpolation from the edges. As this is redone for each new month, a possible drift of the satellite data is no longer an issue.
Prerequisite for success is, of course, that this difference is sufficiently smooth, i.e. has no strong small-scale structure. This can be tested on artificially generated data gaps, in places where one knows the actual surface temperature values but holds them back in the calculation. Cowtan and Way perform extensive validation tests, which demonstrate that their hybrid method provides significantly better results than a normal interpolation on the surface data as done by GISS.
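Schematically, the hybrid reconstruction might look like the following sketch (this is not the authors’ code, and scipy’s generic interpolator stands in for the kriging step):

```python
import numpy as np
from scipy.interpolate import griddata

def hybrid_field(lon, lat, surface, satellite, mask):
    """Fill gaps in the surface field using satellite data plus an
    interpolated surface-minus-satellite offset; mask is True where
    surface observations exist. griddata stands in for kriging."""
    pts = np.column_stack([lon[mask], lat[mask]])
    offset = surface[mask] - satellite[mask]
    # Interpolate the (assumed smooth) offset field into the gaps:
    offset_all = griddata(pts, offset, (lon, lat), method="linear")
    return np.where(mask, surface, satellite + offset_all)
```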
The surprising result
Cowtan and Way apply their method to the HadCRUT4 data, which are state-of-the-art except for their treatment of data gaps. For 1997-2012 these data show a relatively small warming trend of only 0.05 °C per decade – which has often been misleadingly called a “warming pause”. The new IPCC report writes:
Due to natural variability, trends based on short records are very sensitive to the beginning and end dates and do not in general reflect long-term climate trends. As one example, the rate of warming over the past 15 years (1998–2012; 0.05 [–0.05 to +0.15] °C per decade), which begins with a strong El Niño, is smaller than the rate calculated since 1951 (1951–2012; 0.12 [0.08 to 0.14] °C per decade).
But after filling the data gaps this trend is 0.12 °C per decade and thus exactly equal to the long-term trend mentioned by the IPCC.
The corrected data (bold lines) are shown in the graph compared to the uncorrected ones (thin lines). The temperatures of the last three years have become a little warmer, the year 1998 a little cooler.
The trend of 0.12 °C is at first surprising, because one would have perhaps expected that the trend after gap filling has a value close to the GISS data, i.e. 0.08 °C per decade. Cowtan and Way also investigated that difference. It is due to the fact that NASA has not yet implemented an improvement of sea surface temperature data which was introduced last year in the HadCRUT data (that was the transition from the HadSST2 to the HadSST3 data – the details can be found e.g. here and here). The authors explain this in more detail in their extensive background material. Applying the correction of ocean temperatures to the NASA data, their trend becomes 0.10 °C per decade, very close to the new optimal reconstruction.
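Incidentally, the trends quoted here are plain least-squares slopes over the chosen window; for example (synthetic anomalies, not the actual data):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1997, 2013)                    # the 1997-2012 window
temps = 0.012 * (years - 1997) + rng.normal(0, 0.08, years.size)

slope = np.polyfit(years, temps, 1)[0]           # degrees C per year
print(f"trend: {10 * slope:.2f} C per decade")
```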
The authors write in their introduction:
While short term trends are generally treated with a suitable level of caution by specialists in the field, they feature significantly in the public discourse on climate change.
This is all too true. A media analysis has shown that at least in the U.S., about half of all reports about the new IPCC report mention the issue of a “warming pause”, even though it plays a very minor role in the conclusions of the IPCC. Often the tenor was that the alleged “pause” raises some doubts about global warming and the warnings of the IPCC. We knew about the study of Cowtan & Way for a long time, and in the face of such media reporting it is sometimes not easy for researchers to keep such information to themselves. But I respect the attitude of the authors to only go public with their results once they’ve been published in the scientific literature. This is a good principle that I have followed with my own work as well.
The public debate about the alleged “warming pause” was misguided from the outset, because far too much was read into a cherry-picked short-term trend. Now this debate has become completely baseless, because the trend of the last 15 or 16 years is nothing unusual – even despite the record El Niño year at the beginning of the period. It is still a quarter less than the warming trend since 1980, which is 0.16 °C per decade. But that’s not surprising when one starts with an extreme El Niño and ends with persistent La Niña conditions, and is also running through a particularly deep and prolonged solar minimum in the second half. As we often said, all this is within the usual variability around the long-term global warming trend and no cause for excited over-interpretation.
No doubt, our climate system is complex and messy. Still, we can sometimes make some inferences about it based on well-known physical principles. Indeed, the beauty of physics is that a complex system can be reduced to simple terms that can be quantified, and the essential aspects understood.
A recent paper by Sloan and Wolfendale (2013) provides an example where they derive a simple conceptual model of how the greenhouse effect works from first principles. They show the story behind the expression saying that a doubling in CO2 should increase the forcing by a factor of 1 + log(2)/log(CO2). I have a fondness for such simple conceptual models (e.g. I’ve made my own attempt posted at arXiv) because they provide a general picture of the essence – of course their precision is limited by their simplicity.
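For reference, a widely used simplified expression for CO2 forcing (not necessarily the exact form used by Sloan and Wolfendale) is ΔF = 5.35 ln(C/C0) W/m², which gives about 3.7 W/m² per doubling; a quick check:

```python
import math

# Simplified CO2 radiative forcing (Myhre et al. 1998): dF = 5.35 * ln(C/C0)
dF_doubling = 5.35 * math.log(2.0)
print(f"forcing per CO2 doubling: {dF_doubling:.2f} W/m2")  # ~3.71 W/m2
```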
However, the main issue discussed in the paper by Sloan and Wolfendale was not the greenhouse effect, but rather the question about galactic cosmic rays and climate. The discussion of the greenhouse effect was provided as a reference to the cosmic rays.
Even though we have discussed this question several times here at RC, Sloan and Wolfendale introduce some new information in connection with radiation, ionisation, and cloud formation. Even after having dug into all these other aspects, they do not find much evidence for the cosmic rays playing an important role. Their conclusions fit nicely with my own findings that also recently were published in the journal Environmental Research Letters.
The cosmic ray hypothesis is weakened further by observational evidence from satellites, as shown in another recent paper by Krissansen-Totton and Davies (2013) in Geophysical Research Letters, which also concludes that there are no statistically significant correlations between cosmic rays and global albedo or globally averaged cloud height. Neither did they find any evidence for any regional or lagged correlations. It’s nice to see that the Guardian has picked up these findings.
Earlier in October, Almeida et al., 2013 had a paper published in Nature on results from the CLOUD experiment at CERN. They found that galactic cosmic rays exert only a small influence on the formation of sulphuric acid–dimethylamine clusters (the embryonic stage before aerosols may act as cloud condensation nuclei). The authors also reported that the experimental results were reproduced by a dynamical model, based on quantum chemical calculations.
Some may ask why we keep revisiting the question about cosmic rays and climate, after presenting all the evidence to the contrary.
One reason is that science is never settled, and there are still some lingering academic communities nourishing the idea that changes in the sun or cosmic rays play a role. For this reason, a European project was established in 2011, COST-action TOSCA (Towards a more complete assessment of the impact of solar variability on the Earth’s climate), whose objective is to provide a better understanding of the “hotly debated role of the Sun in climate change” (not really debated in the scientific fora, but more in the general public discourse).
Van Oldenborgh et al. (2013) also examined the hypothesised link between extremely cold winter conditions in Europe and weak solar activity, but their analysis could not reproduce such claims.
- T. Sloan, and A.W. Wolfendale, "Cosmic rays, solar activity and the climate", Environmental Research Letters, vol. 8, pp. 045022, 2013. http://dx.doi.org/10.1088/1748-9326/8/4/045022
- J. Krissansen-Totton, and R. Davies, "Investigation of cosmic ray-cloud connections using MISR", Geophysical Research Letters, vol. 40, pp. 5240-5245, 2013. http://dx.doi.org/10.1002/grl.50996
- J. Almeida, S. Schobesberger, A. Kürten, I.K. Ortega, O. Kupiainen-Määttä, A.P. Praplan, A. Adamov, A. Amorim, F. Bianchi, M. Breitenlechner, A. David, J. Dommen, N.M. Donahue, A. Downard, E. Dunne, J. Duplissy, S. Ehrhart, R.C. Flagan, A. Franchin, R. Guida, J. Hakala, A. Hansel, M. Heinritzi, H. Henschel, T. Jokinen, H. Junninen, M. Kajos, J. Kangasluoma, H. Keskinen, A. Kupc, T. Kurtén, A.N. Kvashin, A. Laaksonen, K. Lehtipalo, M. Leiminger, J. Leppä, V. Loukonen, V. Makhmutov, S. Mathot, M.J. McGrath, T. Nieminen, T. Olenius, A. Onnela, T. Petäjä, F. Riccobono, I. Riipinen, M. Rissanen, L. Rondo, T. Ruuskanen, F.D. Santos, N. Sarnela, S. Schallhart, R. Schnitzhofer, J.H. Seinfeld, M. Simon, M. Sipilä, Y. Stozhkov, F. Stratmann, A. Tomé, J. Tröstl, G. Tsagkogeorgas, P. Vaattovaara, Y. Viisanen, A. Virtanen, A. Vrtala, P.E. Wagner, E. Weingartner, H. Wex, C. Williamson, D. Wimmer, P. Ye, T. Yli-Juuti, K.S. Carslaw, M. Kulmala, J. Curtius, U. Baltensperger, D.R. Worsnop, H. Vehkamäki, and J. Kirkby, "Molecular understanding of sulphuric acid–amine particle nucleation in the atmosphere", Nature, vol. 502, pp. 359-363, 2013. http://dx.doi.org/10.1038/nature12663
- G.J. van Oldenborgh, A.T.J. de Laat, J. Luterbacher, W.J. Ingram, and T.J. Osborn, "Claim of solar influence is on thin ice: are 11-year cycle solar minima associated with severe winters in Europe?", Environmental Research Letters, vol. 8, pp. 024014, 2013. http://dx.doi.org/10.1088/1748-9326/8/2/024014
Allan Savory delivered a highly publicized talk at a “Technology, Entertainment, Design (TED)” conference in February of this year (2013) entitled “How to fight desertification and reverse climate change.” Here we address one of the most dramatic claims made – that a specialized grazing method alone can reverse the current trajectory of increasing atmospheric CO2 and climate change.
The talk was attended by many conferees and has since been viewed on the TED website over 1.6 million times. It has received substantial acclaim in social media, some of which is available at the Savory Institute website, but it has also received considerable criticism (of particular note is a blog post from Adam Merberg and an article in Slate magazine). Although these criticisms quickly followed Mr. Savory’s presentation and are broadly supported by the available science, his sweeping claims have continued to resonate with lay audiences. An apparent example is his invitation to deliver a speech to Swiss Re during their 150-year anniversary celebration in London in September, in which he is quoted as saying “…only now due largely to my TED talk on the desertification aspect of the global problem, was the public becoming aware of such hope in a world so short on solutions…”.
As a result of the continuing discussion regarding this presentation, we felt compelled to interpret these claims within the context of Earth System science to facilitate broader discussion and evaluation. It is important to recognize that Mr. Savory’s grazing method, broadly known as holistic management, has been controversial for decades. A portion of this controversy, and of the lack of scientific support for the claims made for his method regarding livestock productivity and grassland ecosystem function, may be found in peer-reviewed papers (e.g. Briske et al. 2008). This presentation, however, argued for an additional application to climate change.
We focus here on the most dramatic claim that Mr. Savory made regarding the reversal of climate change through holistic management of grasslands. The relevant quote (transcript by author from video provided on TED website) is as follows:
“…people who understand far more about carbon than I do calculate that for illustrative purposes, if we do what I’m showing you here, we can take enough carbon out of the atmosphere and safely store it in the grassland soils for thousands of years, and if we just do that on about half the world’s grasslands that I’ve shown you, we can take us back to pre-industrial levels while feeding people. I can think of almost nothing that offers more hope for our planet, for your children, for their children and all of humanity…”
While it is understandable to want to believe that such a dramatic outcome is possible, science tells us that this claim is simply not reasonable. The massive, ongoing additions of carbon to the atmosphere from human activity far exceed the carbon storage capacity of global grasslands.
Approximately 8 Petagrams (Pg; trillion kilograms) of carbon are added to the atmosphere every year from fossil fuel burning and cement production alone. This will increase in the future at a rate that depends largely on global use of fossil fuels. To put these emissions in perspective, the amount of carbon taken up by vegetation is about 2.6 Pg per year. To a very rough approximation then, the net carbon uptake by all of the planet’s vegetation would need to triple (assuming similar transfers to stable C pools like soil organic matter) just to offset current carbon emissions every year. However, the claim was not that holistic management would maintain current atmospheric CO2 levels, but that it would return the atmosphere to pre-industrial levels. Based on IPCC estimates, there are now approximately 240 more Petagrams (Pg) of carbon in the atmosphere than in pre-industrial times. To put this value in perspective, the amount of carbon in vegetation is currently estimated at around 450 Pg, most of that in the wood of trees. The amount of carbon that would need to be removed from the atmosphere and stabilized in soils, in addition to the amount required to compensate for ongoing emissions, to attain pre-industrial levels is equivalent to approximately one-half of the total carbon in all of Earth’s vegetation. Recall that annual uptake of carbon is about two orders of magnitude smaller than the total carbon amount stored in vegetation.
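The arithmetic behind these comparisons is simple enough to lay out explicitly (a sketch using the rounded figures quoted above):

```python
# Rough global carbon bookkeeping, in Pg C, using the figures in the text.
fossil_emissions = 8.0     # per year, fossil fuels + cement
vegetation_uptake = 2.6    # per year, net uptake by vegetation
excess_atmosphere = 240.0  # atmospheric carbon above pre-industrial
vegetation_stock = 450.0   # total carbon stored in vegetation

print(f"uptake would have to rise {fossil_emissions / vegetation_uptake:.1f}x "
      "just to offset current emissions")
print(f"the excess equals {excess_atmosphere / vegetation_stock:.0%} "
      "of all carbon in Earth's vegetation")
```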
At a global scale, grasslands are generally distributed in regions of low precipitation across a wide range of temperatures, with precipitation particularly limiting grassland productivity. Within a zone, grassland carbon cycles respond significantly and sometimes dramatically to fluctuations in inter-annual precipitation. This is because soil water is essential for vegetation to remove carbon from the atmosphere in the process of photosynthesis and it also drives variation in microbial processes that affect the loss of carbon from soils. Consequently, soil water availability represents a much greater limitation to maximum carbon storage in global grasslands than does grazing management. Grasslands represent approximately 30-40% of the planet’s land surface and only a fraction of annual global productivity and carbon sequestration (~20% of global carbon stocks). It is simply unreasonable to expect that any management strategy, even if implemented on all of the planet’s grasslands, would yield such a tremendous increase in carbon sequestration.
Humanity faces many challenging problems in this period of human domination of the planet known as the Anthropocene. These problems, including that of climate change, require efforts to find solutions in all sectors of society and that we engage in diverse and dynamic dialogue about potential solutions, including those that may lie far outside the current mainstream. However, potential solutions must be assessed with a dispassionate and rigorous treatment of risks, benefits, and costs. We should pursue solutions that are most likely to succeed on the basis of scientific validity and societal acceptance. Extravagant claims like those in Mr. Savory’s TED video must be weighed against known physical realities to credibly serve society.
Rangeland management strategies appropriately emphasize conservation of previously stored soil carbon, rather than sequestration of additional carbon, based in part on the limitations previously described. Emphasis should be placed on climate change adaptation, rather than mitigation as advocated by Mr. Savory, to support the well-being of millions of human inhabitants. Mr. Savory argues that we adopt his grazing method as a simple solution to resolve a key Anthropocene contributor – the ongoing perturbation of Earth’s carbon cycle. The appeal of this claim to casual observers is enhanced in that it does not require humans to face any tradeoffs. The implication is that we can continue to use fossil fuels and emit carbon into the atmosphere because application of holistic management on the Earth’s grasslands provides a ‘silver bullet’ that will sustainably solve the climate change problem and provide abundant livestock products as well. We would be thrilled if a simple solution such as this existed. However, it clearly does not, and it is counter-productive to believe that it does. Humanity must look beyond hope and simple solutions if it is to successfully navigate its way through the Anthropocene.
Some will be luckier than others when it comes to climate change. The effects of a climate change on me will depend on where I live. In some regions, changes may not be as noticeable as in others. So what are the impacts in my region?
In order to understand the local impacts of a climate change, I need to address the question of how I can calculate the regional response to a global change. This is called ‘downscaling‘.
Regional and local climate aspects are computed, based on different climate models, statistical analyses, empirical data, and assumptions. The choice of calculation method varies from case to case, and depends on what I want to know and how I think a local climate change will affect me.
These days, questions about local and regional climate change, as well as methods and climate models, are discussed at the International Conference on Regional Climate – CORDEX2013 (Brussels, 4-7 November, 2013). The major theme of this conference is the Coordinated Regional Downscaling Experiment (CORDEX).
For those who want to follow the news about regional climate modelling efforts, there is live streaming on the conference website, and you can take part in the discussions through Twitter with the hashtag ‘#CORDEX2013‘ (please indicate to whom you address your questions).
The conference is organised by the European Commission, the World Climate Research Programme (WCRP), and the Intergovernmental Panel on Climate Change (IPCC). However, most of the high-level talks will take place on the first day, and the subsequent three days will be devoted to the real climate scientists.
This month’s open thread…
A new report on extreme climate events in Europe is just published: ‘Extreme Weather Events in Europe: preparing for climate change adaptation‘. It was launched in Oslo on October 24th by the Norwegian Academy of Science and Letters, and the report is now available online.
What’s new? The new report provides information that is more specific to Europe than the SREX report from the Intergovernmental Panel on Climate Change (IPCC), and incorporates phenomena that have not been widely covered.
It provides some compelling information drawn from the insurance industry, and indeed, a representative from Munich Re participated in writing this report. There is also material on convective storms, hail, lightning, and cold snaps, and the report provides a background on extreme value statistics, risk analysis, impacts, and adaptation.
The main difference with the recent IPCC reports (e.g. the SREX) is the European focus and the inclusion of more recent results. The report writing process did not have to follow procedures as rigid as the IPCC’s, and hence the report is less constrained. For instance, it provides a set of recommendations for policymakers, based entirely on scientific considerations.
The report, in which I have been involved, was initiated by the Norwegian Academy of Science and Letters, and was written by a committee of experts from across Europe. Hence, the final report was published as a joint report by the Norwegian Meteorological Institute, the Norwegian Academy of Science and Letters, and the European Academies Science Advisory Council (EASAC).
What is happening to sea levels? That was perhaps the most controversial issue in the 4th IPCC report of 2007. The new report of the Intergovernmental Panel on Climate Change is out now, and here I will discuss what IPCC has to say about sea-level rise (as I did here after the 4th report).
Let us jump straight in with the following graph which nicely sums up the key findings about past and future sea-level rise: (1) global sea level is rising, (2) this rise has accelerated since pre-industrial times and (3) it will accelerate further in this century. The projections for the future are much higher and more credible than those in the 4th report but possibly still a bit conservative, as we will discuss in more detail below. For high emissions IPCC now predicts a global rise by 52-98 cm by the year 2100, which would threaten the survival of coastal cities and entire island nations. But even with aggressive emissions reductions, a rise by 28-61 cm is predicted. Even under this highly optimistic scenario we might see over half a meter of sea-level rise, with serious impacts on many coastal areas, including coastal erosion and a greatly increased risk of flooding.
Fig. 1. Past and future sea-level rise. For the past, proxy data are shown in light purple and tide gauge data in blue. For the future, the IPCC projections for very high emissions (red, RCP8.5 scenario) and very low emissions (blue, RCP2.6 scenario) are shown. Source: IPCC AR5 Fig. 13.27.
In addition to the global rise IPCC extensively discusses regional differences, as shown for one scenario below. For reasons of brevity I will not discuss these further in this post.
Fig. 2. Map of sea-level changes up to the period 2081-2100 for the RCP4.5 scenario (which one could call late mitigation, with emissions starting to fall globally after 2040 AD). Top panel shows the model mean with 50 cm global rise, the following panels show the low and high end of the uncertainty range for this scenario. Note that even under this moderate climate scenario, the northern US east coast is risking a rise close to a meter, drastically increasing the storm surge hazard to cities like New York. Source: IPCC AR5 Fig. 13.19.
I recommend that everyone with a deeper interest in sea level read the sea level chapter of the new IPCC report (Chapter 13) – it is the result of a great effort by a group of leading experts and an excellent starting point for understanding the key issues involved. It will be a standard reference for years to come.
Past sea-level rise
Understanding of past sea-level changes has greatly improved since the 4th IPCC report. The IPCC writes:
Proxy and instrumental sea level data indicate a transition in the late 19th to the early 20th century from relatively low mean rates of rise over the previous two millennia to higher rates of rise (high confidence). It is likely that the rate of global mean sea level rise has continued to increase since the early 20th century.
The sum of the observed individual components of sea level rise (thermal expansion of the ocean water, loss of continental ice from ice sheets and mountain glaciers, terrestrial water storage) is now in reasonable agreement with the observed total sea-level rise.
Models are also now able to reproduce global sea-level rise from 1900 AD better than in the 4th report, but still with a tendency to underestimation. The following IPCC graph shows a comparison of observed sea level rise (coloured lines) to modelled rise (black).
Fig. 3. Modelled versus observed global sea-level rise. (a) Sea level relative to 1900 AD and (b) its rate of rise. Source: IPCC AR5 Fig. 13.7.
Taken at face value the models (solid black) still underestimate past rise. To get to the dashed black line, which shows only a small underestimation, several adjustments are needed.
(1) The mountain glacier model is driven by observed rather than modelled climate, so that two different climate histories go into producing the dashed black line: observed climate for glacier melt and modelled climate for ocean thermal expansion.
(2) A steady ongoing ice loss from ice sheets is added in – this has nothing to do with modern warming but is a slow response to earlier climate changes. It is a plausible but highly uncertain contribution – the IPCC calls the value chosen “illustrative” because the true contribution is not known.
(3) The model results are adjusted for having been spun up without volcanic forcing (hard to believe that this is still an issue – six years earlier we already supplied our model results spun up with volcanic forcing to the AR4). Again this is a plausible upward correction but of uncertain magnitude, since the climate response to volcanic eruptions is model-dependent.
The dotted black line after 1990 makes a further adjustment, namely adding in the observed ice sheet loss which as such is not predicted by models. The ice sheet response remains a not yet well-understood part of the sea-level problem, and the IPCC has only “medium confidence” in the current ice sheet models.
One statement that I do not find convincing is the IPCC’s claim that “it is likely that similarly high rates [as during the past two decades] occurred between 1920 and 1950.” I think this claim is not well supported by the evidence. In fact, a statement like “it is likely that recent high rates of SLR are unprecedented since instrumental measurements began” would be more justified.
The lower panel of Fig. 3 (which shows the rates of SLR) shows that based on the Church & White sea-level record, the modern rate measured by satellite altimeter is unprecedented – even the uncertainty ranges of the satellite data and those of the Church & White rate between 1920 and 1950 do not overlap. The modern rate is also unprecedented for the Ray and Douglas data although there is some overlap of the uncertainty ranges (if you consider both ranges). There is a third data set (not shown in the above graph) by Wenzel and Schröter (2010) for which this is also true. The only outlier set which shows high early rates of SLR is the Jevrejeva et al. (2008) data – and this uses a bizarre weighting scheme, as we have discussed here at Realclimate. For example, the Northern Hemisphere ocean is weighted more strongly than the Southern Hemisphere ocean, although the latter has a much greater surface area. With such a weighting movements of water within the ocean, which cannot change global-mean sea level, erroneously look like global sea level changes. As we have shown in Rahmstorf et al. (2012), much or most of the decadal variations in the rate of sea-level rise in tide gauge data are probably not real changes at all, but simply an artefact of inadequate spatial sampling of the tide gauges. (This sampling problem has now been overcome with the advent of satellite data from 1993 onwards.) But even if we had no good reason to distrust decadal variations in the Jevrejeva data and treated all data sets the same, three out of four global tide gauge compilations show recent rates of rise that are unprecedented – enough for a “likely” statement in IPCC terms.
Future sea-level rise
For an unmitigated future rise in emissions (RCP8.5), IPCC now expects between a half metre and a metre of sea-level rise by the end of this century. The best estimate here is 74 cm.
On the low end, the range for the RCP2.6 scenario is 28-61 cm rise by 2100, with a best estimate of 44 cm. Now that is very remarkable, given that this is a scenario with drastic emissions reductions starting in a few years from now, with the world reaching zero emissions by 2070 and after that succeeding in active carbon dioxide removal from the atmosphere. Even so, the expected sea-level rise will be almost three times as large as that experienced over the 20th Century (17 cm). This reflects the large inertia in the sea-level response – it is very difficult to make sea-level rise slow down again once it has been initiated. This inertia is also the reason for the relatively small difference in sea-level rise by 2100 between the highest and lowest emissions scenario (the ranges even overlap) – the major difference will only be seen in the 22nd century.
There has been some confusion about those numbers: some media incorrectly reported a range of only 26-82 cm by 2100, instead of the correct 28-98 cm across all scenarios. I have to say that half of the blame here lies with the IPCC communication strategy. The SPM contains a table with those numbers – but they are not the rise up to 2100; they are the rise up to the mean over 2081-2100, from a baseline of the mean over 1986-2005. It is self-evident that this is too clumsy for a newspaper or TV report, so journalists will say "up to 2100". In my view, IPCC would have done better to present the numbers up to 2100 in the table (as we do below), so that after all its efforts to get the numbers right, 16 cm are not suddenly lost in the reporting.
Table 1: Global sea-level rise in cm by the year 2100 as projected by the IPCC AR5. The values are relative to the mean over 1986-2005, so subtract about a centimeter to get numbers relative to the year 2000.
And then of course there are folks like the professional climate change down-player Björn Lomborg, who in an international newspaper commentary wrote that IPCC gives “a total estimate of 40-62 cm by century’s end” – and also fails to mention that the lower part of this range requires the kind of strong emissions reductions that Lomborg is so viciously fighting.
Fig. 4. Global sea-level projection of IPCC for the RCP6.0 scenario, for the total rise and the individual contributions.
Higher projections than in the past
To those who remember the much-discussed sea-level range of 18-59 cm from the 4th IPCC report, it is clear that the new numbers are far higher, both at the low and the high end. But how much higher is not straightforward to say, given that IPCC now uses different time intervals and different emissions scenarios. A direct comparison is made possible by table 13.6 of the report, however, which allows old and new projections to be compared for the same emissions scenario (the moderate A1B scenario) over the time interval 1990-2100 (*). Here are the numbers:
AR4: 37 cm (this is the standard case that belongs to the 18-59 cm range).
AR4+suisd: 43 cm (this is the case with “scaled-up ice sheet discharge” – a questionable calculation that was never validated, emphasised or widely reported).
AR5: 60 cm.
We see that the new estimate is about 60% higher than the old standard estimate, and also a lot higher than the AR4 attempt at including rapid ice sheet discharge.
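A trivial arithmetic check of that comparison (values in cm over 1990-2100 for A1B, as listed above):

```python
# Percent increase of the AR5 projection over the two AR4 cases
ar4, ar4_suisd, ar5 = 37, 43, 60      # cm, 1990-2100, scenario A1B
print(f"AR5 vs AR4:       +{ar5 / ar4 - 1:.0%}")         # about +60%
print(f"AR5 vs AR4+suisd: +{ar5 / ar4_suisd - 1:.0%}")   # about +40%
```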
The low estimates of the 4th report were already at the time considered too low by many experts – there were many indications of that (which we discussed back then), including the fact that the process models used by IPCC greatly underestimated the past observed sea-level rise. It was clear that those process models were not mature, and that was the reason for the development of an alternative, semi-empirical approach to estimating future sea-level rise. The semi-empirical models invariably gave much higher future projections, since they were calibrated with the observed past rise.
However, the higher projections of the new IPCC report do not result from including semi-empirical models. Remarkably, they have been obtained with the process models preferred by IPCC. Thus IPCC now confirms with its own methods that the projections of the 4th report were too low, which was my main concern at the time and the motivation for publishing my paper in Science in 2007. With this new generation of process models, the discrepancy with the semi-empirical models has narrowed considerably, but a difference still remains.
Should the semi-empirical models have been included in the uncertainty range of the IPCC projections? A number of colleagues that I have spoken to think so, and at least one has said so in public. The IPCC argues that there is "no consensus" on the semi-empirical models – true, but is this a reason to exclude them from, rather than include them in, the overall uncertainty that we have in the scientific community? There is likewise no consensus on the studies that have recently argued for a lower climate sensitivity, yet the IPCC has widened the uncertainty range to encompass them. The New York Times concludes from this that the IPCC is "bending over backward to be scientifically conservative". And indeed one wonders whether the semi-empirical models would also have been excluded had they resulted in lower estimates of sea-level rise, or whether we see "erring on the side of the least drama" at work here.
What about the upper limit?
Coastal protection professionals require a plausible upper limit for planning purposes, since coastal infrastructure needs to survive even in the worst-case situation. A dike that is only "likely" to be good enough is not the kind of safety level that coastal engineers want to provide; they want to be pretty damn certain that a dike will not break. Rightly so.
The range up to 98 cm is the IPCC’s “likely” range, i.e. the risk of exceeding 98 cm is considered to be 17%, and IPCC adds in the SPM that “several tenths of a meter of sea level rise during the 21st century” could be added to this if a collapse of marine-based sectors of the Antarctic ice sheet is initiated. It is thus clear that a meter is not the upper limit.
It is one of the fundamental philosophical problems with IPCC (causing much debate already in conjunction with the 4th report) that it refuses to provide an upper limit for sea-level rise, unlike other assessments (e.g. the sea-level rise scenarios of NOAA (which we discussed here) or the guidelines of the US Army Corps of Engineers). This would be an important part of assessing the risk of climate change, which is the IPCC’s role (**). Anders Levermann (one of the lead authors of the IPCC sea level chapter) describes it thus:
In the latest assessment report of the IPCC we did not provide such an upper limit, but we allow the creative reader to construct it. The likely range of sea level rise in 2100 for the highest climate change scenario is 52 to 98 centimeters (20 to 38 inches.). However, the report notes that should sectors of the marine-based ice sheets of Antarctic collapse, sea level could rise by an additional several tenths of a meter during the 21st century. Thus, looking at the upper value of the likely range, you end up with an estimate for the upper limit between 1.2 meters and, say, 1.5 meters. That is the upper limit of global mean sea-level that coastal protection might need for the coming century.
For the past six years since publication of the AR4, the UN global climate negotiations were conducted on the basis that even without serious mitigation policies global sea level would rise only between 18 and 59 cm, with perhaps 10 or 20 cm more due to ice dynamics. Now they are being told that the best estimate for unmitigated emissions is 74 cm, and that even with the most stringent mitigation efforts, sea-level rise could exceed 60 cm by the end of the century. It is basically too late to implement measures that would very likely prevent half a meter of rise in sea level. Early mitigation is the key to avoiding higher sea-level rise, given the slow response time of sea level (Schaeffer et al. 2012). This is where the "conservative" estimates of IPCC, seen by some as a virtue, have lulled policy makers into a false sense of security, with the price having to be paid later by those living in vulnerable coastal areas.
Is the IPCC AR5 now the final word on process-based sea-level modelling? I don’t think so. I see several reasons that suggest that process models are still not fully mature, and that in future they might continue to evolve towards higher sea-level projections.
1. Although with some good will one can say the process models are now consistent with the past observed sea-level rise (the error margins overlap), the process models remain somewhat at the low end in comparison to observational data.
2. Efforts to model sea-level changes in Earth history tend to show an underestimation of past sea-level changes. E.g., the sea-level high stand in the Pliocene is not captured by current ice sheet models. Evidence shows that even the East Antarctic Ice Sheet – which is very stable in models – lost significant amounts of ice in the Pliocene.
3. Some of the most recent ice sheet modelling efforts that I have seen discussed at conferences – the kind of results that came too late for inclusion in the IPCC report – point to the possibility of larger sea-level rise in future. We should keep an eye out for the upcoming scientific papers on this.
After AR4 there was controversy over its sea-level estimates being too low. We may now have the same problem with AR5 – that the estimates are still too low.
Thus, I would not be surprised if the process-based models have closed in further on the semi-empirical models by the time the next IPCC report is published. But whether or not that happens: in any case sea-level rise is going to be a very serious problem for the future, made worse by every ton of CO2 that we emit. And it is not going to stop in the year 2100 either. By 2300, for unmitigated emissions the IPCC projects between 1 m and more than 3 m of rise.
I’m usually suspicious of articles that promise to look “behind the scenes”, but this one by Paul Voosen is not sensationalist but gives a realistic and matter-of-fact insight into the inner workings of the IPCC, for the sea-level chapter. Recommended reading!
(*) Note: For the AR5 models table 13.6 gives 58 cm from 1996; we made that 60 cm from 1990.
(**) The Principles Governing IPCC Work explicitly state that its role is to “assess…risk”, albeit phrased in a rather convoluted sentence:
The role of the IPCC is to assess on a comprehensive, objective, open and transparent basis the scientific, technical and socio-economic information relevant to understanding the scientific basis of risk of human-induced climate change, its potential impacts and options for adaptation and mitigation.
- J.A. Church, and N.J. White, "Sea-Level Rise from the Late 19th to the Early 21st Century", Surveys in Geophysics, vol. 32, pp. 585-602, 2011. http://dx.doi.org/10.1007/s10712-011-9119-1
- R.D. Ray, and B.C. Douglas, "Experiments in reconstructing twentieth-century sea levels", Progress in Oceanography, vol. 91, pp. 496-515, 2011. http://dx.doi.org/10.1016/j.pocean.2011.07.021
- M. Wenzel, and J. Schröter, "Reconstruction of regional mean sea level anomalies from tide gauges using neural networks", Journal of Geophysical Research, vol. 115, 2010. http://dx.doi.org/10.1029/2009JC005630
- S. Jevrejeva, J.C. Moore, A. Grinsted, and P.L. Woodworth, "Recent global sea level acceleration started over 200 years ago?", Geophysical Research Letters, vol. 35, 2008. http://dx.doi.org/10.1029/2008GL033611
- S. Rahmstorf, M. Perrette, and M. Vermeer, "Testing the robustness of semi-empirical sea level projections", Climate Dynamics, vol. 39, pp. 861-875, 2012. http://dx.doi.org/10.1007/s00382-011-1226-7
- S. Rahmstorf, "A Semi-Empirical Approach to Projecting Future Sea-Level Rise", Science, vol. 315, pp. 368-370, 2007. http://dx.doi.org/10.1126/science.1135456
- M. Schaeffer, W. Hare, S. Rahmstorf, and M. Vermeer, "Long-term sea-level rise implied by 1.5 °C and 2 °C warming levels", Nature Climate Change, vol. 2, pp. 867-870, 2012. http://dx.doi.org/10.1038/nclimate1584
Maybe you remember the rollout a few years ago of Open Climate 101, a massive open online class (MOOC) that was served sort of free-range from a computer at the University of Chicago. Now the class has been entirely redone as Global Warming: The Science of Climate Change within the far slicker Coursera platform. Beginning on October 21, the class is free and runs for 8 weeks. The videos have been reshot in a short and punchy (2-10 minute) format, for example here (8:13). These seem like they will be easier to watch than traditional 45-minute lectures from a classroom. It's based on, and will show you how to play with, all-new on-line computer models, including extensive new browsing systems for global climate records and model results from the new AR5 climate model archive, an ice sheet model you can clobber with slugs of CO2 as it evolves, and more. Come and join the fun!
The class follows the general structure of Open Climate 101, based on the textbook Global Warming: Understanding the Forecast. This is a class about science, but it is intended to be understandable by people without a strong science background.
Weeks 1-4 start from the very simplest model for the temperature of a planet, and build a picture of the complexity of the real climate system on Earth, with the greenhouse effect and climate feedbacks.
Weeks 5-6 consider the past and future carbon cycle.
Weeks 7-8 explain where we are and what can be done.
New On-line Climate Stuff
Coursera seems like a powerful medium for teaching any topic. But my class in particular is like no other Coursera class that I’ve heard of, in that it offers a suite of on-line interactive models that you can see here. They are always up and publicly available, so you teachers can throw students at them, no problem.
A time-series browser provides access to the GHCNM (NOAA link is currently shut down) global meteorological station monthly mean temperatures (7169 stations) and to global glacier length records (472 records). These records can be compared with climate model results from the new AR5 archive, extracted from their grids. There are 12 different models and four scenarios: Historical, HistoricalNat (natural-only), RCP2.6 (an optimistic ramping-down scenario), and RCP8.5 (less optimistic). This is a very open-ended system; my intent is to allow students to investigate a topic of their own devising, which they will write up and submit for grading by other students, a bit of Coursera wizardry.
An AR5 output mapper makes colored maps of output from climate models, including 3-D atmospheric temperature, specific humidity, and cloud fraction, and 2-D fields of precipitation, soil moisture, runoff, leaf area index, and snow cover. These are monthly mean values from the Historical and then the RCP8.5 scenarios. The browser buffers the maps so that you can switch between them quickly, or show them in a slide show or movie. This is only a tiny fraction of the AR5 model repository, but it's still enough data (about 130 GBytes) that serving it to a large MOOC audience is going to be a challenge.
The time-series browser and the map maker are both designed to be easily extended (by me, not by users). The system takes AR5 netcdf files directly as they download from the CMIP archive, and new climate records can be added to the time-series browser in simple csv format. Maybe realclimate readers or students in the class will have suggestions of what to add (I almost hate to ask!).
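For readers who want to try something similar at home: here is a minimal sketch (not the code behind the class tools, just an illustration with standard Python libraries) of how a CMIP netcdf file might be reduced to a global-mean annual csv series. The filename is hypothetical; the variable and coordinate names (tas, lat, lon, time) follow CMIP conventions:

```python
import numpy as np
import xarray as xr

# Hypothetical CMIP5 surface air temperature file, as downloaded from the archive
ds = xr.open_dataset("tas_Amon_SomeModel_rcp85_r1i1p1_200601-210012.nc")

weights = np.cos(np.deg2rad(ds.lat))                 # area weighting by latitude
global_mean = ds.tas.weighted(weights).mean(dim=("lat", "lon"))
annual = global_mean.groupby("time.year").mean()     # monthly -> annual means

annual.to_dataframe().to_csv("tas_global_annual.csv")  # simple csv, ready to browse
```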
A new model for comparing the climate impacts of CO2 and CH4, called the Slugulator, lets you release slugs of either greenhouse gas and compare the antics that ensue. A favorite feature of mine is the comparison of the energy yield from fossil fuels, next to the total greenhouse energy trapped over the lifetimes of the gases.
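The contrast the Slugulator dramatizes can be sketched in a few lines. Below is a toy comparison of how much of a one-off slug of each gas is still airborne over time, assuming a ~12-year perturbation lifetime for CH4 and, for CO2, the multi-exponential impulse-response fit used for the AR4 GWP calculations – a rough approximation, not the Slugulator's actual model:

```python
import numpy as np

t = np.arange(0, 501)   # years after a one-off "slug" release

# CH4: approximately a single exponential decay (perturbation lifetime ~12 yr)
ch4_left = np.exp(-t / 12.0)

# CO2: no single lifetime; Bern impulse-response fit (AR4 GWP parameters)
a   = [0.217, 0.259, 0.338, 0.186]
tau = [np.inf, 172.9, 18.51, 1.186]   # the first term never decays
co2_left = sum(ai * np.exp(-t / ti) for ai, ti in zip(a, tau))

for yr in (10, 50, 100, 500):
    print(f"year {yr:3d}:  CH4 {ch4_left[yr]:.2f}   CO2 {co2_left[yr]:.2f}")
```

The methane slug is essentially gone after half a century, while roughly a quarter of the CO2 slug is still there after 500 years – the famous long tail.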
Plus there are retreads of the old favorites like Modtran, with which you can demonstrate the band saturation effect, Geocarb, which shows the long tail of fossil fuel CO2 in the atmosphere, and lots more.
[Response: Updated links 10-31-13]
Last year I discussed the basis of the AR4 attribution statement:
Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.
In the new AR5 SPM (pdf), there is an analogous statement:
It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. The best estimate of the human-induced contribution to warming is similar to the observed warming over this period.
This includes differences in the likelihood statement and the drivers, as well as a new statement on the most likely amount of anthropogenic warming.
It is useful to remind ourselves that these statements address our confidence in the characterisation of the anthropogenic contribution to the global surface temperature trend since the middle of the 20th Century. This contribution is unavoidably represented by a distribution of values because of the uncertainties associated with forcings, responses and internal variability. The AR4 statement confined itself to quantifying the probability that the greenhouse-gas driven trend was less than half the total trend as being less than 10% (or alternatively, that at least 90% of the distribution was above 50%, based on the IPCC definition of "very likely"):
Figure 1: Two schematic distributions of possible 'anthropogenic contributions' to the warming over the last 50 years. Note that in each case, despite a difference in the mean and variance, the probability of being below 50% is exactly 0.1 (i.e. a 10% likelihood).
In AR5, there are two notable changes to this. First, the likelihood level is now at least 95%, so the assessment is for less than 5% probability of the trend being less than half of the observed trend. Secondly, they have switched from the 'anthropogenic greenhouse gas'-driven trend to the total anthropogenic trend. As I discussed last time, the GHG trend is almost certainly larger than the net anthropogenic trend because of the high likelihood that anthropogenic aerosols have been a net cooling over that time. Both changes lead to a stronger statement than in AR4. One change in language is neutral: the move from "most" to "more than half" was presumably simply to clarify the definition.
The second part of the AR5 statement is interesting as well. In the AR4 SPM, IPCC did not give a ‘best estimate’ for the anthropogenic contribution (though again many people were confused on this point). This time they have characterised the best estimate as being close to the observed estimate – i.e. that the anthropogenic trend is around 100% of the observed trend, implying that the best estimates of net natural forcings and internal variability are close to zero. This is equivalent to placing the peak in the distribution in the above figure near 100%.
The basis for these changes is explored in Chapter 10 on detection and attribution and is summarised in the following figure (10.5):
Figure 2. Assessed likely ranges (whiskers) and their mid-points (bars) for attributable warming trends over the 1951–2010 period due to well-mixed greenhouse gases (GHG), other anthropogenic forcings (OA), natural forcings (NAT), combined anthropogenic forcings (ANT), and internal variability. The HadCRUT4 observations are shown in black with the 5–95% uncertainty range due to observational uncertainty.
These estimates are the “assessed” trends that come out of fingerprint studies of the temperature changes and account for potential mis-estimates (over or under) of internal variability, sensitivity etc. in the models (and potentially the forcings). The raw material are the model hindcasts of the historical period – using all forcings, just the natural ones, just the anthropogenic ones and various variations on that theme.
The error bars cover the 'likely' range (the central 66%, i.e. the 17th to 83rd percentiles), so are close to being ±1 standard deviation (except for the observations (5-95%), which is closer to ±2 standard deviations). It is easy enough to see that the 'ANT' row (the combination of all anthropogenic forcings) is around 0.7 ± 0.1ºC, and the OBS are 0.65 ± 0.06ºC. If you work that through (assuming normal distributions for the uncertainties), it implies that the probability of the ANT trend being less than half the OBS trend is less than 0.02% – much less than the stated 5% level. The difference is that the less confident statement also takes into account structural uncertainties in the methodology, models and data. Similarly, the best estimate of the ratio of ANT to OBS has a 2 sd range between 0.8 and 1.4 (peak at 1.08). Consistent with this are the rows for natural forcing and internal variability – neither is significantly different from zero in the mean, and the uncertainties are too small for them to explain the observed trend with any confidence. Note that the ANT vs. NAT comparison is independent of the GHG or OA comparisons; the error bars for ANT do not derive from combining the GHG and OA results.
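That "work it through" step is easy to verify with a quick Monte Carlo, translating the figure into normal distributions as I read it (ANT likely range as ±1 sd, OBS 5-95% range as ±2 sd – so the exact numbers are indicative only):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 2_000_000

ant = rng.normal(0.70, 0.10, N)   # ANT trend (deg C), likely range read as +/-1 sd
obs = rng.normal(0.65, 0.03, N)   # OBS trend (deg C), 5-95% range read as +/-2 sd

print(f"P(ANT < half of OBS): {np.mean(ant < 0.5 * obs):.4%}")   # ~0.01%

ratio = ant / obs
print(np.round(np.percentile(ratio, [2.5, 50, 97.5]), 2))  # roughly [0.77, 1.08, 1.41]
```

The exceedance probability comes out around 0.01% – indeed below the 0.02% quoted, and far below the headline 5% once structural uncertainty is set aside; the ratio percentiles are close to the 0.8-1.4 range in the text.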
It is worth asking where the higher confidence and lower error bars come from. The longer time period (an extra 5 years) makes the trends clearer relative to the noise; multiple methodologies have been used which get the same result; and fingerprints have been better constrained by the greater use of spatial information. Small effects may also arise from better characterisations of the uncertainties in the observations (i.e. in moving from HadCRUT3 to HadCRUT4). Because of the similarity of the patterns related to aerosols and greenhouse gases, there is more uncertainty in doing the separate attributions than in looking at anthropogenic forcings collectively. Interestingly, the attribution of most of the trend to GHGs alone would still remain very likely (as in AR4); I estimate a roughly 7% probability that it would account for less than half the OBS trend. A factor that might be relevant (though I would need to confirm this) is that more CMIP5 simulations from a wider range of models were available for the NAT/ANT comparison than previously from CMIP3.
It could be argued that since recent trends have fallen slightly below the multi-model ensemble mean, our uncertainty should have massively increased and hence the confidence statement should be weaker than stated. However this doesn't really follow. Over-estimates of model sensitivity would be accounted for in the methodology (via a scaling factor of less than one), and indeed, a small over-estimate (by about 10%) is already factored in. Mis-specification of post-2000 forcings (underestimated volcanoes, Chinese aerosols or overestimated solar), or indeed uncertainties in all forcings in the earlier period, leads to reduced confidence in attribution in the fingerprint studies, and a lower estimate of the anthropogenic contribution. Finally, if the issue is related simply to a random realisation of El Niño/La Niña phases or other sources of internal variability, this simply feeds into the 'internal variability' assessment. Thus the effects of recent years are already embedded within the calculation, and will have led to a reduced confidence compared to a situation where things lined up more. Using this as an additional factor to change the confidence rating would be double counting.
There is more information on this process in the IPCC chapter itself, and in the referenced literature (particularly Ribes and Terray (2013), Jones et al (2013) and Gillett et al (2013)). There is also a summary of relevant recent papers at SkepticalScience.
Bottom line? These statements are both comprehensible and traceable back to the literature and the data. While they are conclusive, they are not a dramatic departure from basic conclusions of AR4 and subsequent literature – but then, that is exactly what one should expect.
- A. Ribes, and L. Terray, "Application of regularised optimal fingerprinting to attribution. Part II: application to global near-surface temperature", Climate Dynamics, 2013. http://dx.doi.org/10.1007/s00382-013-1736-6
- G.S. Jones, P.A. Stott, and N. Christidis, "Attribution of observed historical near-surface temperature variations to anthropogenic and natural causes using CMIP5 simulations", Journal of Geophysical Research: Atmospheres, vol. 118, pp. 4001-4024, 2013. http://dx.doi.org/10.1002/jgrd.50239
- N.P. Gillett, V.K. Arora, D. Matthews, and M.R. Allen, "Constraining the Ratio of Global Warming to Cumulative CO2 Emissions Using CMIP5 Simulations", Journal of Climate, vol. 26, pp. 6844-6858, 2013. http://dx.doi.org/10.1175/JCLI-D-12-00476.1
Making a film about climate change is difficult, especially if you want it to reach a wide audience. One problem is the long time scale of climate change, which fits badly with the time scale of a typical film narrative. That was the reason why in the Hollywood blockbuster The Day After Tomorrow some laws of physics were treated with a certain artistic freedom, in order to present a dramatic climate change within a few weeks instead of decades.
Mike and I have spent the last few days at a very interesting workshop in Iceland, where climate scientists, social scientists and filmmakers were brought together in conjunction with the Reykjavik International Film Festival. I will make no attempt to reproduce the many exciting discussions which we had, that often continued into the night. Instead, I’d like to present two short films by workshop participants. I chose a contrast of hot and cold.
First, the cold. The following film is a trailer by Phil Coates, a British filmmaker and expedition leader, who has filmed in extreme conditions on all seven continents. It is a “work in progress” under the working title “North Pole Living on Thin Ice”. Coates was dropped off with three scientists on the sea ice near the North Pole. On foot out on the Arctic Ocean they made oceanographic and ice thickness measurements. Soon you will be able to experience this research expedition on film. The scientific findings of the team will of course come out in the scientific literature.
Now, the heat. Peter Sinclair is a cartoonist from the US Midwest. Some years ago, out of anger over the aggressive disinformation campaign of climate deniers (he prefers this term), he started his now well-known video series "Climate Denial Crock of the Week". Sinclair now also produces the film series "This is Not Cool" for the renowned Yale Forum on Climate Change and the Media, and has made more than a hundred short films on climate issues. The following short film "Welcome to the Rest of Our Lives" was created in summer 2012 after the record heat wave in the United States. By his own admission, his own film brought him to tears when he had finished it.
As part of the IPCC WG1 SPM (pdf) released last Friday, there was a subtle, but important, change in one of the key figures – the radiative forcing bar-chart (Fig. SPM.4). The concept for this figure has been a mainstay of summaries of climate change science for decades, and its evolution is a good example of how thinking and understanding have progressed over the years while the big picture has not shifted much.
The Radiative-Forcing bar chart: AR5 version
The earliest version of a bar-chart that shows radiative forcing is this chart from one of Jim Hansen’s papers (Hansen et al, 1981):
In it, they demonstrate the relative importance – cooling or warming – of a number of relevant changes in radiatively important components (CO2, CH4, the sun, aerosols etc.). While the y-axis is the no-feedback surface temperature response, and the changes aren’t with reference to the pre-industrial, this might qualify as the ‘ur’-figure – the one from which all the others below are derived. (Note, if you know of an earlier version, please let me know and I’ll update the post accordingly).
I can’t find any examples for a decade or so, and in the First Assessment Report (FAR) (1990) there wasn’t such a figure either, even in the main text. (Again, please let me know if I’ve missed one). However, in the early 1990s, the figure appears in a form much closer to what we’ve come to expect. For instance, in Hansen et al (1993), the forcings in 1990 with respect to 1850 are given:
The transition to W/m2 as the unit has now been made, different greenhouse gases are separated, and an acknowledgement of more complicated issues associated with ozone and stratospheric water vapor is included. The main conclusion is that CO2 had been historically the most important forcing (around 1.24 W/m2). Shortly thereafter, the 1995 IPCC Second Assessment Report (pdf) added a couple of innovations:
Namely, an assessment of confidence, and the addition of aerosol forcings, while lumping the well-mixed gases all together. There is also the addition of the non-anthropogenic solar term. The figure was updated in 1998 and 2000 by Hansen and colleagues:
These updates added land use/land cover changes to albedo, decadal trends in volcanoes, and (in 2000) made the subtle point that the greenhouse effect from CFCs was offset a little by the impact CFCs were having on the ozone layer. An analogous diagram was very prominent in the 2001 IPCC Third Assessment report (TAR):
As with the SAR version, the confidence levels are present, there has been a switch from 1850 as a baseline in the SAR version, to 1750 in order to capture the beginning of the industrial rise in the GHGs, and again additional items were included: some aerosol related (sulphates, mineral dust, biomass burning, carbonaceous aerosols (incl. black carbon)), and two associated with aviation (via contrails and enhanced cirrus cloud formation). Concurrently, the Hansen et al (2001) version:
included even more details – the effect of black carbon on snow, nitrate aerosols, and an enhancement of the solar effect via ozone changes.
In the 2007 AR4 SPM, the main innovation was to rotate the axes by 90º and to add a bit more colour:
Stratospheric water vapour also makes a comeback, and the indirect effect of black carbon on snow makes an entrance for IPCC. In the AR5 SPM though, something more interesting happened…
The effects are now grouped by emissions, rather than by concentrations. This too has its antecedents: Fig 2.21 in the AR4 full report did the same thing, but was little noticed. In turn, that figure was drawn from work by Shindell et al (2009). This allows many of the indirect effects to be seen clearly. A particular point of interest is that the forcing by emission for CH4 is twice as large as its forcing by concentration, because of the important indirect effects on ozone and aerosols. The inclusion of CO, VOCs and NOx – normally considered as air quality issues – which affect climate via their indirect effects on ozone etc., is a salient reminder that the two issues are very much connected.
The most obvious change is that the visual styling of the graphs has improved over time. The latest version is also far more comprehensive – including more effects, more connections, more error bars – and is, arguably, more useful. This follows from the fact that it is emissions that can potentially be moderated, and the latest iteration shows explicitly what the key emissions are (as opposed to what their consequences are after atmospheric chemistry has done its thing).
A key change over time is of course the increasing forcing from CO2: 1.24 W/m2 in 1993, 1.4 W/m2 in 2001, and 1.7 W/m2 today.
The treatment of aerosols – and particularly the difference between absorbing (i.e. black carbon) and scattering (sulphates, nitrates) aerosols – has varied a lot. This is partly because of new information (on sources, concentrations, effects), but also because the aerosol issue has been reframed many times. The situation of black carbon is the most complicated. BC on its own is strongly warming, and its additional indirect effects on snow albedo amplify that. However, BC is almost always emitted in combination with organic carbonaceous aerosols (and/or secondary organic aerosol precursors), so with respect to the emission-producing activity, the net effect on temperature is partially compensated (see the TAR version for instance). BC is chiefly associated with incomplete combustion of fossil fuel, or alternatively with biomass burning (through deforestation, land clearance or naturally occurring forest fires), and these two classes of sources have sometimes been grouped (2007), and sometimes separated (2001). The AR5 version groups all the aerosol factors into one bar with each of the separate constituents delineated. A further breakdown into contributions by activity would be useful, but as I understand it, this was considered not within the scope of WG1.
One final example is also worth noting. In all of the pre-AR5 figures (except Hansen in 2000), tropospheric and stratospheric ozone were considered separately. But while there are two separate effects going on (ozone precursors increasing in the lower atmosphere, and ozone depletion due to CFCs above), there is not a clean separation between changes in the troposphere and stratosphere. Thus the AR5 version correctly shows the ozone changes as indirect effects of the different emissions without delineating where the changes in ozone are occurring. This is a definite conceptual improvement among many.
- J. Hansen, M. Sato, R. Ruedy, A. Lacis, and V. Oinas, "Global warming in the twenty-first century: An alternative scenario", Proceedings of the National Academy of Sciences, vol. 97, pp. 9875-9880, 2000. http://dx.doi.org/10.1073/pnas.170278997
- D.T. Shindell, G. Faluvegi, D.M. Koch, G.A. Schmidt, N. Unger, and S.E. Bauer, "Improved Attribution of Climate Forcing to Emissions", Science, vol. 326, pp. 716-718, 2009. http://dx.doi.org/10.1126/science.1174760
The time has come: the new IPCC report is here! After several years of work by over 800 scientists from around the world, and after days of extensive discussion at the IPCC plenary meeting in Stockholm, the Summary for Policymakers was formally adopted at 5 o'clock this morning. Congratulations to all the colleagues who were there and worked night shifts. The full text of the report will be available online at the beginning of next week. Realclimate summarizes the key findings and shows the most interesting graphs.
Update 29 Sept: Full (un-copyedited) report available here.
It is now considered even more certain (>95%) that human influence has been the dominant cause of the observed warming since the mid-20th century. Natural internal variability and natural external forcings (e.g. the sun) have contributed virtually nothing to the warming since 1950 – the share of these factors was narrowed down by IPCC to ±0.1 °C. The measured temperature evolution is shown in the following graph.
Figure 1 The measured global temperature curve from several data sets. Top: annual values. Bottom: averaged values over a decade.
Those who have these data before their eyes can recognise immediately how misguided the big media attention for the "wiggles" of the curves towards the end has been. Short-term variations like this have always existed, and they always will. They are mostly random and (at least so far) not predictable, and the IPCC has never claimed to be able to make predictions for short periods of 10-15 years, precisely because these are dominated by such natural variations.
The last 30 years were probably the warmest in at least 1,400 years. This result comes from improved proxy data. In the 3rd IPCC report this could only be said about the last thousand years, in the 4th about the last 1,300 years.
The future warming by 2100 – with comparable emission scenarios – is about the same as in the previous report. For the highest scenario, the best-estimate warming by 2100 is still 4 °C (see the following chart).
Figure 2 The future temperature development in the highest emissions scenario (red) and in a scenario with successful climate mitigation (blue) – the “4-degree world” and the “2-degree world.”
What is new is that IPCC has also studied climate mitigation scenarios. The blue RCP2.6 is such a scenario with strong emissions reduction. With this scenario global warming can be stopped below 2 °C.
A large part of the warming will be irreversible: from the point where emissions have dropped to zero, global temperature will remain almost constant for centuries at the elevated level reached by that time. (This is why the climate problem in my opinion is a classic case for the precautionary principle.)
Sea levels are rising faster now than in the previous two millennia, and the rise will continue to accelerate – regardless of the emissions scenario, even with strong climate mitigation. (This is due to the inertia in the system.) The new IPCC scenarios to 2100 are shown in the following graph.
Figure 3 Rise of the global sea level until the year 2100, depending on the emissions scenario.
This is perhaps the biggest change over the 4th IPCC report: a much more rapid sea-level rise is now projected (28-98 cm by 2100). This is more than 50% higher than the old projections (18-59 cm) when comparing the same emission scenarios and time periods.
With unabated emissions (and not only for the highest scenario), the IPCC estimates that by the year 2300 global sea levels will rise by 1-3 meters. [Correction: the document actually says: "1 m to more than 3 m"]
Storm surges have likely already become more frequent as a result of sea-level rise, and for the future this is rated very likely.
Land and sea ice
Over the last two decades, the Greenland and Antarctic ice sheets have been losing mass, glaciers have continued to shrink almost worldwide, and Arctic sea ice and Northern Hemisphere spring snow cover have continued to decrease in extent.
The Greenland ice sheet is less stable than expected in the last report. In the Eemian (the last interglacial period 120,000 years ago, when global temperature was 1-2 °C higher than today) global sea level was 5-10 meters higher than today (in the 4th IPCC report this was thought to be just 4-6 meters). Due to better data, very high confidence is assigned to this. Since a total loss of the Greenland ice sheet corresponds to 7 meters of sea-level rise, this may indicate ice loss from Antarctica in the Eemian as well.
In the new IPCC report the critical temperature limit at which a total loss of the Greenland ice sheet will occur is estimated as 1 to 4°C of warming above preindustrial temperature. In the previous report that was still 1.9 to 4.6 °C – and that was one of the reasons why international climate policy has agreed to limit global warming to below 2 degrees.
With unabated emissions (RCP8.5) the Arctic Ocean will likely become virtually ice-free in summer before the middle of the century (see figure). In the last report, this was not expected until near the end of the century.
Figure 4 The ice cover on the Arctic Ocean in the 2-degree world (left) and the 4-degree world (right).
The IPCC expects that dry areas become drier due to global warming, and moist areas even wetter. Extreme rainfall has likely already been increasing in North America and Europe (elsewhere the data are not so good). Future extreme precipitation events are very likely to become more intense and more frequent over most land areas of the humid tropics and mid-latitudes.
At high emissions (red scenario above), the IPCC expects a weakening of the Atlantic Ocean circulation (commonly known as the Gulf Stream system) by 12% to 54% by the end of the century.
Last but not least, our CO2 emissions not only cause climate change, but also an increase in the CO2 concentration in sea water, and the oceans acidify due to the carbonic acid that forms. This is shown by the measured data in the graph below.
Figure 5 Measured CO2 concentration and pH in seawater. Low pH means higher acidity.
The new IPCC report gives no reason for complacency – even if politically motivated “climate skeptics” have tried to give this impression ahead of its release with frantic PR activities. Many wrong things have been written which now collapse in the light of the actual report.
The opposite is true. Many developments are now considered to be more urgent than in the fourth IPCC report, released in 2007. That the IPCC often needs to correct itself “upward” is an illustration of the fact that it tends to produce very cautious and conservative statements, due to its consensus structure – the IPCC statements form a kind of lowest common denominator on which many researchers can agree. The New York Times has given some examples for the IPCC “bending over backward to be scientifically conservative”. Despite or perhaps even because of this conservatism, IPCC reports are extremely valuable – as long as one is aware of it.
Update & Correction (28 Sept): The upper value of the sea-level range is 98 cm, not 97 cm – I overlooked the fact that IPCC corrected this between the final draft and the approved version of the SPM.
Some media wrongly report a rise of “only” up to 82 cm by the year 2100. That is a misunderstanding: 82 cm is the average for the period 2081-2100, not the level reached in 2100. Both the curves up to 2100 and those 20-year averages are shown in Fig. 3 above. Note that the additional rise of up to 16 cm in the final decade illustrates the horrendous rates of rise we can get by the end of the century with unmitigated emissions.
It is also worth noting that the 98 cm is the upper value of a “likely” range (66% probability to be within that range). As IPCC also notes, we could end up “several tens of centimeters” higher if the marine-based parts of the Antarctic ice sheet become unstable. Leading ice experts, like Richard Alley and Rob De Conto, consider this a serious risk.
Mike Mann: Climate-Change Deniers Must Stop Distorting the Evidence
Stefan Rahmstorf: The Known Knowns of Climate Change
The heat content of the oceans is growing and growing. That means that the greenhouse effect has not taken a pause and the cold sun is not noticeably slowing global warming.
NOAA posts regularly updated measurements of the amount of heat stored in the bulk of the oceans. For the upper 2000 m (deeper than that not much happens) it looks like this:
Change in the heat content in the upper 2000 m of the world’s oceans. Source: NOAA
The amount of heat stored in the oceans is one of the most important diagnostics for global warming, because about 90% of the additional heat is stored there (you can read more about this in the last IPCC report from 2007). The atmosphere stores only about 2% because of its small heat capacity. The ground (including the continental ice masses) can take up heat only slowly, because it is a poor heat conductor. Thus, heat absorbed by the oceans accounts for almost all of the planet’s radiative imbalance.
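A rough heat-capacity comparison makes this partitioning plausible (standard textbook values, good for orders of magnitude only – the actual split also depends on how fast heat is mixed downward):

```python
# Heat capacity of the top 2000 m of ocean vs. the whole atmosphere
ocean_area = 3.6e14            # m^2
rho_w, cp_w = 1025.0, 3990.0   # seawater density (kg/m^3), heat capacity (J/kg/K)
c_ocean = ocean_area * 2000 * rho_w * cp_w     # ~3e24 J/K

m_atm, cp_air = 5.1e18, 1004.0                 # atmospheric mass (kg), cp (J/kg/K)
c_atm = m_atm * cp_air                         # ~5e21 J/K

print(f"ocean (0-2000 m) / atmosphere: ~{c_ocean / c_atm:.0f}x")   # several hundred
```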
If the oceans are warming up, this implies that the Earth absorbs more solar energy than it emits as longwave radiation into space. This is the only possible heat source. That’s simply the first law of thermodynamics, conservation of energy. This conservation law is why physicists are so interested in the energy balance of anything. Because we understand the energy balance of our Earth, we also know that global warming is caused by greenhouse gases – which have caused the largest imbalance in the radiative energy budget over the last century.
If the greenhouse effect (which restricts the escape of longwave radiation from Earth into space) or the amount of absorbed sunlight had diminished, one would see a slowing in the heat uptake of the oceans. The measurements show that this is not the case.
The increase in the amount of heat in the oceans amounts to 17 × 10²² Joules over the last 30 years. That is so much energy it is equivalent to exploding a Hiroshima bomb every second in the ocean for thirty years.
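That comparison is easy to check, assuming a Hiroshima yield of about 6.3 × 10¹³ J (roughly 15 kilotons of TNT):

```python
heat_gain = 17e22                        # J added to the 0-2000 m ocean in 30 years
seconds = 30 * 365.25 * 24 * 3600        # ~9.5e8 s
print(heat_gain / 6.3e13 / seconds)      # ~2.9 bombs per second
```

If anything, "one bomb per second" understates it – the number comes out closer to three per second.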
The data in the graphs come from the World Ocean Database. Wikipedia has a fine overview of this database. The data set includes nine million measured temperature profiles from all of the world’s oceans. One of my personal heroes, the oceanographer Syd Levitus, has dedicated much of his life to making these oceanographic data freely available to everyone. During the Cold War that even landed him in a Russian jail for espionage for a while, as he was visiting Russia on his quest for oceanographic data (he once told me of that adventure over breakfast in a Beijing hotel).
How to deny data
Ideologically motivated “climate skeptics” know that these data contradict their claims, and respond … by rejecting the measurements. Millions of measurements are dismissed as “negligible” – the work of generations of oceanographers vanishes with a journalist’s stroke of a pen, because what should not exist, cannot be. “Climate skeptics’” web sites even claim that the measurement uncertainty in the average of 3000 Argo probes is the same as that of each individual probe. Thus not only are the results of climate research called into question, but so are the elementary rules of error calculation that every science student learns in their first semester. Anything goes when you have to deny global warming. Even more bizarre is the Star Trek argument – but let me save that for later.
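The error-propagation point is first-semester material and easy to demonstrate – here assuming, purely for illustration, an independent 0.5 °C error on each individual profile:

```python
import numpy as np

sigma_single, n = 0.5, 3000
print(sigma_single / np.sqrt(n))     # ~0.009 degC: standard error of the mean

rng = np.random.default_rng(1)       # the same thing, empirically
means = rng.normal(0.0, sigma_single, (2000, n)).mean(axis=1)
print(means.std())                   # again ~0.009 degC, not 0.5 degC
```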
Slowdown in the upper ocean
Let us look at the upper ocean (for historic reasons defined as the upper 700 m):
Change in the heat content of the upper 700 m of the oceans. Source: NOAA
And here is the direct comparison since 1980:
Changes in the heat content of the oceans. Source: Abraham et al., 2013. The 2-sigma uncertainty for 1980 is 2 × 10²² J and for recent years 0.5 × 10²² J.
We see two very interesting things.
First: Roughly two thirds of the warming since 1980 occurred in the upper ocean. The heat content of the upper layer has gone up twice as much as in the lower layer (700 – 2000 m). The average temperature of the upper layer has increased more than three times as much as the lower (because the upper layer is only 700 m thick, and the lower one 1300 m). That is not surprising, as after all the ocean is heated from above and it takes time for the heat to penetrate deeper.
Second: In the last ten years the upper layer has warmed more slowly than before. In spite of this, the temperature there is still changing as rapidly as in the lower layer. This recent slower warming of the upper ocean is closely related to the slower warming of the global surface temperature, because the temperature of the overlying atmosphere is strongly coupled to the temperature of the ocean surface.
That the heat absorption of the ocean as a whole (at least down to 2000 m) has not significantly slowed makes it clear that the reduced warming of the upper layer is not (or at least not mostly) due to decreased heating from above, but mostly due to greater heat loss downwards: through the 700 m level, from the upper into the lower layer. (The transition from solar maximum to solar minimum probably also contributed a small part, as planetary heat absorption decreased by about 15%; Abraham et al., 2013.) It is difficult to establish the exact mechanism for this stronger heat flux to deeper water, given the diverse internal variability in the oceans.
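As an aside, the arithmetic behind the "twice the heat, more than three times the temperature change" statement above is simple:

```python
# Converting relative heat gains into average-temperature changes via layer thickness
h_upper, h_lower = 700.0, 1300.0   # layer thicknesses (m)
q_upper, q_lower = 2.0, 1.0        # relative heat gains since 1980 (from the text)
print((q_upper / h_upper) / (q_lower / h_lower))   # ~3.7
```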
Association with El Niño
Completely independently of this oceanographic data, a simple correlation analysis (Foster and Rahmstorf ERL 2011) showed that the flatter warming trend of the last 10 years was mostly a result of natural variability, namely the recently more frequent appearance of cold La Niña events in the tropical Pacific and a small contribution from decreasing solar activity. The effect of La Niña can be seen directly in the following figure, without any statistical analysis. It shows the annual values of the global temperature with El Niño periods highlighted in red and La Niña periods in blue. (Weekly updates on the current El Niño situation can be found here.)
Global surface temperature (average of the three series from NOAA, NASA and HadCRU). Years influenced by El Niño are shown in red, La Niña influenced years in blue. Source: Climate Central, updated figure from the World Meteorological Organization (WMO) p. 15.
One finds that both the red El Niño years and the blue La Niña years are getting warmer, but given that we have lately experienced a cluster of La Niña years the overall warming trend over the last ten years is slower. This can be thought of as the “noise” associated with natural variability, not a change in the “signal” of global warming (as discussed many times before here at RealClimate).
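The logic of such a correlation analysis can be illustrated with synthetic data (invented numbers, not the actual Foster & Rahmstorf analysis, which also includes solar and volcanic terms): regress temperature on time and an ENSO index together, and the underlying trend is recovered even when a cluster of La Niña years flattens the raw trend.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1970, 2013)
t = years - years[0]

enso = rng.normal(0.0, 1.0, years.size)
enso[-8:] -= 1.5                       # a recent cluster of La Nina years
temp = 0.017 * t + 0.1 * enso + 0.05 * rng.normal(size=years.size)

trend_raw = np.polyfit(t, temp, 1)[0]

X = np.column_stack([np.ones_like(t, dtype=float), t, enso])  # regression design
trend_adj = np.linalg.lstsq(X, temp, rcond=None)[0][1]

print(f"raw trend:           {trend_raw:.4f} degC/yr (dragged down at the end)")
print(f"ENSO-adjusted trend: {trend_adj:.4f} degC/yr (close to the true 0.017)")
```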
This is consistent with the finding that reduced warming is not mainly a result of a change in radiation balance but due to oceanic heat storage. During La Niña events (with cold ocean surface) the ocean absorbs additional heat that it releases during El Niño events (when the ocean surface is warm). The next El Niño event (whenever it comes – that is a stochastic process) is likely to produce a new global mean temperature record (as happened in 2010).
The reason for the change is a specific shift in the winds, especially in the subtropical Pacific, where the trade winds have become noticeably stronger. This altered the ocean currents, strengthening the subtropical sea-water circulation and thus providing a mechanism that transports heat into the deeper ocean. It is related to the decadal weather pattern in the Pacific associated with the La Niña phase of the El Niño phenomenon.
New results from climate modelling
A study by Kosaka and Xie recently published in Nature confirms that the slowing rise in global temperatures during recent years has been a result of prevalent La Niña periods in the tropical Pacific. The authors write in the abstract:
Our results show that the current hiatus is part of natural climate variability, tied specifically to a La-Niña-like decadal cooling.
They show this with an elegant experiment, in which they “force” their global climate model to follow the observed history of sea surface temperatures in the eastern tropical Pacific. With this trick the model is made to replay the actual sequence of El Niño and La Niña events found in the real world, rather than producing its own events by chance. The result is that the model then also reproduces the observed global average temperature history with great accuracy.
There are then at least three independent lines of evidence that confirm we are not dealing with a slowdown in the global warming trend, but rather with progressive global warming with superimposed natural variability:
1. Our correlation analysis between global temperature and the El Niño Index.
2. The measurements of oceanic heat uptake.
3. The new model calculation of Kosaka and Xie.
Beam me up Scotty!
Now to the most amusing attempt of “climate skeptics” to wish these scientific results away. Their argument goes like this: It is not possible that warming of the deep ocean accelerates at the same time as warming of the upper ocean slows down, because the heat must pass through the upper layer to reach the depths. A German journalist put it this way:
Winds can do a lot, but can they beam warm surface waters heated by carbon dioxide 700 meters further down?
This argument reveals once again the shocking lack of understanding of basic physics in “climate skeptic” circles. First, the alleged problem lacks any factual basis – after all, in recent decades the upper layer of the oceans has warmed faster than the deeper one (even if lately not quite as fast as before). What is the problem with the heat first warming the upper layer before it penetrates deeper? That is entirely as expected.
Second, physically there is absolutely no problem for wind changes to cool the upper ocean at the same time as they warm the deeper layers. The following figure shows a simple example of how this can happen (there are also other possible mechanisms).
The ocean is known to be thermally stratified, with a warm layer, some hundreds of meters thick, lying on top of a cold deep ocean (a). In the real world the transition is more gradual, not a sharp boundary as in the simplified diagram. Panel (b) shows what happens if the wind is turned on. The surface layer (above the dashed depth level) becomes on average colder (less red), the deep layer warmer. The average temperature changes are not the same (because of the different thicknesses of the layers), but the changes in heat content are – what the upper layer loses in heat, the lower one gains. The First Law of Thermodynamics sends greetings.
Incidentally, that is the well-known mechanism of El Niño: (a) corresponds roughly to El Niño (with a warm eastern tropical Pacific) while (b) is like La Niña (cold eastern tropical Pacific). The winds are the trade winds. The figure greatly exaggerates the slope of the layer interface, because in reality the ocean is paper thin. Even a difference of 1000 m across the width of the Pacific (let’s say 10,000 km) leads to a slope of only 1:10,000 – which no one could distinguish from a perfectly horizontal line without massive vertical exaggeration.
Now if during the transition from (a) to (b) the upper layer is heated by the greenhouse effect, its temperature could remain constant while that of the lower one warmed. Simple classical physics without beaming.
Beam me up Scotty! There is no intelligent life on this planet.
Tamino provides his usual detailed analysis of the new study by Kosaka and Xie.
Dana Nuccitelli in the Guardian on the same paper with some further interesting aspects that I have not talked about here.
Another important point that is often forgotten in the discussion: The data hole in the Arctic that explains part of the reduced warming trend (maybe even more than previously thought).
And a reminder: The warming trend of the 15-year period up to 2006 was almost twice as fast as expected (0.3°C per decade, see Fig. 4 here), and (rightly) nobody cared. We published a paper in Science in 2007 where we noted this large trend, and as the first explanation for it we named “intrinsic variability within the climate system”. Which it turned out to be.
Levitus et al. (Geophysical Research Letters 2012). Documentation of the heat increase in the world’s oceans since 1955. Included are uncertainty analyses, maps of the measurement coverage and many illustrations of the regional and vertical distribution of the warming.
Balmaseda et al. (Geophysical Research Letters 2013) shows among other things that El Niño events are associated with a strong loss of heat from the oceans. As discussed above, during an El Niño the ocean loses heat to the surface because the surface of the ocean (see Fig. (a) above) is unusually warm. Further, during volcanic eruptions the ocean cools but for another reason: because volcanic aerosols shade the sun and thus the oceans are heated less than normal.
Guemas et al. (Nature Climate Change 2013) shows that the slower warming of the last ten years cannot be explained by a change in the radiative balance of our Earth, but rather by a change in the heat storage of the oceans, and that this can be at least partially reproduced by climate models, if one accounts for the natural fluctuations associated with El Niño in the initialization of the models.
Abraham et al. (Reviews of Geophysics 2013). Very recent, wide ranging review of temperature measurements in the oceans with a detailed discussion of the accuracy of the data, planetary energy balance and the effect of the warming on sea levels.
Recently a group of researchers from Harvard and Oregon State University has published the first global temperature reconstruction for the last 11,000 years – that’s the whole Holocene (Marcott et al. 2013). The results are striking and worthy of further discussion, after the authors have already commented on their results in this blog.
A while ago, I discussed here the new, comprehensive climate reconstruction from the PAGES 2k project for the past 2000 years. But what came before that? Does the long-term cooling trend that ruled most of the last two millennia reach even further back into the past?
Over the last decades, numerous researchers have painstakingly collected, analyzed, dated, and calibrated many data series that allow us to reconstruct climate before the age of direct measurements. Such data come e.g. from sediment drilling in the deep sea, from corals, ice cores and other sources. Shaun Marcott and colleagues for the first time assembled 73 such data sets from around the world into a global temperature reconstruction for the Holocene, published in Science. Or strictly speaking, many such reconstructions: they have tried about twenty different averaging methods and also carried out 1,000 Monte Carlo simulations with random errors added to the dating of the individual data series to demonstrate the robustness of their results.
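To make the idea of these Monte Carlo simulations concrete, here is a schematic Python sketch – emphatically not the Marcott et al. code or data: the records, the 20-year grid and the 150-year dating uncertainty below are all invented for illustration – of how one perturbs the age models with random errors and re-averages, to see how robust the resulting stack is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated stand-ins for proxy records (ages in years BP, temperature
# anomalies in degC) -- purely for illustration, not the real 73 records.
def fake_record():
    ages = np.sort(rng.uniform(0, 11000, 60))
    temps = 0.5 * np.cos(np.pi * ages / 11000) + rng.normal(0, 0.3, ages.size)
    return ages, temps

records = [fake_record() for _ in range(10)]
grid = np.arange(0, 11001, 20)   # common time grid, years BP
age_sigma = 150.0                # assumed dating uncertainty, years

n_mc = 1000
stacks = np.empty((n_mc, grid.size))
for i in range(n_mc):
    members = []
    for ages, temps in records:
        jitter = ages + rng.normal(0, age_sigma, ages.size)  # perturb age model
        order = np.argsort(jitter)
        members.append(np.interp(grid, jitter[order], temps[order]))
    stacks[i] = np.mean(members, axis=0)   # a simple average as the "stack"

mean = stacks.mean(axis=0)                           # best-estimate curve
lo, hi = np.percentile(stacks, [2.5, 97.5], axis=0)  # Monte Carlo spread
```

The spread of the 1,000 perturbed stacks then shows which features of the curve survive realistic dating errors and which do not.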
To show the main result straight away, it looks like this:
Figure 1 Blue curve: Global temperature reconstruction from proxy data of Marcott et al., Science 2013. Shown here is the RegEM version – significant differences between the variants with different averaging methods arise only towards the end, where the number of proxy series decreases. This does not matter, since the recent temperature evolution is well known from instrumental measurements, shown in red (global temperature from the instrumental HadCRUT data). Graph: Klaus Bitterman.
The climate curve looks like a “hump”. At the beginning of the Holocene – after the end of the last Ice Age – global temperature increased, and subsequently it decreased again by 0.7 °C over the past 5000 years. The well-known transition from the relatively warm medieval period into the “little ice age” turns out to be part of a much longer-term cooling, which ended abruptly with the rapid warming of the 20th Century. Within a hundred years, the cooling of the previous 5000 years was undone. (One result of this is, for example, that the famous iceman ‘Ötzi’, who disappeared under ice 5000 years ago, reappeared in 1991.)
The shape of the curve is probably not surprising to climate scientists as it fits with the forcing due to orbital cycles. Marcott et al. illustrate the orbital forcing with this graphic:
Figure 2 Changes in incoming solar radiation as a function of latitude in December, January and annual average, due to the astronomical Milankovitch cycles (known as orbital forcing). Source: Marcott et al., 2013.
In the bottom panel we see the sunlight averaged over the year, as it depends on time and latitude. It declined strongly in the mid to high latitudes over the Holocene, but increased slightly in the tropics. In the Marcott reconstruction the global temperature curve is dominated primarily by the large temperature changes in northern latitudes (30-90 °N). For this, the middle panel is particularly relevant: the summer maximum of the incoming radiation. It declines massively during the Holocene – by more than 30 watts per square meter. (For comparison: the anthropogenic carbon dioxide in the atmosphere produces a radiative forcing of about 2 watts per square meter – albeit globally and throughout the year.) The climate system is particularly sensitive to this summer insolation, because it is amplified by the snow- and ice-albedo feedback. That is why in the Milanković theory summer insolation is the determining factor for the ice age cycles – the strong radiation maximum at the beginning of the Holocene is the reason why the ice masses of the last Ice Age disappeared.
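To get a feel for the numbers, here is a small Python sketch using the standard textbook formula for daily-mean insolation (see e.g. Hartmann, Global Physical Climatology). It isolates only the obliquity contribution at 65°N with illustrative tilt values; the full orbital solution (Berger) also varies precession and eccentricity, which supply most of the >30 W/m² summer decline mentioned above:

```python
import numpy as np

S0 = 1361.0  # solar constant, W/m^2

def daily_insolation(lat_deg, dec_deg, dist_factor=1.0):
    """Daily-mean top-of-atmosphere insolation for a given latitude and
    solar declination (standard formula). dist_factor is the squared
    ratio of mean to actual Sun-Earth distance (here left at 1)."""
    lat, dec = np.radians(lat_deg), np.radians(dec_deg)
    cos_h0 = np.clip(-np.tan(lat) * np.tan(dec), -1.0, 1.0)
    h0 = np.arccos(cos_h0)   # hour angle of sunset (polar day/night clipped)
    return (S0 / np.pi) * dist_factor * (
        h0 * np.sin(lat) * np.sin(dec)
        + np.cos(lat) * np.cos(dec) * np.sin(h0))

# Summer solstice at 65N: obliquity contribution alone (illustrative values)
today = daily_insolation(65, 23.44)   # present-day tilt
early = daily_insolation(65, 24.23)   # tilt roughly 9,000 years ago
print(f"65N solstice today:           {today:6.1f} W/m^2")
print(f"65N solstice, early Holocene: {early:6.1f} W/m^2")
print(f"from obliquity alone:         {early - today:+6.1f} W/m^2")
```

Obliquity alone accounts for roughly 13 W/m² of the decline in this sketch; the rest comes mainly from precession, since perihelion fell in northern summer in the early Holocene.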
However a puzzle remains: climate models don’t seem to get this cooling trend over the last 5,000 years. Maybe they are underestimating the feedbacks that amplify the northern orbital forcing shown in Fig. 2. Or maybe the proxy data do not properly represent the annual mean temperature but have a summer bias – as Fig. 2 shows, it is in summer that the solar radiation has declined so strongly since the mid-Holocene. As Gavin has just explained very nicely: a model-data mismatch is an opportunity to learn something new, but it takes work to find out what it is.
Comparison with the PAGES 2k reconstruction
The data used by Marcott et al. differ from those of the PAGES 2k project (which used land data only) mainly in that 80% of them come from deep-sea sediments. Sediments reach much further back in time (far beyond the Holocene – but that is another story), unlike tree-ring data, which are mainly suitable for the last two thousand years and rarely reach further back. However, sediment data have poorer time resolution and do not extend right up to the present, because the surface of the sediment is disturbed when the core is taken. The methods of temperature reconstruction are also very different from those used with land data: in sediment cores, for example, the ratio of oxygen isotopes or the ratio of magnesium to calcium in the calcite shells of microscopic plankton is used, both of which correlate well with water temperature. Each sediment core can thus be individually calibrated to obtain a temperature time series for its location.
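As an illustration of such a calibration, here is a minimal Python sketch of the common exponential Mg/Ca paleothermometer. The coefficients are species-dependent; the defaults below follow a widely used planktonic foraminifera calibration and serve purely as an example – they are not the specific calibrations of Marcott et al.:

```python
import numpy as np

def mgca_to_temperature(mgca_mmol_mol, A=0.09, B=0.38):
    """Invert the exponential Mg/Ca paleothermometer Mg/Ca = B * exp(A*T).
    A (per degC) and B (mmol/mol) vary by species; these values are a
    commonly used planktonic calibration, for illustration only."""
    return np.log(np.asarray(mgca_mmol_mol) / B) / A

# A made-up down-core series of Mg/Ca ratios (mmol/mol):
mgca = [3.2, 3.0, 2.9, 2.7, 2.6]
print(np.round(mgca_to_temperature(mgca), 1))  # inferred temperatures, degC
```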
Overall, the new Marcott reconstruction is largely independent of, and nicely complementary to, the PAGES 2k reconstruction: ocean instead of land, completely different methodology. Therefore, a comparison between the two is interesting:
Figure 3 The last two thousand years from Figure 1, in comparison to the PAGES 2k reconstruction (green), which was recently described here in detail. Graph: Klaus Bitterman.
As we can see, both reconstruction methods give consistent results. That the evolution over the last thousand years is virtually identical in both is, by the way, yet another confirmation of the “hockey stick” of Mann et al. 1999, which matches them closely as well (see the graph in my PAGES article).
Is the modern warming unique?
Because of the above-mentioned limitations of sediment cores, the new reconstruction does not reach the present but only goes to 1940, and the number of proxy series used declines strongly even before that. (Hence the uncertainty range widens towards the end and the reconstructions with different averaging methods diverge there – we show the RegEM variant here because it copes best with the declining data coverage. For a detailed analysis see the article by statistician Grant Foster.) The warming of the 20th Century is therefore only partially captured – but this is not a serious problem, because this warming is very well documented by weather stations anyway. There can be no doubt about the climatic warming during the 20th Century.
There is a degree of flexibility in how the proxy data (blue) should be joined with the thermometer data (red) – here I have done this so that the average temperatures of the Marcott curve and the PAGES 2k reconstruction over the period 1000 to 1940 AD are equal. I think this is better than the choice of Marcott et al. (whose paper was published before PAGES 2k) – but this is not important. The relative positioning of the curves determines whether the temperatures at the end are slightly higher than ever before in the Holocene, or only (as Marcott et al. write) higher than during 85% of the Holocene. Let us just say they are roughly as high as during the Holocene optimum: maybe slightly cooler, maybe slightly warmer. This is not critical.
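In code, this alignment is nothing more than adding a constant offset. A minimal sketch (the function name and arguments are mine, for illustration):

```python
import numpy as np

def align_to_reference(years, proxy, ref_years, ref, lo=1000, hi=1940):
    """Shift a proxy anomaly series by a constant so that its mean over
    [lo, hi] equals that of a reference series -- the baseline choice
    described in the text (matching PAGES 2k over 1000-1940 AD). This
    changes only the offset of the curve, never its shape."""
    m = (years >= lo) & (years <= hi)
    mr = (ref_years >= lo) & (ref_years <= hi)
    return proxy + (ref[mr].mean() - proxy[m].mean())

# Usage (with hypothetical arrays):
# marcott_aligned = align_to_reference(m_years, m_temp, p2k_years, p2k_temp)
```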
The important point is that the rapid rise in the 20th Century is unique throughout the Holocene. Whether this is really true was intensively discussed on the blogs after publication of the Marcott paper, because the proxy data have only coarse time resolution: would they have shown a similarly rapid warming, had one occurred earlier in the Holocene?
I think for three reasons it is extremely likely that there was not such a rapid warming before:
1. There are a number of high-resolution proxy data series over the Holocene, none of which suggest that there was a previous warming spike as strong as in the 20th Century. Had there been such a global warming before, it would very likely have registered clearly in some of these data series, even if it didn’t show up in the averaged Marcott curve.
2. Grant Foster performed the test: he hid some “20th C style” warming spikes in earlier parts of the proxy data to see whether they would be revealed by the method of Marcott et al. – the answer is a resounding yes, they would show up (albeit attenuated) in the averaged curve; see his article if you are interested in the details, or the toy version sketched after this list. [Update 18 Sept: one of our readers has confirmed this conclusion with a different method (Fourier filtering). Thanks!]
3. Such a warming must have a physical cause, and that cause would have to have quickly disappeared again (had it lasted, the warming would be even more evident in the proxy data). There is no evidence in the forcing data that such a climate forcing could have suddenly appeared and disappeared, and I cannot imagine what the mechanism could have been. (A CO2-induced warming would persist until the CO2 concentration decays again over thousands of years – and of course we have good data on the concentration of CO2 and other greenhouse gases for the whole Holocene.)
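To illustrate the logic of point 2, here is a toy version of such a spike test in Python – a deliberately crude stand-in (a Gaussian spike and a simple running mean) for both the real proxy stack and the actual averaging procedure of Marcott et al.:

```python
import numpy as np

dt = 20                                   # resolution of the stack, years
t = np.arange(0, 11000, dt)               # years BP
base = 0.35 * np.cos(np.pi * t / 11000)   # smooth Holocene-like background

# A hypothetical "20th-century-style" spike, ~0.9 degC rising and falling
# within about two centuries, buried at 6,000 years BP:
spike = 0.9 * np.exp(-0.5 * ((t - 6000) / 75.0) ** 2)

# Crudely mimic the smoothing of a low-resolution multi-proxy stack with
# a ~300-year running mean (a stand-in for the real averaging method):
n = 300 // dt
w = np.ones(n) / n
smoothed_spike = (np.convolve(base + spike, w, "same")
                  - np.convolve(base, w, "same"))

print(f"spike height before smoothing: {spike.max():.2f} degC")
print(f"spike height after smoothing:  {smoothed_spike.max():.2f} degC")
```

In this toy setup the spike comes through attenuated to roughly half its height, but it remains unmistakable – which is the essence of Foster’s finding.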
The curve (or better: curves) of Marcott et al. will not be the last word on the global temperature history of the Holocene; like Mann et al. in 1998, it is the opening of a scientific discussion. There will certainly be alternative proposals, and here and there some corrections and improvements. However, I believe that (as with Mann et al. for the last millennium) the basic shape will turn out to be robust: a relatively smooth curve with a slow cooling trend lasting millennia, from the Holocene optimum to the “little ice age”, driven mainly by the orbital cycles. At the end, this cooling trend is abruptly reversed by modern anthropogenic warming.
The following graph shows the Marcott reconstruction complemented by some context: the warming at the end of the last Ice Age (which 20,000 years ago reached its peak) and a medium projection for the expected warming in the 21st Century if humanity does not quickly reduce greenhouse gas emissions.
Figure 4 Global temperature variation since the last ice age 20,000 years ago, extended until 2100 for a medium emissions scenario with about 3 degrees of global warming. Graph: Jos Hagelaars.
Marcott et al. dryly state about this future prospect:
By 2100, global average temperatures will probably be 5 to 12 standard deviations above the Holocene temperature mean.
In other words: We are catapulting ourselves way out of the Holocene.
Just looking at the known drivers (climate forcings) and the actual temperature history shows it directly, without need for a climate model: without the increase in greenhouse gases caused by humans, the slow cooling trend would have continued. Thus virtually the entire warming of the 20th Century is due to man. This May, for the first time in at least a million years, the concentration of carbon dioxide in our atmosphere has exceeded the threshold of 400 ppm. If we do not stop this trend very soon, we will not recognize our Earth by the end of this century.
It is a truism that all models are wrong. Just as no map can capture the real landscape and no portrait the true self, numerical models by necessity have to contain approximations to the complexity of the real world and so can never be perfect replications of reality. Similarly, any specific observations are only partial reflections of what is actually happening and have multiple sources of error. It is therefore to be expected that there will be discrepancies between models and observations. However, why these arise and what one should conclude from them are interesting and more subtle than most people realise. Indeed, such discrepancies are the classic way we learn something new – and it often isn’t what people first thought of.
The first thing to note is that any climate model-observation mismatch can have multiple (non-exclusive) causes which (simply put) are:
- The observations are in error
- The models are in error
- The comparison is flawed
In climate science there have been multiple examples of each possibility and multiple ways in which each set of errors has arisen, and so we’ll take them in turn.
1. Observational Error
These errors can be straight-up mistakes in transcription, instrument failure, or data corruption, etc., but these are generally easy to spot and so I won’t dwell on this class of error. More subtly, most of the “observations” that we compare climate models to are actually syntheses of large amounts of raw observations. These data products are not just a function of the raw observations, but also of the assumptions and the “model” (usually statistical) that go into building the synthesis. These assumptions can relate to space or time interpolation, corrections for non-climate related factors, or inversions of the raw data to get the relevant climate variable. Examples of these kinds of errors being responsible for a climate model/observation discrepancy range from the omission of orbital decay effects in producing the UAH MSU data sets to the no-modern-analog problem in the CLIMAP reconstruction of ice age ocean temperatures.
In other fields, these kinds of issues arise in unacknowledged laboratory effects or instrument calibration errors. Examples abound, most recently for instance, the supposed ‘observation’ of ‘faster-than-light’ neutrinos.
2. Model Error
There are of course many model errors. These range from the inability to resolve sub-grid features of the topography, through approximations made for computational efficiency and the necessarily incomplete physical scope of the models, to inevitable coding bugs. Sometimes model-observation discrepancies can be easily traced to such issues. More often, however, model output is a function of multiple aspects of a simulation, and so even if the model is undoubtedly biased (a good example is the persistent ‘double ITCZ’ bias in simulations of tropical rainfall) it can be hard to associate this with a specific conceptual or coding error. The most useful comparisons are therefore those that allow for the most direct assessment of the cause of any discrepancy. “Process-based” diagnostics – where comparisons are made for specific processes rather than for specific fields – are becoming very useful in this respect.
When a comparison is being made within a specific experiment, though, there are a few additional considerations. Any particular simulation (and hence any diagnostic from it) arises from a collection of assumptions – in the model physics itself, in the forcings of the simulation (such as the history of aerosols in a 20th Century experiment), and in the initial conditions used. Each potential source of a mismatch needs to be examined independently.
3. Flawed Comparisons
Even with a near-perfect model and accurate observations, model-observation comparisons can show big discrepancies because the diagnostics being compared, while nominally similar, actually end up being subtly (and perhaps importantly) biased. This can be as simple as assuming an estimate of the global mean surface temperature anomaly is truly global when it in fact has large gaps in regions that are behaving anomalously. This can be dealt with by masking the model fields prior to averaging, but it isn’t always done. Other examples have involved assuming the MSU-TMT record can be compared to temperatures at a specific height in the model, instead of using the full weighting profile. Yet another might be comparing satellite retrievals of low clouds with the model averages, but forgetting that satellites can’t see low clouds if they are hiding behind upper level ones. In paleo-climate, simple transfer functions of proxies like isotopes can often be complicated by other influences on the proxy (e.g. Werner et al, 2000). It is therefore incumbent on the modellers to try to produce diagnostics that are commensurate with what the observations actually represent.
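The masking point is easy to demonstrate. Here is a minimal Python sketch (the field and the coverage are schematic inventions, not real data) of an area-weighted global mean computed with and without an observational mask:

```python
import numpy as np

def global_mean(field, lat, mask=None):
    """Area-weighted mean of a lat-lon field. If 'mask' (True where
    observations exist) is given, only observed boxes are averaged --
    i.e. the model field is masked to the observational coverage."""
    w = np.cos(np.radians(lat))[:, None] * np.ones_like(field)
    if mask is not None:
        w = np.where(mask, w, 0.0)
    return (field * w).sum() / w.sum()

# Schematic example: a field that warms most near the poles, and an
# "observational" data set with no coverage poleward of 70 degrees.
lat = np.linspace(-89, 89, 90)
field = 0.5 + 1.5 * (np.abs(lat[:, None]) / 90.0) ** 4 * np.ones((90, 180))
mask = np.broadcast_to((np.abs(lat) < 70)[:, None], field.shape)

print(f"true global mean:              {global_mean(field, lat):.3f}")
print(f"mean over observed area only:  {global_mean(field, lat, mask):.3f}")
```

If the poles warm fastest and the observations miss the poles, the “observed” global mean comes out too low – unless the model field is masked the same way before the comparison.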
Flaws in comparisons can be more conceptual as well – for instance, comparing the ensemble mean of a set of model runs to the single realisation of the real world, or comparing a single run, with its own weather, to a short-term observation. These are not wrong so much as potentially misleading – since it is obvious why there is going to be a discrepancy, albeit one that doesn’t have many implications for our understanding.
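A toy example makes the ensemble-mean point obvious – twenty synthetic “runs” sharing a forced trend but each with its own weather (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

years = np.arange(1980, 2014)
forced = 0.016 * (years - years[0])     # a common forced trend, degC
# Twenty synthetic "runs": same trend, each with its own weather noise
runs = forced + rng.normal(0, 0.12, (20, years.size))

ens_mean = runs.mean(axis=0)   # weather largely averages out here
one_run = runs[0]              # the real world is like a single run

print(f"scatter of ensemble mean about trend: {np.std(ens_mean - forced):.3f} degC")
print(f"scatter of a single run about trend:  {np.std(one_run - forced):.3f} degC")
```

The ensemble mean is far smoother than any single realisation, so short-term divergence between the two is expected even for a perfect model.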
The implications of any specific discrepancy therefore aren’t immediately obvious (for those who like their philosophy a little more academic, this is basically a rephrasing of the Quine/Duhem position on scientific underdetermination). Since any actual model prediction depends on a collection of hypotheses together, as do the ‘observation’ and the comparison, there are multiple chances for errors to creep in. It takes work to figure out where though.
The alternative ‘Popperian’ view – well encapsulated by Richard Feynman:
… we compare the result of the computation to nature, with experiment or experience, compare it directly with observation, to see if it works. If it disagrees with experiment it is wrong.
actually doesn’t work except in the purest of circumstances (and I’m not even sure I can think of a clean example). A recent obvious counter-example in physics was the fact that the ‘faster-than-light’ neutrino experiment has not falsified special relativity – despite Feynman’s dictum.
But does this exposition help in any current issues related to climate science? I think it does – mainly because it forces one to think about what the other ancillary hypotheses are. For three particular mismatches – sea ice loss rates being much too low in CMIP3, tropical MSU-TMT rising too fast in CMIP5, or the ensemble mean global mean temperatures diverging from HadCRUT4 – it is likely that there are multiple sources of these mismatches across all three categories described above. The sea ice loss rate seems to be very sensitive to model resolution and has improved in CMIP5 – implicating aspects of the model structure as the main source of the problem. MSU-TMT trends have a lot of structural uncertainty in the observations (note the differences in trends between the UAH and RSS products). And global mean temperature trends are quite sensitive to observational products, masking, forcings in the models, and initial condition sensitivity.
Working out what is responsible for what is, as they say, an “active research question”.
Update: From the comments:
“our earth is a globe
whose surface we probe
no map can replace her
but just try to trace her”
– Steve Waterman, The World of Maps
- M. Werner, U. Mikolajewicz, M. Heimann, and G. Hoffmann, "Borehole versus isotope temperatures on Greenland: Seasonality does matter", Geophysical Research Letters, vol. 27, pp. 723-726, 2000. http://dx.doi.org/10.1029/1999GL006075