Guest commentary from Tim Osborn, Tom Melvin and Keith Briffa, Climatic Research Unit, UEA
Records of tree-ring characteristics such as their width (TRW) and density (usually the maximum density of the wood formed towards the end of the growing season – the “maximum latewood density” – MXD) are widely used to infer past variations in climate over recent centuries and even millennia. Chronologies developed from sites near to the elevational or latitudinal tree lines often show sensitivity to summer temperature and, because of their annual resolution, absolute dating and relatively widespread nature, they have contributed to many local, continental and hemispheric temperature reconstructions. However, tree growth is a complex biological process that is subject to a range of changing environmental influences, not just summer temperature, and so replication, coherence and consistency across records and other proxies are an important check on the results.
Tree-ring records have greater replication (both within a site and between nearby sites) than other types of climate proxy. Good replication helps to minimise the influence of random localised factors when extracting the common signal, and it also allows the comparison of information obtained from different independent sets or sub-sets of data. If independent sets of data – perhaps trees with different mean growth rates or from different sites – show similar variations, then we can have greater confidence that those variations are linked to real variations in climate.
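As a rough illustration of this kind of consistency check, the sketch below splits a set of toy ring-width index series into two independent subsets, builds a mean chronology from each, and correlates them. The data, array layout and simple averaging are assumptions for illustration only, not the standardization used in our work.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: 500 years x 40 trees of ring-width indices that share a common
# "climate" signal plus tree-specific noise (purely illustrative values).
n_years, n_trees = 500, 40
common_signal = np.cumsum(rng.normal(0.0, 0.1, n_years))
indices = common_signal[:, None] + rng.normal(0.0, 1.0, (n_years, n_trees))

# Split the trees into two independent subsets and average each to form a
# simple mean chronology (real chronologies use standardized indices).
chron_a = indices[:, ::2].mean(axis=1)
chron_b = indices[:, 1::2].mean(axis=1)

# Strong agreement between independent subset chronologies suggests the
# shared variance reflects a common (climatic) signal rather than noise.
r = np.corrcoef(chron_a, chron_b)[0, 1]
print(f"correlation between independent subset chronologies: r = {r:.2f}")
```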
In a new QSR paper (Briffa et al., 2013; hereafter BEA13) we have used these approaches to re-assess the combined tree-ring evidence from the Yamal and Polar Urals region (Yamalia) of northern Siberia, considering the common signal in tree-growth changes at different sites and in subsets of data defined in other ways. Together with our Russian colleagues and co-authors, we have incorporated many new tree-ring data to increase the replication, updated the chronology to 2005, and reassessed the inferences about summer temperature change that can be drawn from these data. The paper is published as an open-access paper (no paywall) and supplementary information, including the raw tree-ring and instrumental temperature data, is available from our website.
Figure 1 illustrates our inferences about past summer temperature variations. Low tree-growth periods, for which the inferred summer temperatures are approximately 2.5°C below the 1961-90 reference, are apparent in the 15-year smoothed reconstructions (Figure 1d), centred around 1005, 1300 (Figure 1b), 1455 (Figure 1c), 1530, the 1810s and the 1880s; the cooling is most pronounced in the 1810s, when inferred temperatures for individual years reach 4°C or even 6°C below the reference (Figure 1a). These temperature estimates will be of interest for the current debate about the representation of volcanically-induced cooling in temperature reconstructions, and for testing climate model simulations.
There are numerous periods (Figure 1d) of one or two decades with relatively high growth (and inferred summer temperatures close to the 1961-90 level) but at longer timescales (Figures 1e and 1f) only the 40-year period centred at 250 CE appears comparable with 20th century warmth. This early warm period was both preceded and followed by periods of low ring width and so the central estimates of the temperature reconstruction averaged over the warmest 100-year period near the 3rd century CE (205-304 CE) are 0.4°C cooler than the 1906-2005 mean. Allowing for chronology and reconstruction uncertainty, we find that the mean of the last 100 years of the reconstruction is likely warmer than any century in the last 2000 years in this region.
Figure 1 (from Fig. 13 of BEA13). Summer temperature reconstructions based on either the Yamal ring-width chronology (red line, orange confidence intervals) or by combining information from the Yamal and Polar Urals ring-width chronologies and the Polar Urals density chronology (blue line, blue confidence intervals). The latter is shorter because the Polar Urals data are shorter and also has two versions that differ in how they are calibrated and in the summer temperature that they represent (in panels (a)-(e) it represents mean June–August temperature shown by the black dotted lines, while in panel (f) it represents mean June–July temperature shown by black continuous lines). Each panel shows a different time period and degree of smoothing; the values near to the end of the smoothed series are more uncertain than shown here due to the presence of end effects on the spline filters. The low-frequency agreement between the series is expected because the Yamal ring-width data are common to both reconstructions.
A response to the critics
The publication of our paper provides a timely opportunity to revisit and respond to a series of unfounded criticisms that have been levelled at our work in recent years, mostly originating from Steve McIntyre at the ClimateAudit blog, though they have been widely repeated and embellished by other commentators.
It is of course usual for results to be improved and superseded as science progresses. Our new Yamalia ring-width chronology differs from the Yamal chronology published by Briffa (2000) – see Figure 2a for a comparison. The most recent values are now lower (and the chronology extends a decade further), but so are the estimates around 1000 CE, so the differential between medieval and modern growth is hardly changed. The period of high growth centred near 250 CE (noted above) is also relatively unchanged, and is now the most prominent pre-20th-century period of anomalous growth in the last 2000 years. These changes reflect genuine scientific progress, not – as our critics have claimed – a previously deceptive chronology. They arise from additional data collection and, particularly, from developments in tree-ring standardization methods (see the paper for details).
Figure 2. (a) Comparison of the Briffa (2000) Yamal ring-width chronology (red) and the new Yamalia ring-width chronology (black). (b) Comparison of the new Yamalia ring-width chronology (black) and two chronologies that have been promoted by critics of our work, but which turn out to be biased: the Polar Urals “update” chronology (purple; from Esper et al., 2002) and the Yamal chronology with modern data coming only from the Khadyta River site (blue). All series were scaled to have unit variance before being smoothed with a 10-year filter.
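For readers who want to reproduce this kind of presentation, a minimal sketch of the scaling and smoothing described in the caption is given below. A plain running mean stands in for the 10-year filter actually used; that substitution, and the array layout, are assumptions for illustration.

```python
import numpy as np

def scale_and_smooth(chronology, window=10):
    """Scale a chronology (1-D array) to zero mean and unit variance,
    then apply a simple 10-year running mean as a stand-in for the
    smoothing filter used in the figure."""
    scaled = (chronology - chronology.mean()) / chronology.std()
    kernel = np.ones(window) / window
    return np.convolve(scaled, kernel, mode="same")
```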
Figure 2b compares the new Yamalia chronology with two alternative chronologies heavily promoted by McIntyre and others – the so-called Polar Urals “update” chronology and a Yamal chronology using modern samples from the Khadyta River site. Both chronologies present a different picture of the difference between peak medieval and peak modern growth rates, with elevated growth around 1000 CE and suppressed growth in the 20th century. Our paper demonstrates that these two alternative chronologies are flawed.
The real Yamal deception
Some background is perhaps needed regarding our preferred chronologies. Briffa et al. (1995) developed chronologies from Polar Urals ring-width and density data. Subsequently, Briffa (2000) presented a 2000-year ring-width chronology from nearby Yamal, which had much better replication (more trees) than the Polar Urals data and was therefore preferred. The Polar Urals data were later supplemented by additional samples, which were used by Esper et al. (2002). Even including these additional samples, the Yamal chronology remained better replicated: of the 1213 overlap years, the Briffa (2000) Yamal chronology has 4 years with samples from fewer than 10 trees, while the “updated” Polar Urals chronology has 264 years with data from fewer than 10 trees, many of them in the medieval period (see here for more details). The additional sub-fossil data used in our new paper further increase the replication of the Yamal chronology compared with the Polar Urals chronology (Figure EC1 in the SI of the new paper). On the basis of replication and the strength of the common signal, the Yamal record was, and remains, superior to the Polar Urals chronology.
1: Why we didn’t use the Polar Urals “update”
We have been criticised for not archiving the Polar Urals “update” data. The “update” data were in fact archived at the ITRDB thirteen years ago. We have also been criticised for not publishing an updated Polar Urals chronology using these data (and accused of worse here). According to our critics, the reason we did not do so was that the “update” does not support our supposedly desired message of unprecedented modern warmth (because it appears to suggest that tree growth rates were greater during earlier times, including the medieval period; Figure 2b, compare purple and black lines).
However, as reported in BEA13, it turns out that during the medieval period these Polar Urals “updates” are dominated by samples taken from the root collars of trees. Ring widths measured in such root-collar samples tend to be systematically larger than equivalent rings measured higher in the boles (stems) of the same trees. The reason for larger tree-ring widths during medieval times in the Polar Urals “updates” is now clear: it is because more samples were from the root collar with their inherently wider rings. Interpreting this as evidence for warmer temperatures is wrong.
Conclusion: the so-called “Polar Urals update” chronology is severely biased and should not be used as evidence of past changes in temperature; nor should our critics present it as evidence that we had committed scientific fraud by failing to publish a chronology using these data.
2: The Yamal record was not biased by omitting data
CRU has been accused of deception by presenting a Yamal tree-ring chronology biased by the omission of otherwise suitable data. A particular theme, originating again from ClimateAudit, is that tree-ring data from Khadyta River were not used, and that including them would have dramatically altered the character of the chronology – and of the NH temperature reconstructions that used the Yamal chronology (Figure 2b, compare blue and black lines).
As reported in BEA13, through collaboration with our Russian colleagues, who have extensive knowledge of tree-rings in this region, we have learnt that the Khadyta River site is affected by particular site conditions that differ from other sites in this region, and may be influenced by changing permafrost. Certainly the trees show reduced growth and appear to be unhealthy, with some even dying. Thus the Khadyta River data that some claimed to be more representative than the data we used turn out to have a common signal that is inconsistent with the majority of site chronologies in this region. They could potentially bias the Yamal chronology had they been included, and for this reason we excluded these data from the main analysis in the new paper.
Conclusion: claims of a deceptive and biased Yamal chronology turn out to rely on outlier data that should be omitted; our new research, based on a greatly expanded dataset, supports the finding that tree growth (and inferred summer temperature) in this region was likely greater in the last 100 years than in any previous century in the last 2000 years.
3: We did not withhold a combined Yamal and Polar Urals chronology
Separately, some of our incomplete and unpublished work on the Yamal and Polar Urals tree-ring data has been the subject of multiple requests under UK FOI/EIR legislation. (See this previous post for background). To be clear, this was not a request for the raw data that we were using in this area of northern Russia – the raw data were and are freely available. Instead, the request was for a tree-ring chronology that formed part of work that was, at the time, still ongoing.
The EIR has a (very sensible) exemption for material which is unfinished, incomplete or still in the course of completion. Our university (UEA) therefore refused the requests to release our incomplete research (see responses here and here). Steve McIntyre appealed and UEA reconsidered the issues but upheld the original decision. McIntyre then complained to the Information Commissioner’s Office (ICO). The ICO upheld UEA’s decision and rejected McIntyre’s complaint. McIntyre then appealed to the First-Tier Information Tribunal. Two weeks ago, after more than two years defending our right to publish our research at a time when we considered it to be complete rather than at a time dictated to us by Steve McIntyre, the Information Tribunal finally dismissed McIntyre’s appeal.
The research that was the subject of this information request has now been – as we said all along that it would be – completed and published, coincidentally, within days of the Information Tribunal’s decision. Our publication of this work contradicts McIntyre’s explicit accusations that we were hiding the requested chronology because it would have exposed long-standing scientific fraud on our part. These accusations were, and remain, baseless and mistaken.
Over the years, McIntyre has advanced a number of other criticisms of our tree-ring work in northwestern Eurasia. We note here that these too are wrong: (1) the original Polar Urals chronology was not wrongly cross-dated, as claimed in a 2005 submission to Nature by McIntyre and McKitrick. When we demonstrated this in our response, Nature decided to publish neither their comment nor our response. It is worth noting that neither this rejection nor any acknowledgement of his erroneous conclusions was ever mentioned by McIntyre on his blog. (2) The Grudd (2008) Tornetrask chronology, promoted by some because of its elevated medieval growth (and implied much greater warmth) relative to the modern period, is biased by the issues noted in Melvin et al. (2013).
In conclusion, criticisms of our work have been based on misconceptions and misinformation. The so-called Polar Urals “update” chronology promoted by our critics turns out to be biased by the inclusion of samples from tree root collars. The Khadyta River tree-ring data, whose exclusion from the Yamal chronology was portrayed as a severe example of cherry-picking to obtain a pre-conceived outcome, are from trees that appear to be dying and do not share a common signal with the other sites in the region. An updated Tornetrask chronology, with apparently elevated medieval warmth, turns out to be biased by combining incompatible groups of measurements.
That the critics have promoted a series of results that have turned out to be flawed is unfortunate but not in itself reason to complain – as science progresses it is usual for results to be improved and superseded. What can be condemned, however, is the long campaign of allegations of dishonesty and scientific fraud made against us on the basis of these false claims. That is the most disquieting legacy of Steve McIntyre and ClimateAudit. The real Yamal deception is their attempt to damage public confidence in science by making speculative and scandalous claims about the actions and motivations of scientists while cloaking them in a pretense of advancing scientific knowledge.
Links to other relevant information
- K.R. Briffa, T.M. Melvin, T.J. Osborn, R.M. Hantemirov, A.V. Kirdyanov, V.S. Mazepa, S.G. Shiyatov, and J. Esper, "Reassessing the evidence for tree-growth and inferred temperature change during the Common Era in Yamalia, northwest Siberia", Quaternary Science Reviews, vol. 72, pp. 83-107, 2013. http://dx.doi.org/10.1016/j.quascirev.2013.04.008
- K.R. Briffa, "Annual climate variability in the Holocene: interpreting the message of ancient trees", Quaternary Science Reviews, vol. 19, pp. 87-105, 2000. http://dx.doi.org/10.1016/S0277-3791(99)00056-6
- K.R. Briffa, P.D. Jones, F. Schweingruber, S. Shiyatov, and E. Cook, "Unusual twentieth-century summer warmth in a 1,000-year temperature record from Siberia", Nature, vol. 376, pp. 156-159, 1995. http://dx.doi.org/10.1038/376156a0
- J. Esper, "Low-Frequency Signals in Long Tree-Ring Chronologies for Reconstructing Past Temperature Variability", Science, vol. 295, pp. 2250-2253, 2002. http://dx.doi.org/10.1126/science.1066208
- H. Grudd, "Torneträsk tree-ring width and density ad 500–2004: a test of climatic sensitivity and a new 1500-year reconstruction of north Fennoscandian summers", Climate Dynamics, vol. 31, pp. 843-857, 2008. http://dx.doi.org/10.1007/s00382-007-0358-2
- T.M. Melvin, H. Grudd, and K.R. Briffa, "Potential bias in 'updating' tree-ring chronologies using regional curve standardisation: Re-processing 1500 years of Tornetrask density and ring-width data", The Holocene, vol. 23, pp. 364-373, 2013. http://dx.doi.org/10.1177/0959683612460791
June’s open thread…
Guest post from PubPeer.com
The process of reviewing published science is constantly occurring and is now commonly called post-publication peer review. It occurs in many places, including on blogs such as this one, in review articles, at conferences around the world, and even on the websites of some journals, where it has sometimes been actively encouraged. However, the process of recording and searching these comments is, unfortunately, inefficient and underused by the larger scientific community. To successfully impact the publication process, this database of knowledge has to accomplish two important tasks. First, it requires participation by a large part of a given scientific community, so that it reflects an average impression rather than an outlier’s impression. Second, it requires that the collective knowledge is centralized and easy to search, so that one can find out what the community collectively thinks about an individual paper or a body of work. A recent initiative, the San Francisco Declaration on Research Assessment (DORA), echoes many of these same concerns.
In an attempt to assemble such a database, a team of scientists has put together a website called PubPeer.com that is searchable and encourages participation by the larger scientific community. With a critical mass of usage, an organized system of post-publication review could improve both the process of scientific publication and the research that underlies those publications.
Those of us involved in the creation of PubPeer.com believe that in an ideal world, a scientist’s main goal would be to discover something interesting about the world and simply report it to other scientists to use and build upon. This idealistic view of the scientific process is, however, not matched in reality because, for academic scientists, our publications count for much more than a simple contribution to the scientific record. For example, the majority of candidates are eliminated from consideration for tenure-track positions at major universities based on the names of the journals that have published their recent findings.
Review committees use this method because publications are the best measure of past and potential scientific output, but by potentially overvaluing “high impact” journal names, these committees and study sections effectively defer to journal editors to help them choose the best candidates for jobs and grants. However, these journals select their articles based on more than just good science – the papers also need to be of ‘wider interest’, and this can sometimes skew publications towards ‘exciting’ results over those that are more measured, and perhaps more likely to be correct (for instance). The sometimes disproportionate attention given to a high-profile paper also makes it a tempting target for more unscrupulous scientists.
It’s never going to be possible for us to thoroughly read all of the papers submitted in response to a job advertisement, nor all of the papers referenced in grant applications, but we can easily reduce the importance that journal names play in decisions and replace it with something that is more meaningful and directly in our hands rather than in the hands of publishers. After reading any publication, we all have impressions about whether the reported observations are useful, interesting, elegant, irrelevant, flawed, etc. If a particular scientific field interested in a given publication were able to compile all of its impressions of that publication, that collective information would be infinitely more useful to search committees and study sections than the name of the journal in which it was published.
Outlined below are a few aspects of PubPeer.com that differentiate it from the current post-publication review systems and which will hopefully make it more widely used.
- A key issue that we have decided on is the importance of anonymity. One of the reasons that we have never commented on articles directly on journal websites is that the colleagues whose publications we are most qualified to comment on are likely reviewing our publications and grant proposals. Even the most well-intentioned criticism could irk these potential reviewers. Since publications are so precious to everyone’s future career advancement, there is a huge psychological barrier for early-stage scientists to attach their names to any comments that could be considered critical.
Therefore, in order to encourage as much participation in this post-publication review process as possible, PubPeer allows comments to be left anonymously if someone is so inclined. Critics of this feature sometimes email us to point out that anonymity allows for baseless slander, or to proclaim that a commenter’s name is essential for judging the validity of a comment. We strongly disagree with this second point because good comments are good regardless of whether they come from a senior scientist or a graduate student. We can all judge for ourselves the content of comments, and on PubPeer it is possible to vote the good comments up and the bad comments down into the noise, so that the community as a whole can decide together what is worth paying attention to. Baseless defamation, rumors, and ad hominem attacks are not tolerated at all and are immediately removed from the site.
The people involved with PubPeer are all active scientists and we are trying to remain anonymous for the time being for several reasons: 1) we can imagine scenarios in which pressure could be put on us to remove/alter comments if our identities were known and 2) we would like to protect our families and private bank accounts from the more litigious among our readers.
- A main drawback of the current practice of post-publication peer review is that the reviews can be spread across many different blogs and journal websites. If one wants to know what the community thinks of a given body of work (whether it be a discipline, an author’s output, a university department, etc.) it takes a major time investment to track down the information from all of these different sources. PubPeer provides a centralized and easily searchable database that contains comments on all published articles.
- PubPeer also provides a system of alerts. To be effective, authors and other interested readers should be able to be alerted to comments on their favorite publications or topics. PubPeer automatically notifies corresponding authors of new comments on their articles, and anyone can set up email alerts for articles they find interesting.
We’d like to thank realclimate.org for this invitation to explain PubPeer and we welcome any suggestions, comments, criticism, or questions on the contact page pubpeer.com/contact or in the comments section below. Many conversations have already started on the site both with and without author responses that are sometimes quite detailed (a list of all of the most recent articles receiving comments can be found at pubpeer.com/recent). Some comments have already led to important corrections in high profile studies (see also Nature News, Science Insider, Labrigger). We hope that this can be even further expanded in future.
This month’s open thread.
There has been an unusual surge of interest in estimates of the climate sensitivity based on the last decade’s worth of temperature measurements, and a lengthy story in the Economist tries to argue that the climate sensitivity may be lower than previously estimated. I think its conclusion is somewhat misguided because it missed some important pieces of information (also see skepticalscience’s take on this story here).
While the Economist referred to some unpublished work, it missed a new paper by Balmaseda et al. (2013) which provides more in-depth insight. Balmaseda et al. suggest that the recent years may not have much effect on the climate sensitivity after all, and according to their analysis, it is the winds blowing over the oceans that may be responsible for the ‘slow-down’ presented in the Economist.
It is well-known that changes in temperature on decadal time scales are strongly influenced by natural and internal variations, and should not be confused with a long-term trend (Easterling and Wehner, 2009;Foster and Rahmstorf, 2011).
An intensification of the trade winds has affected the surface ocean currents known as the subtropical gyres, and these changes have resulted in a predominance of the La Nina state. The La Nina phase is associated with a lower global mean temperature than usual.
Balmaseda et al.’s results also suggested that a negative phase of the Pacific Decadal Oscillation (PDO) may have made an imprint on the most recent years. In addition, they found that the deep ocean has warmed over the recent years, while the upper 300m of the oceans has ‘stabilised’.
The oceans can be compared to a battery that needs to be recharged after going flat. After the powerful 1997-98 El Nino, heat flowed out of the tropical oceans in order to heat the atmosphere (evaporative cooling) and the higher latitudes. The warming resumed after the ‘deflation’, but something happened after 1998: since then, the warming has involved the deep ocean to a much greater extent. A weakening of the Atlantic meridional overturning circulation (MOC) may have played a role in the deep ocean warming.
The recent changes in these decade-scale variations appear to have masked the real accumulation of heat on Earth.
The new knowledge from this paper, the way I read it, is the revelation of the role of winds in the vertical mixing/diffusion of heat, based on a new analysis of the world’s oceans. Their results were derived through a set of different experiments testing the sensitivity to various assumptions and choices made for data inclusion and the ocean model assimilation set-up.
The analysis involved a brand new ocean analysis (ORAS4; Balmaseda et al., 2013) based on an optimal use of observations, data assimilation, and an ocean model forced with state-of-the-art description of the atmosphere (reanalyses).
By running a set of different experiments with the ocean model, varying conditions such as the surface winds and the types of data included, they explored what influence these choices have on their final conclusion.
The finding that the winds play a role for the state of the warming may not be surprising to oceanographers, although it may not necessarily be the first thing a meteorologist may consider.
Other related discussions: OSS
- M.A. Balmaseda, K.E. Trenberth, and E. Källén, "Distinctive climate signals in reanalysis of global ocean heat content", Geophysical Research Letters, pp. n/a-n/a, 2013. http://dx.doi.org/10.1002/grl.50382
- D.R. Easterling, and M.F. Wehner, "Is the climate warming or cooling?", Geophysical Research Letters, vol. 36, 2009. http://dx.doi.org/10.1029/2009GL037810
- G. Foster, and S. Rahmstorf, "Global temperature evolution 1979–2010", Environmental Research Letters, vol. 6, pp. 044022, 2011. http://dx.doi.org/10.1088/1748-9326/6/4/044022
- M.A. Balmaseda, K. Mogensen, and A.T. Weaver, "Evaluation of the ECMWF ocean reanalysis system ORAS4", Quarterly Journal of the Royal Meteorological Society, pp. n/a-n/a, 2013. http://dx.doi.org/10.1002/qj.2063
Guest commentary by Darrell Kaufman (N. Arizona U.)
In a major step forward in proxy data synthesis, the PAst Global Changes (PAGES) 2k Consortium has just published a suite of continental scale reconstructions of temperature for the past two millennia in Nature Geoscience. More information about the study and its implications are available at the FAQ on the PAGES website and the datasets themselves are available at NOAA Paleoclimate.
The main conclusion of the study is that the most coherent feature in nearly all of the regional temperature reconstructions is a long-term cooling trend, which ended late in the 19th century, and which was followed by a warming trend in the 20th C. The 20th century in the reconstructions ranks as the warmest or nearly the warmest century in all regions except Antarctica. During the last 30-year period in the reconstructions (1971-2000 CE), the average reconstructed temperature among all of the regions was likely higher than at any time in at least ~1400 years. Interestingly, temperatures did not fluctuate uniformly among all regions at multi-decadal to centennial scales. For example, there were no globally synchronous multi-decadal warm or cold intervals that define a worldwide Medieval Warm Period or Little Ice Age. Cool 30-year periods between the years 830 and 1910 CE were particularly pronounced during times of weak solar activity and strong tropical volcanic eruptions, especially when both phenomena occurred simultaneously.
Figure: Thirty-year mean relative temperatures for the seven PAGES 2k continental-scale regions arranged vertically from north to south.
The origin of the ‘PAGES 2k Network’ and its activities can be found here; the network consists of nearly 80 individual collaborators. The Consortium’s collection of local expertise and proxy records was transformed into a synthesis by a smaller team of lead authors, but the large author list recognizes that the expertise of the wider team was essential in increasing the range of data used and interpreting it.
In addition to the background available at the FAQ, I think it is important to also highlight some aspects of the analytical procedures behind the study and the vital contributions of three young co-authors.
The benefit of the ‘regions-up’ approach embodied in the PAGES-2k consortium is that it made it easy to take advantage of local expertise and to include a large amount of new data that would have been more difficult to assemble for a centralized global reconstruction. However, being decentralized, the groups in different regions opted for different methodologies for building their default reconstructions. While justifiable, this does raise a question about the impact the different methodologies would have. To address this, the synthesis team (ably led by Nicholas McKay) applied three particular reconstruction methods to all of the regions, as well as looking at basic area-averaged and weighted composites. McKay further analyzed the site-level records individually, without many of the assumptions that underlie the regional temperature reconstructions. These results show that the long-term cooling trend and recent warming are dominant features of the dataset however it is analyzed. A sizable fraction of the records do not conform to the continental averages, highlighting the spatial variability and/or the noise level in specific proxies.
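As a rough sketch of the simplest of these approaches, an area-weighted composite of standardized regional series might be computed as below. The dictionary layout and weighting by nominal region area are assumptions made for illustration, not the consortium’s actual procedure.

```python
import numpy as np

def area_weighted_composite(regional_series, region_areas):
    """Combine regional temperature series (a dict of equal-length 1-D
    arrays in standardized units) into a single composite, weighting
    each region by its nominal surface area."""
    names = list(regional_series)
    weights = np.array([region_areas[name] for name in names], dtype=float)
    weights /= weights.sum()                      # normalise the weights
    stack = np.vstack([regional_series[name] for name in names])
    return weights @ stack                        # weighted mean per year
```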
One of the new procedures used to reconstruct temperature is an approach developed by Sami Hanhijärvi (U. Helsinki), which was also recently applied to the North Atlantic region. The method (PaiCo) relies on pairwise comparisons to arrive at a time series that integrates records with differing temporal resolutions and relaxes assumptions about the relation between the proxy series and temperature. Hanhijärvi applied this procedure to the proxy data from each of the continental-scale regions and found that reconstructions using different approaches are similar and generally support the primary conclusions of the study.
Regions where this study helps clarify the temperature history are mainly in the Southern Hemisphere. We include new and updated temperature reconstructions from Antarctica, Australasia and South America. The proxy records from these three regions come from many sources, ranging from glacier ice to trees and from lake sediment to corals. Raphael Neukom (Swiss Federal Research Institute WSL and University of Bern) played a key role in the analyses across the Southern Hemisphere. He used principal components regression (Australasia), a scaled composite (Antarctica), and an integration of these two approaches (South America) to create the time series of annual temperature change.
Inevitably, assembling such a large and diverse dataset involves many judgement calls. The PAGES-2k consortium has tried to assess the impact of these structural decisions by using multiple methods, but we hope that this synthesis is really just the start of a more detailed analysis of regional temperature trends and we welcome constructive suggestions for improvements.
- M. Ahmed, K.J. Anchukaitis, A. Asrat, H.P. Borgaonkar, M. Braida, B.M. Buckley, U. Büntgen, B.M. Chase, D.A. Christie, E.R. Cook, M.A.J. Curran, H.F. Diaz, J. Esper, Z. Fan, N.P. Gaire, Q. Ge, J. Gergis, J.F. González-Rouco, H. Goosse, S.W. Grab, N. Graham, R. Graham, M. Grosjean, S.T. Hanhijärvi, D.S. Kaufman, T. Kiefer, K. Kimura, A.A. Korhola, P.J. Krusic, A. Lara, A. Lézine, F.C. Ljungqvist, A.M. Lorrey, J. Luterbacher, V. Masson-Delmotte, D. McCarroll, J.R. McConnell, N.P. McKay, M.S. Morales, A.D. Moy, R. Mulvaney, I.A. Mundo, T. Nakatsuka, D.J. Nash, R. Neukom, S.E. Nicholson, H. Oerter, J.G. Palmer, S.J. Phipps, M.R. Prieto, A. Rivera, M. Sano, M. Severi, T.M. Shanahan, X. Shao, F. Shi, M. Sigl, J.E. Smerdon, O.N. Solomina, E.J. Steig, B. Stenni, M. Thamban, V. Trouet, C.S. Turney, M. Umer, T. van Ommen, D. Verschuren, A.E. Viau, R. Villalba, B.M. Vinther, L. von Gunten, S. Wagner, E.R. Wahl, H. Wanner, J.P. Werner, J.W. White, K. Yasue, and E. Zorita, "Continental-scale temperature variability during the past two millennia", Nature Geoscience, vol. 6, pp. 339-346, 2013. http://dx.doi.org/10.1038/ngeo1797
- S. Hanhijärvi, M.P. Tingley, and A. Korhola, "Pairwise comparisons to reconstruct mean temperature in the Arctic Atlantic Region over the last 2,000 years", Climate Dynamics, 2013. http://dx.doi.org/10.1007/s00382-013-1701-4
It is well known that ice shelves on the Antarctic Peninsula have collapsed on several occasions in the last couple of decades, that ice shelves in West Antarctica are thinning rapidly, and that the large outlet glaciers that drain the West Antarctic ice sheet (WAIS) are accelerating. The rapid drainage of the WAIS into the ocean is a major contributor to sea level rise (around 10% of the total, at the moment).
All of these observations match the response, predicted in the late 1970s by glaciologist John Mercer, of the Antarctic to anthropogenic global warming. As such, they are frequently taken as harbingers of greater future sea level rise to come. Are they?
Two papers published this week in Nature Geoscience provide new information that helps to address this question. One of the studies (led by me) says “probably”, while another (Abram et al.) gives a more definitive “yes”.
The somewhat different details of the two papers appear to have hopelessly confused many journalists (though the Christian Science Monitor has an excellent article, despite a somewhat misleading headline), but both are really just telling different aspects of the same story.
There is already strong evidence that anthropogenic forcing has played a significant role in the collapse of ice shelves on the Antarctic Peninsula, caused by significant melting at the surface during summer. The warm summer air temperatures have been related to an increase in the “Southern Annular Mode” (SAM), essentially the strength of the circumpolar westerlies. Increased CO2 is clearly part of the forcing of the observed positive trend in the SAM, though a larger player is likely to be ozone depletion in the stratosphere. Nevertheless, the short length of the observations – of both the ice sheet and climate – makes it difficult to assess to what extent these changes are unusual. There is evidence for one ice shelf that a collapse like that observed in the 1990s has not occurred since at least the mid-Holocene, but comparable evidence is lacking elsewhere.
The connection between climate change and glacier response is more complex for the West Antarctic Ice Sheet than the Peninsula. As on the Peninsula, temperatures over the WAIS have risen significantly in the last few decades, but this is a symptom, rather than a cause. For WAIS, the culprit for the rapid thinning of ice shelves is increased delivery of warm ocean water to the base of the ice shelves. This isn’t due to a warming ocean (though the deep water off the Antarctic coast line is indeed warming), but to changes in the winds that have forced more circumpolar deep water onto the continental shelf. Circumpolar deep water, at about +2°C, is very hot compared with the in situ melting point of glacier ice. In a series of papers, we’ve shown that the warmer temperatures observed over the WAIS are the result of those same atmospheric circulation changes, which are not related to the SAM, but rather to the remote forcing from changes in the tropical Pacific: changes in the character of ENSO (Steig et al., 2012; Ding et al., 2011; 2012).
As on the Peninsula, there is evidence of anthropogenic forcing for the WAIS too: anomalous conditions since the 1980s in the tropical Pacific are characteristic of the expected fingerprint of global warming (e.g. Trenberth and Hoar, 1997; Collins et al., 2010). Still, as on the Peninsula, the short length of the instrumental observations make it difficult to say anything very definitive about long term trends.
Both our paper and that of Abram et al. add to our understanding of recent climate, glacier, and ice sheet changes in Antarctica by placing them into a longer-term context. Amidst the continuous chatter in the blogosphere about the strengths and limitations of “multiproxy” studies, these studies may be a refreshing return to simpler methods relying on just one type of “proxy”: data from ice cores. While ice core data aren’t perfect proxies of climate, they come pretty close, and aren’t subject to the same kinds of uncertainties that are unavoidable in biological proxies like tree rings.
Our study is the culmination of about a decade of ice core drilling and analysis in West Antarctica, through the ITASE program and the WAIS Divide ice core project. I’m the lead author on the paper but the author list is rightfully long; a lot of people have been involved in drilling and analyzing cores all across Antarctica.
The only “proxy” we use are oxygen isotope ratios. Oxygen isotope ratios (δ18O) in polar snow are well known to be correlated with temperature, and the underlying physics of the relationship is very well understood. In our study, we compile all the available δ18O data from high-resolution well-dated ice cores in West Antarctica and take a look at the average variability through the last 200 years. We also include data from the new WAIS Divide ice core that goes back 2000 years (actually, this core goes back to 68,000 years, and is annually resolved back to at least 30,000 years, but that’s a story for another time).
The average of the records for the last 50 years looks very much like temperature records from the last 50 years, with scaling of about 0.5‰/°C, exactly as expected, providing yet another piece of evidence that recent warming in West Antarctica has been both rapid and widespread (see the figure below). A critical point, though, is that it isn’t necessary to use the δ18O data as a proxy for temperature. Because the physics controlling δ18O is well understood, and we are able to implement δ18O in climate models, we can actually just use δ18O as a proxy for, well, δ18O. This simplifies the problem from “how significant is the recent warming?” to “how significant is the recent rise in δ18O”? We’ve shown previously, and show again in this paper, that δ18O in West Antarctic precipitation reflects the relevant changes in atmospheric circulation just as well (if not better) than temperature or other conventional climate variables do. Putting δ18O into a GCM and using the same experiments that reproduce the observed warming over West Antarctica also produces the observed δ18O increase in the last 50 years.
Figure 1. (a) Comparison of averaged δ18O (blue) across West Antarctica with the recent temperature record of Bromwich et al. (2013) from central West Antarctica (yellow). The light blue background is the decadal smoothed values +/- 1 standard error assuming Gaussian statistics. (b) Number of records used, and probability that the decadal average is as elevated as the 1990s (green).
Data sources: Most of the data for this figure have been available at http://nsidc.org/data/NSIDC-0425.html for some time. There’s a new location (which will link to the old one) where more recent data sets will be placed, but it’s not all up yet: http://nsidc.org/data/nsidc-0536.html.
Our results show that the strong trend in δ18O in West Antarctica in the last 50 years is largely driven by anomalously high δ18O in the most recent two decades, particularly in the 1990s (less so the 2000s). This is evident in the temperature data as well (top panel of the figure). The 1990s were also very anomalous in the tropics — there were several large long-lived El Niño events with a strong central tropical Pacific expression, as well as only very weak La Niña events. As in the tropics, so in West Antarctica: the 1990s were likely the most anomalous decade of the last 200 years.
Our results thus show that, indeed, recent decades in West Antarctica, which have been characterized by very rapid warming, and very rapid loss of ice from the West Antarctic Ice Sheet, are highly unusual. Nevertheless, some caution is in order in interpreting this to mean that current rates of rapid ice loss from West Antarctica represent a long-term trend. What we’ve observed is unusual, but it is also dominated by decadal climate variability, and can’t be considered “unprecedented”. Furthermore, our statistical confidence that recent decades are truly exceptional is low. Our data suggest that there is about a 30% chance the 1940s were just as anomalous as the 1990s, and the 1830s have about a 10% chance of being like the 1990s. Based on the relatively small amount of available evidence from the tropics, both the 1940s and the 1830s were similarly characterized by long-lived El Niños. Looking at the very long term record from the WAIS Divide ice core, it appears that similar conditions could have occurred about once per century over the last 2000 years. Hence our answer to the question, “are the observations of the last few decades a harbinger of continued ice sheet collapse in West Antarctica?”, is tentative: “Probably”.
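To make statements like “a 30% chance the 1940s were just as anomalous as the 1990s” concrete, one simple, purely illustrative approach is a Monte Carlo comparison of two uncertain decadal means. The Gaussian uncertainties and the example numbers below are assumptions for the sketch, not the method or values used in the paper.

```python
import numpy as np

def prob_decade_a_matches_b(mean_a, se_a, mean_b, se_b, n=100_000, seed=0):
    """Probability that decade A's true anomaly is at least as large as
    decade B's, assuming independent Gaussian errors on both decadal means."""
    rng = np.random.default_rng(seed)
    a = rng.normal(mean_a, se_a, n)
    b = rng.normal(mean_b, se_b, n)
    return float(np.mean(a >= b))

# Hypothetical anomalies and standard errors, purely for illustration:
print(prob_decade_a_matches_b(0.4, 0.25, 0.6, 0.25))
```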
Anyone expecting a more dramatic result need only turn to the other new ice core paper in Nature Geoscience. Last year, Rob Mulvaney and others from the British Antarctic Survey (BAS), along with French, American, and German colleagues, reached a very similar conclusion to ours, from an ice core from James Ross Island, on the northern Antarctic Peninsula. We discussed that paper at Realclimate last year. With δ18O data alone, it was possible to demonstrate only that recent warming on James Ross Island was “unusual”. The new paper, led by Nerelie Abram, adds a record of melt layers in the ice core to the assessment. The findings: a veritable Antarctic ice hockey stick.
Figure 2. δ18O (scaled to temperature) and melt layer frequency from the James Ross Island ice core.
Abram et al.’s paper is elegant in its simplicity. The key thing that matters to the ice shelves on the Antarctic Peninsula is how much melting occurs in summer, and this is almost exactly what Abram et al. are looking at. I say “almost” because formation of melt layers requires both that melting occurs and that it gets preserved, which depends a bit on the snow structure, the previous winter temperature, etc. But the results are unequivocal: there’s about 5 times the fraction of melt layers in the core as there has been on average over previous decades, and at least twice the maximum of any time before about the 1950s. The amount of melting occurring now is greater than at any time in the past 1000 years. If there has ever been a question about whether the “hockey stick” shape of Northern Hemisphere temperatures extends to at least some areas of the Southern Hemisphere, this record provides a decisive and positive answer.
Why the difference between the Peninsula and the WAIS? After all, both locations are warming at about the same rate. We could speculate that if there were melt layers in the WAIS cores, they would also show a significant increase like the James Ross Island core does. (It’s too cold at all the WAIS sites to have summer melting at all, so such information isn’t available.) I don’t think that is likely though. More important is the specific location of James Ross Island, on the eastern side of the Antarctic Peninsula. On the western Antarctic Peninsula, temperature trends are greatest in winter and spring, just as they are over the WAIS, and we’ve argued elsewhere that the causes are similar: changes in regional circulation forced by anomalous conditions in the tropics (Ding and Steig, in press). But it is on the eastern Peninsula that the most rapid summer warming has occurred, and where the surface-melting has caused ice shelf collapse (indeed, James Ross Island wasn’t really an island until 1995, when the Prince Gustav ice shelf collapsed). Both statistical assessments and modeling results show that the trend in the SAM accounts for this warming trend. As I noted in the introduction to this post, the SAM trend is partly explained by ozone depletion in the stratosphere, and the most clearly anomalous melt in the James Ross Island core occurs after the late 1970s, about the time the ozone hole appeared. But the melt data also show that melting has increased nearly monotonically since the 1930s, well before the advent of the ozone hole. As in West Antarctic δ18O, there was a bit of an increase in melt in the 1830s and the 1940s at James Ross Island, perhaps also ENSO-related, but these little bumps pale in comparison with the amount of melting occurring since the 1950s.
So what does all this mean for the fate of Antarctic Peninsula glaciers and the West Antarctic ice sheet? Both our paper and the Abram et al. paper add substantial new evidence that something rather unusual is occurring in Antarctica. It is not just happenstance that rapid ice sheet, glacier, and ice shelf changes are occurring now, when we have finally begun to observe them closely. Rather, these changes are occurring along with what is happening to the rest of the planet. That said, it appears that we have not yet driven West Antarctic climate (nor West Antarctic glaciers) definitively beyond what might be expected from natural variability alone. In particular, I won’t be surprised if continued decade-to-decade variability in atmospheric circulation results in more, and less, intrusion of circumpolar deep water onto the continental shelf, and to more, and less, rapid thinning of ice shelves in West Antarctica*. On the Peninsula, though, it seems very clear that we have already pushed the system well beyond “normal”, and into conditions reminiscent of the mid-Holocene. I don’t think we’re going to see a return to “normal” conditions any time soon. It’s worth noting that most model projections suggest that the SAM trend may level off for a while as the ozone hole gradually declines, but those same model projections suggest the SAM trend will recover as CO2 continues to rise. See, e.g., Thompson et al. (2011).
The real take home message here is that the ice loss from the WAIS and from the Antarctic Peninsula that have been observed in the last few decades are indeed likely to be harbingers of things to come. The very rapid rate of change in West Antarctica that we’ve seen over the last few decades is clearly overprinted by substantial decadal variability, so caution is in order in projecting that rate forward in time. The magnitude of the century scale trend will depend quite a bit, in my view, on what happens in the tropics over the next century. The sign of the trend, however, is clear. On the Peninsula, it’s crystal clear.
Note: An excellent summary of these two papers by Tas van Ommen will appear in Nature Geoscience in the May issue.
*I’ll have much more to say about this in a future post, but this is work in preparation at the moment.
Guest post by Geert Jan van Oldenborgh, Francisco Doblas-Reyes, Sybren Drijfhout and Ed Hawkins
Climate information for the future is usually presented in the form of scenarios: plausible and consistent descriptions of future climate without probability information. This suffices for many purposes, but for the near term, say up to 2050, scenarios of emissions of greenhouse gases do not diverge much and we could work towards climate forecasts: calibrated probability distributions of the climate in the future.
This would be a logical extension of the weather, seasonal and decadal forecasts in existence or being developed (Palmer, BAMS, 2008). In these fields a fundamental forecast property is reliability: when the forecast probability of rain tomorrow is 60%, it should rain on 60% of all days with such a forecast.
This is routinely checked: before a new model version is introduced a period in the past is re-forecast and it is verified that this indeed holds. In seasonal forecasting a reliable forecast is often constructed on the basis of a multi-model ensemble, as forecast systems tend to be overconfident (they underestimate the actual uncertainties).
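As a sketch of what such a reliability check involves, the function below bins probabilistic forecasts of a binary event by issued probability and compares the mean forecast probability with the observed frequency in each bin (the basis of a reliability diagram). The data layout is an assumption; operational verification systems are of course more elaborate.

```python
import numpy as np

def reliability_table(forecast_prob, observed, n_bins=10):
    """Return (mean forecast probability, observed frequency) per bin.
    For a reliable forecast system the two values are close in every bin,
    e.g. rain falls on roughly 60% of the days given a 60% forecast."""
    p = np.asarray(forecast_prob, dtype=float)
    o = np.asarray(observed, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    table = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # half-open bins, with probability 1.0 included in the last bin
        mask = (p >= lo) & ((p < hi) | ((i == n_bins - 1) & (p <= hi)))
        if mask.any():
            table.append((p[mask].mean(), o[mask].mean()))
    return table
```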
As the climate change signal is now emerging from the noise in many regions of the world, the verification of regional past trends in climate models has become possible. The question is whether the recent CMIP5 multi-model ensemble, interpreted as a probability forecast, is reliable.
As there is only one trend estimate per grid point, necessarily the verification has to be done spatially, over all regions of the world. The CMIP3 ensemble was analysed in this way by Räisänen (2007) and Yokohata et al. (2012). In the last few months three papers have appeared that approach this question for the CMIP5 ensemble with different methodologies: Bhend and Whetton (2013), van Oldenborgh et al. (2013) and Knutson et al (J. Climate, to appear).
All these studies reach similar conclusions. For temperature: the ensemble is reliable if one considers the full signal, but this is due to the differing global mean temperature responses (transient climate responses, TCR).
When the global mean temperature trend is factored out, the ensemble becomes overconfident: the spatial variability is too low. For annual mean precipitation the ensemble is also found to be overconfident. Precipitation trends in 3-month seasons have so much natural variability compared to the trends that the overconfidence is no longer visible.
These conclusions match with earlier work using the Detection and Attribution framework showing that the continental-averaged temperature trends can be attributed to anthropogenic factors (eg Stott et al, 2003), but zonally-averaged precipitation trends are not reproduced correctly by climate models (Zhang et al, 2007).
The spatial patterns for annual mean temperature and precipitation are shown in figure 1 below. The trends are defined as regressions on the modelled global mean temperature, i.e., we plot B(x,y) in
(1) T(x,y,t) = B(x,y) T_global,mod(t) + η(x,y,t)
This definition excludes the TCR and minimises the noise η(x,y,t) better than a trend that is linear in time.
The conclusion that the ensemble is somewhat overconfident is based on the bottom two panels. These show that over 10–20% of the map area the observed trends fall in the top or bottom 5% of the ensemble; for a reliable ensemble each of these fractions should be 5%. The deviations are larger than we obtain from the differences between the models (the grey area).
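A schematic of this verification, under the assumption that the observed and modelled temperature fields sit on a common (time, lat, lon) grid and that a modelled global-mean temperature series is available, might look like the sketch below. It follows equation (1) and the tail check just described, but it is only an illustration, not the authors’ code.

```python
import numpy as np

def regression_pattern(field, t_global_mod):
    """B(x,y) from equation (1): regression of T(x,y,t) on the modelled
    global-mean temperature; `field` has shape (time, nlat, nlon)."""
    t = t_global_mod - t_global_mod.mean()
    anomalies = field - field.mean(axis=0)
    return np.tensordot(t, anomalies, axes=(0, 0)) / np.sum(t * t)

def tail_fractions(obs_pattern, ensemble_patterns, q=0.05):
    """Fractions of grid points where the observed trend pattern lies
    below the ensemble's lower q-quantile or above its upper q-quantile;
    each fraction should be close to q (5%) for a reliable ensemble."""
    lo = np.quantile(ensemble_patterns, q, axis=0)
    hi = np.quantile(ensemble_patterns, 1.0 - q, axis=0)
    return float((obs_pattern < lo).mean()), float((obs_pattern > hi).mean())
```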
On the maps above, the areas where the observed trends fall in the tails of the model ensemble are coloured. In part of these areas the discrepancies are due to random weather fluctuations, but a large fraction has to be ascribed to forecast system biases. (The results do not depend strongly on the observational dataset used: with HadCRUT4, NCDC LOST and CRU TS 3.1 we obtain very similar figures; see the Supplementary Material of van Oldenborgh et al.)
These forecast system biases can arise in three ways.
First, the models may underestimate low-frequency natural variability. Knutson et al. show that natural variability in the warm pool around the Maritime Continent is indeed underestimated up to time scales of >10 years, contributing to the discrepancy there in Fig. 1e. In most other regions the models have the correct or too large variability.
Another cause may be the incorrect specification of local forcings such as aerosols or land use. As an example, visibility observations suggest that aerosol loadings in Europe were higher in winter in the 1950s than assumed in CMIP5. This influences temperature via mist and fog (Vautard et al., 2009) and other mechanisms.
Finally, the model response to the changes in greenhouse gases, aerosols and other forcings may be incorrect. The trend differences in Asia and Canada are mainly in winter and could be due to problems in simulating the stable boundary layers there.
To conclude, climate models can be, and have been, verified against observations in a property that is most important for many users: the regional trends. This verification shows that many large-scale features of climate change are being simulated correctly, but smaller-scale observed trends are in the tails of the ensemble more often than predicted by chance fluctuations. The CMIP5 multi-model ensemble can therefore not be used as a probability forecast for future climate. We have to present the useful climate information in climate model ensembles in other ways until these problems have been resolved.
- T.N. Palmer, F.J. Doblas-Reyes, A. Weisheimer, and M.J. Rodwell, "Toward Seamless Prediction: Calibration of Climate Change Projections Using Seasonal Forecasts", Bulletin of the American Meteorological Society, vol. 89, pp. 459-470, 2008. http://dx.doi.org/10.1175/BAMS-89-4-459
- J. Räisänen, "How reliable are climate models?", Tellus A, vol. 59, pp. 2-29, 2007. http://dx.doi.org/10.1111/j.1600-0870.2006.00211.x
- T. Yokohata, J.D. Annan, M. Collins, C.S. Jackson, M. Tobis, M.J. Webb, and J.C. Hargreaves, "Reliability of multi-model and structurally different single-model ensembles", Climate Dynamics, vol. 39, pp. 599-616, 2012. http://dx.doi.org/10.1007/s00382-011-1203-1
- J. Bhend, and P. Whetton, "Consistency of simulated and observed regional changes in temperature, sea level pressure and precipitation", Climatic Change, vol. 118, pp. 799-810, 2013. http://dx.doi.org/10.1007/s10584-012-0691-2
- G.J. van Oldenborgh, F.J. Doblas Reyes, S.S. Drijfhout, and E. Hawkins, "Reliability of regional climate model trends", Environmental Research Letters, vol. 8, pp. 014055, 2013. http://dx.doi.org/10.1088/1748-9326/8/1/014055
- P.A. Stott, "Attribution of regional-scale temperature changes to anthropogenic and natural causes", Geophysical Research Letters, vol. 30, 2003. http://dx.doi.org/10.1029/2003GL017324
- X. Zhang, F.W. Zwiers, G.C. Hegerl, F.H. Lambert, N.P. Gillett, S. Solomon, P.A. Stott, and T. Nozawa, "Detection of human influence on twentieth-century precipitation trends", Nature, vol. 448, pp. 461-465, 2007. http://dx.doi.org/10.1038/nature06025
- R. Vautard, P. Yiou, and G.J. van Oldenborgh, "Decline of fog, mist and haze in Europe over the past 30 years", Nature Geoscience, vol. 2, pp. 115-119, 2009. http://dx.doi.org/10.1038/ngeo414
Some of my friends have made a film, Thin Ice, which tells the story of CO2 and climate from the standpoint of the climate scientists who are out there in the trenches trying to figure out what is going on. I have a small role in the film myself, and I am sure RealClimate readers will recognize many more familiar faces. One of the many things I like about this film is that it puts a human face on climate science. It’s harder to demonize people when you feel you know them, and realize that in the end they’re not that different from you and your neighbors (except maybe they know more about CO2 and climate than some others you might meet).
A description of the project, including trailers and clips, can be found here. The film will be available during Earth Week for free streaming. Or even better, you can arrange a free screening for your group (details for obtaining a free Earth Week download for screening are available here). Read below the fold for more information.
Here is what Peter Barrett, the team leader for the film project has to say:
“A group of us have produced another film about climate science, but in this one scientists do the talking.
Some are well known to you, others not so, but all talk with passion, concern and some humour about their work. The film is mainly the work of geologist and photographer Simon Lamb and science documentary producer David Sington, DOX Productions, who worked together on Earth Story (BBC Horizon, 1998). The story line is Simon’s journey as a geologist. He has heard the terrible things the press have been reporting about his climate science colleagues, so he decides to take his camera and find out what’s really happening.
The key messages from this 73 minute film are that scientists can be trusted and that ultimately we have to quit using fossil fuels. We do not try and say how this should be done, but we hope that the film will lead audiences into some deeper thinking on the issue and perhaps even a shift toward solutions. Check out the website www.thiniceclimate.org, where you can see the 3 minute trailer. The website contains another 3 hours of supplementary material in 37 short video clips about various aspects of climate science.
We’d like your help in spreading these messages by hosting a screening in your community. It’s also a chance to talk with them afterwards through a panel discussion/Q&A. We are making the film available as a free download (2GB) for a 2 ½ day period after Earth Day starts in New Zealand – just complete the Screenings Information sheet attached and e-mail to email@example.com so we can post it on the website and send you download instructions. The film will also be available free for streaming to those who are happy just to watch it at home.
Feel free to pass this message on. Looking forward to hearing from you soon.
Peter Barrett for the Thin Ice Team
PS While the film is in English with a range of accents we’ll have versions with subtitles in English, Mandarin, Spanish, French and German.”
Kerr (2013) recently provided a critical review of regional climate models (“RCMs”). I think his views have caused a stir in the regional climate modelling community. So what’s the buzz all about?
RCMs provide important input to many climate services, in which there is a great deal of vested interest at all levels. On the international stage, high-level talks led to the establishment of a Global Framework for Climate Services (GFCS) during the World Climate Conference 3 (WCC3) in Geneva in 2009.
Other activities include CORDEX and the International Conference on Climate Services 2 (ICCS2). On a more regional, multi-national level, several activities on climate services have just started; in Europe alone there are several big projects: JPI-Climate, SPECS, EUPORIAS, IMPACT2C, ECLISE, CLIM-RUN, IN-ENES, BALTEX, and ENSEMBLES. Many of these projects rely on global and regional climate models.
There are well-known limitations to both global and regional climate models, and many of these are described in Maslin and Austin (2012) as “uncertainty”. Maslin and Austin highlighted several reasons why the regional-scale predictions made by these models are only tentative, and Racherla et al. (2012) observed that:
there is not a strong relationship between skill in capturing climatological means and skill in capturing climate change.
They acknowledged that the problem is not so much the RCMs, but the global climate models’ (GCMs’) ability to predict climate changes on a regional scale. This finding is not surprising; however, it is important to establish it for the record.
Racherla et al. assessed the skill of an RCM and a GCM, based on simulated and observed temperature and precipitation for two 10-year time slices (December 1968 – December 1978 and December 1995 – December 2005). While they realised that estimating change from two different ten-year intervals is prone to errors caused by spontaneous natural year-to-year variations (and even slower undulations, e.g. the AMO) in temperature and precipitation, they argued that such a set-up is common in many climate change studies.
This aspect takes us back to our previous post about the role of large-scale atmospheric circulation associated with natural and ‘internal’ variations. The GCMs may in fact be able to reproduce many of the year-to-year and slower variations; however, we know that these fluctuations are not synchronised with the real world.
The apparent lack of skill may not necessarily be a shortcoming of the individual climate models – indeed, they correctly capture the sensitivity of the subsequent large-scale atmospheric flow to small differences in the starting point.
These variations come on top of the historical long-term climate change trend. In the past, the regional natural variations have often been more pronounced than the regional climate change, and if they are out of sync, then we should expect neither an RCM nor a GCM to be able to predict the change between the two decades.
Hence, the fact that the past has been blurred by natural year-to-year variations does not invalidate the climate models. A proper evaluation of skill would involve looking at longer time scales or many different model runs. One important message is that one should never use a single GCM for making future regional climate projections.
For proper validation, we must look at a large number of different simulations with GCMs, and then apply a statistical test to see if the observed changes are outside the range of changes predicted by the models. By running many models, we get a statistical sample of natural variations following different courses.
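As a rough illustration of such a test (a sketch with assumed numbers, not a recipe from any of the papers cited here), one can place the observed change within the distribution of changes simulated by many GCM runs and compute an empirical p-value:

```python
import numpy as np

def empirical_p_value(observed_change, simulated_changes):
    """Two-sided empirical p-value: how unusual is the observed change
    relative to the spread of changes in a large GCM ensemble?"""
    sims = np.asarray(simulated_changes)
    frac_above = np.mean(sims >= observed_change)
    frac_below = np.mean(sims <= observed_change)
    return min(1.0, 2.0 * min(frac_above, frac_below))

# Hypothetical example: regional temperature change (K) between two decades,
# observed once and simulated by 48 different GCM runs.
rng = np.random.default_rng(1)
simulated = rng.normal(0.5, 0.3, size=48)
observed = 1.2

p = empirical_p_value(observed, simulated)
verdict = "outside" if p < 0.05 else "within"
print(f"p = {p:.2f}: observed change is {verdict} the model range at the 5% level")
```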
Running RCMs is computationally expensive and it may not be possible to let them compute results for many decades or many GCMs. However, empirical-statistical downscaling (ESD) is an alternative that does not require much computing power. ESD and RCMs have different strengths and weaknesses, and thus complement each other.
The figure above, taken from Førland et al. (2011), shows a comparison between ESD and RCM results for the Arctic island of Spitsbergen (part of the Svalbard archipelago), where ESD has been applied to the entire 1900-2100 period as well as to 48 different GCM simulations.
Racherla et al. (2012) also discussed another concern: how RCMs and GCMs are combined. Since RCMs only cover a limited domain, the values at their boundaries must be specified explicitly (the ‘boundary conditions’), using results from a coarser GCM or from observation-based data (reanalyses).
The GCMs used to force the RCMs, however, do not account for situations where they and the RCMs describe different states (e.g. of precipitation or wind). This problem arises in what is called upscaling, where small features grow in spatial extent (not atypical for chaotic systems).
It is possible to remedy some of the inconsistencies between the large-scale flow in the RCMs and the embedding GCMs by imposing so-called ‘nudging’.
Furthermore, imposing boundary values on models like RCMs may also cause problems such as spurious oscillations, and is by some labelled an “ill-posed problem”. These problems can nevertheless be alleviated by using a “buffer zone” along the RCM’s boundaries.
A finer grid mesh in the RCMs gives an improved description of mountains compared with that in the GCM, and introduces further details such as higher mountain peaks. This improvement alters the way air is forced upward over mountains, compared to the coarser GCM, and hence the amount precipitated out (‘orographic precipitation’).
Different ways of computing the cloud processes (cloud parametrisation) affect the condensation of vapour, the outgoing long-wave radiation, and precipitation.
A finer spatial grid also affects the wind structure and the evaporation near the surface (which depends on the wind speed). Furthermore, the energy transported in the atmosphere through eddies may differ between models with fine and coarse resolution.
Such differences between RCMs and GCMs may lead to inconsistent physics; but are these concerns important, or just second-order effects?
Once again, a comparison between ESD and RCM results provides some idea, and in many cases there is a fair degree of agreement between these downscaling strategies. The problems with RCMs are absent in ESD (which has caveats of its own); however, the important question is whether the GCMs, used to drive both, provide a realistic description of the regional climate.
The figure above indicates that the GCMs (and the ESD results) underestimate some of the local natural variations in the past – which probably are connected with the Arctic sea ice (Benestad et al., 2002). The GCMs used in these calculations do not seem to capture the recent decline in the Arctic sea-ice cover (Stroeve et al., 2012).
Another problem may be that the RCMs do not represent the precipitation statistics well, even when data based on real observations (the ERA40 reanalysis) are used to provide the boundary values (Orskaug et al., 2011). For climate services, it is important that the precipitation statistics are realistic, and in the past, systematic biases have been “fixed” (in a not very satisfactory way) by bias correction.
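Bias correction of precipitation is often done with some form of quantile mapping. The sketch below is a generic, simplified version (not the method used by any particular climate service); the gamma-distributed “data” are invented for illustration.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_new):
    """Empirical quantile mapping: replace each model value by the observed value
    at the corresponding quantile of the historical model distribution.
    Crude, but it pulls the corrected precipitation statistics towards the observed ones."""
    q = np.linspace(0.0, 1.0, 101)
    return np.interp(model_new, np.quantile(model_hist, q), np.quantile(obs_hist, q))

# Hypothetical daily precipitation (mm/day): the RCM drizzles too often and too lightly.
rng = np.random.default_rng(2)
obs        = rng.gamma(shape=0.6, scale=8.0, size=3650)
rcm_hist   = rng.gamma(shape=0.9, scale=4.0, size=3650)
rcm_future = rng.gamma(shape=0.9, scale=4.5, size=3650)

corrected = quantile_map(rcm_hist, obs, rcm_future)
print(f"raw mean: {rcm_future.mean():.2f} mm/day, corrected mean: {corrected.mean():.2f} mm/day")
```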
Boberg and Christensen (2012) presented one type of validation analysis that may partially address the concerns expressed in Kerr (2013). They reported that many RCMs overestimate the temperature in warm and dry climates (e.g. around the Mediterranean). This temperature bias was greater at higher temperatures.
Most of the GCMs also had similar temperature biases, suggesting that the deficiencies seen in the RCM results were not different from those in the GCMs. Furthermore, these deficiencies cannot be explained in terms of the differences between the measured temperatures and the data used as boundary conditions for the RCMs (see figure below).
According to Kerr (2013), the RCMs need a more thorough validation, addressing the question whether they are able to describe changes in the local climate.
It is also important to verify that they actually provide a consistent representation of the physics when embedded in the GCMs. Do the energy and mass (moisture) fluxes across the lateral and top boundaries of the RCM correspond with the fluxes through the cross-sections in the GCMs corresponding to the RCM’s boundaries?
There are also new initiatives on proper validation of regional climate modelling (ESD and RCMs), and the European project VALUE represents one notable example.
(p.s. One of the references below has wrong author and title, but correct link and DOI).
- R.A. Kerr, "Forecasting Regional Climate Change Flunks Its First Test", Science, vol. 339, pp. 638-638, 2013. http://dx.doi.org/10.1126/science.339.6120.638
- M. Maslin, and P. Austin, "Uncertainty: Climate models at their limit?", Nature, vol. 486, pp. 183-184, 2012. http://dx.doi.org/10.1038/486183a
- P.N. Racherla, D.T. Shindell, and G.S. Faluvegi, "The added value to global model projections of climate change by dynamical downscaling: A case study over the continental U.S. using the GISS-ModelE2 and WRF models", Journal of Geophysical Research: Atmospheres, vol. 117, pp. n/a-n/a, 2012. http://dx.doi.org/10.1029/2012JD018091
- E.J. Førland, R. Benestad, I. Hanssen-Bauer, J.E. Haugen, and T.E. Skaugen, "Temperature and Precipitation Development at Svalbard 1900–2100", Advances in Meteorology, vol. 2011, pp. 1-14, 2011. http://dx.doi.org/10.1155/2011/893790
- N. Roberts, "An observational study of multiple cloud head structure in the fastex iop 16 cyclone", Atmospheric Science Letters, vol. 3, pp. 59-70, 2002. http://dx.doi.org/10.1006/asle.2002.0050
- J.C. Stroeve, V. Kattsov, A. Barrett, M. Serreze, T. Pavlova, M. Holland, and W.N. Meier, "Trends in Arctic sea ice extent from CMIP5, CMIP3 and observations", Geophysical Research Letters, vol. 39, pp. n/a-n/a, 2012. http://dx.doi.org/10.1029/2012GL052676
- E. Orskaug, I. Scheel, A. Frigessi, P. Guttorp, J.E. Haugen, O.E. Tveito, and O. Haug, "Evaluation of a dynamic downscaling of precipitation over the Norwegian mainland", Tellus A, vol. 63, pp. 746-756, 2011. http://dx.doi.org/10.1111/j.1600-0870.2011.00525.x
- F. Boberg, and J.H. Christensen, "Overestimation of Mediterranean summer temperature projections due to model deficiencies", Nature Climate Change, vol. 2, pp. 433-436, 2012. http://dx.doi.org/10.1038/nclimate1454
This year, the Geological Society of America is rolling out their SWITCH Energy Awareness campaign. The centerpiece of the campaign is a documentary film, SWITCH, which purports to be about the need for a transformation in the world’s energy systems. Recently, I attended the Chicago premiere of the film, presented as part of the Environmental Film Series of the Lutheran School of Theology. I had high hopes for this film. They were disappointed. Given the mismatch between what the movie promises and what it delivers, it would be more aptly titled, “BAIT AND SWITCH.”
The film is soporifically narrated by Scott Tinker, of the Texas Bureau of Economic Geology, who was also the major content advisor for the film. This is a guy who has never met a fossil fuel he didn’t like. Dramatic footage of giant coal seams being merrily blasted to bits and carted off by hefty he-men driving 400 ton trucks is interspersed with wide-eyed kid-gloves interviews of energy-industry workers and executives in which Tinker looks like he’s overdosed on Quaaludes by way of preparation. There are a few segments on renewables thrown in, and even the token environmentalist or two, but the impression you get over most of the film is that only the fossil fuel guys have the right stuff.
Fossil fuels are unrelentingly portrayed as powerful, cool and desirable. Problems are swept under the rug, or given only the barest mention, mostly as a prelude to casual dismissal. Shots of the giant scar of an open pit coal mine in the Powder River basin cut over to shots of a credulous Tinker nodding like a bobble-headed doll while the foreman explains to him how it will all be all right because they saved the topsoil and will put it all back the way it was. Maybe that’s true, but given the intuitive implausibility of recreating a living, breathing ecosystem from the lunar landscape the mining created, one would like to see at least a little probing of how well that all works out. Imagine Tinker coming upon a bunch of kids fiddling with a disemboweled flayed cat. This is how I imagine the interview would play out:
TINKER: Looks like you guys got yourself a dead cat there!
BOYS: Yep, did it ourselves. But dontcha worry, we saved the fur, and we’re gonna put everything back JUST THE WAY IT WAS!
TINKER: (glassy-eyed and nodding) Why, that’s just AMAZING!
Be that as it may, you never get to see or hear anything about mountain top removal coal mining (hint: they don’t save the mountaintop and put it back). On a tour of the Alberta Tar Sands, you get to see the insides of an antiseptic lab where happy technicians reverently pass around an adorable little flask of dilute bitumen (it looks so pure don’t you just want to drink it right down) while Tinker gapes in awe, but you never get to see the vast scale of environmental destruction wrought by tar sands mining outside. And while the film eventually gets around to loving natural gas, it skirts around the paradox that the tar sands consume a relatively low-carbon clean fuel (natural gas) that could be used directly as transportation fuel, to produce a dirty high carbon product (dilute bitumen and petcoke). Happy drillers on a mighty Shell offshore platform duly tsk-tsk about the big Oopsie! that was the Deepwater Horizon blowout, while assuring viewers that they’ve got that one licked, and golly no that couldn’t happen to us. Why, they even have Internet so they can get advice from the mainland if they need it!
Renewables, in contrast, are portrayed in a way that makes them seem wimpy — mainly by making inappropriate comparisons between small scale distributed power production sites and massive centralized power plants or oil production facilities. Tinker makes a lot of noise about the fact that the solar thermal site he visits in Spain was clouded over during the whole time they were filming it, which is probably meant to teach some lesson about intermittency, but instead leaves the viewer with a vague impression that renewables are not to be trusted. The film manages to say some nice things about the benefits of wind power in West Texas, and about Icelandic geothermal power, but on the whole the potential for renewable power comes off as fairly marginal, maybe the sort of thing little countries like Denmark or Iceland or Norway can rely on, but not big important places like us.
The truly fatal flaw of SWITCH, however, is that it never comes right out and explains why it is so critical for the world’s energy systems to switch off of fossil fuels, and why time is of the essence in making the switch. There are some oblique references to CO2 emissions, but no mention of the essentially irreversible effect of these emissions on climate, of the need to keep cumulative emissions under a trillion tonnes of carbon if we are to have a chance of limiting warming to 2 degrees C, or of how short the remaining time is before we hit this limit at the rate we are going. On the contrary, SWITCH positively revels in the idea that fossil fuels will never run out, given a high enough price (which, by the way, is probably not true). The clueless Washington Post review of SWITCH shows how utterly the film has failed in what should have been its prime educational mission. The reviewer writes “Why not continue to use coal and oil while developing other energy sources and technologies?” The answer, my friend, is that CO2 is forever, and its effects are not nearly so pretty as diamonds. But neither the reviewer, nor any other viewer, could be expected to learn this from SWITCH.
You begin to suspect something is really wrong when the first guy on screen to say something about climate is Richard Muller, of Berkeley Earth Surface Temperature Project fame, who managed to convert himself from a climate change denialist to a lukewarmer by arduously and noisily rediscovering what every working climate scientist already knew to be true. What Muller has to say about climate is that burning fossil fuels will cause the Earth to warm by about 2 degrees (“if the calculations are right”), but it’s going to be too expensive to stop it so we’ll just learn to live with it. There are so many things wrong with Muller’s statement that I hardly know where to begin. First, it is far from clear that a 2 degree warmer world is one that we can adapt to, or that the damages caused by such a climate would not overwhelm the costs of keeping it from happening in the first place. Second, if climate sensitivity is at the high end of the IPCC range or even beyond, we could be facing far greater than 2 degrees of warming even if we hold the line at cumulative emissions of a trillion tonnes of carbon. Third, even if climate sensitivity is at the middle of the IPCC range, that 2 degree figure assumes that we hold the line at burning one trillion tonnes of carbon (and we’re already halfway there). There are probably enough economically recoverable fossil fuels to go way beyond a trillion tonnes, which would take us to truly scary territory, especially in conjunction with high climate sensitivity. It gets worse once you realize that Muller’s cheery dismissal of the problem is essentially all you’re going to hear about the connection between fossil fuel burning and climate disruption. OK, so if the producer’s aim is for this film to play well in Nebraska, you can understand why he might not have wanted Tinker to interview somebody like Jim Hansen who’s been on the front lines of the climate wars and spent time in pokey for it, but how about Susan Solomon or Isaac Held, or Myles Allen or Richard Alley? How about any real climate scientist at all who could give an honest appraisal of what the world is going to be like if we continue unrestrained burning of fossil fuels — especially if fossil fuels never run out, as this film so cheerily predicts.
SWITCH is made to appeal to fans of an “all of the above” energy strategy, but it never confronts the fact that if we want to preserve a livable climate, “all” simply cannot include continued (let alone expanded) use of fossil fuels for very much longer. The biggest challenge we face is not learning how to extract every last scrap of fossil fuel, but learning how to leave most of it in the ground. This fault pervades every nook and cranny of the film. When discussing carbon capture and storage (CCS), an interviewee quite rightly declares that the only clean coal would be coal burned with CCS; Tinker goes on to lament that we could make coal clean, but it’s too expensive so we won’t do it. The only conclusion to be drawn from this would be that in that case coal has to be crossed off the energy menu. But instead Tinker moves on without ever giving a thought to this discomforting conclusion. And it is not very comforting to hear Steve Koonin (former chief scientist for BP/Amoco and currently Obama’s Undersecretary of Energy) and Ernest Moniz (head of MIT’s energy program, Obama’s pick to head the DOE, and a major natural gas booster) spend so much time on screen defending fossil fuels. “It can’t be all bad,” says Koonin, in reference to coal. Well, actually, from here on in, coal is all bad, and the less of it anybody burns, the better.
The segment on the developing world fails because it never addresses the question of what pattern of development could sustainably provide a decent standard of living for the worlds’ poor. Instead, in essence, it asks the question of what it would take to remake the world in Scott Tinker’s image — with all the energy usage that entails. In fact, you never get to see anybody but Tinker’s family using energy in their home, so you get no impression of how much access to a mere 200 watts of reliable power could transform the lives of poor Indians or Africans. At the outset of the film, Tinker arrogantly sets up his own energy consumption in his life as a Texas professor driving his oversized car from his sprawling house in the sprawling suburbs to wherever he is going in the course of his day as the measure of the energy required to support “a person” throughout the rest of the film. SWITCH shows no awareness that living in cities in and of itself leads to a lower carbon footprint, and that sound urban planning can multiply this advantage. This is an especially glaring omission, since most of the world’s people now live in cities, and the proportion is set to increase in the future. SWITCH never tells you that China could attain the standard of living of France without increasing its emissions at all, just by increasing the carbon efficiency of its economy to the current French level; nor does it tell you about China’s growing efforts in that direction, including most recently, a carbon tax. What SWITCH teaches you about the developing world is: They’re all gonna want cars and big houses like us, and they won’t go low-carbon because it’s too expensive, they’ll never pay for it and we won’t pay for them to do it either, so their emissions will soon swamp ours and nothing we do to reduce our own emissions will make much difference. It’s pretty much the standard “But … China!” argument promulgated by opponents of action to protect our climate. The fact that we will all pay for the consequences of a wrecked climate never figures into any of the costs mentioned in this movie.
SWITCH plays Pollyanna on energy technologies to such an extent that I found it off-putting even when the film was advocating things I basically agree with. I think cheap, fracked natural gas has made a useful contribution to reducing the growth rate of US CO2 emissions, but I cringe when SWITCH parrots the industry-sponsored myth that we have a sure 100 year supply of natural gas (we don’t). Further, as Michael Levi’s cogent study points out, natural gas has at best a very short-lived role as a bridge fuel. Moreover, if cheap natural gas kills off renewables and next generation nuclear, it is not only a short bridge, but a bridge to nowhere. I think expansion of nuclear energy has an essential role to play in decarbonizing our energy supply, and I greatly admire the success France has had with its transition to nuclear electricity. But I doubt I would have found the credulous interviews with American and French nuclear workers particularly reassuring if I weren’t already familiar with the issues from other sources. Even the segment on Norwegian hydropower, with which SWITCH auspiciously opens, manages to give the false impression that most Nordic hydropower is free-run hydro with a relatively light footprint on the environment; in fact, Norwegian and Swedish hydropower rely on a massive network of dams and reservoirs which have disrupted the lives of indigenous peoples, killed off salmon runs, and destroyed whole ecosystems. When the Suorva dam created Akkajaure in Northern Sweden, it drowned a biologically diverse chain of lakes and wetlands and turned off what used to be Europe’s largest waterfall. There is no question that hydropower is an important component of a carbon-free energy supply, but it is not helpful to sweep its environmental costs under the rug. Hydropower provides an example of the kind of difficult choice about conflicting environmental goods that global warming forces upon us. Given the facts, some of us might prefer a few more nukes to a few more Suorvas.
Way at the end of the film Tinker finally gets around to the benefits of energy conservation, but by then it’s too late. The message has already gotten through that we’re really good at fossil energy so why bother, especially since the developing world is going to burn them up anyway? None of the incomprehensible moving lines on graphs which are supposed to make the case for the importance of conservation make a dent in this impression. Tinker’s big ideas about conservation seem laughably puny: a new water heater, a bit of attic insulation, and driving his kids to school in … golf carts! One wonders what’s wrong with his kids, or his neighborhood, that they can’t walk or ride their bikes.
It would be easy to shrug off this film if it were just a matter of another hack with a minicam following Bjorn Lomborg around, but this has the backing of the GSA. The GSA has its share of members in the fossil fuel industries, but it is a respectable scientifically sound organization, which has taken a decent position on global warming. The GSA has not only blessed the film with its prestige, but is heavily promoting it as the anchor of its energy awareness campaign, with solicitation for Inconvenient Truth style “ambassadors” to promote the film’s agenda, and even a K-12 educational component. I think I do understand how the film took a wrong turn somewhere along the line. If you want to change minds and touch the heartstrings of a new audience rather than just preaching to the choir, it is probably more effective to find common ground in talking about solutions rather than by scaring the pants off people by talking about the scary consequences of global warming. I’m entirely sympathetic to this approach. But there’s a difference between positive messaging and losing sight of the nature of the problem that needs to be solved, to the point that one even loses sight of the message that needs to be conveyed. That is where SWITCH not only takes a wrong turn, but drives right off the cliff.
The GSA ought to distance itself from this fiasco. Schools should avoid it like the plague. Without being kept on life-support by the GSA, the film is so boring it will probably die a natural death. This film is a lot like those “duck and cover” movies that I saw as a kid, from which I learned that I could survive a nuclear strike if I put my head down against the lockers and covered up with a winter coat (just hope The Bomb doesn’t get dropped in summer). The message of SWITCH is the climate equivalent of the infamous quote by T.K. Jones, Reagan’s civil defense planner, that when it comes to nuclear war “If there are enough shovels to go around, everybody’s going to make it” . In the case of SWITCH, the message that gets across is that if we keep figuring out ever more ingenious ways of extracting fossil fuels, and maybe burn more natural gas, insulate our attics and drive our kids to school in golf carts, everything’s gonna be OK. We have a right to expect better from the GSA, and the sooner SWITCH disappears from the public discourse, the better.
Open thread for April…
Readers will be aware of the paper by Shaun Marcott and colleagues, published a couple of weeks ago in the journal Science. That paper sought to extend the global temperature record back over the entire Holocene period, i.e. just over 11 kyr back in time, something that had not really been attempted before. The paper got a fair amount of media coverage (see e.g. this article by Justin Gillis in the New York Times). Since then, a number of accusations from the usual suspects have been leveled against the authors and their study, and most of them are characteristically misleading. We are pleased to provide the authors’ response, below. Our view is that the results of the paper will stand the test of time, particularly regarding the small global temperature variations in the Holocene. If anything, early Holocene warmth might be overestimated in this study.
Update: Tamino has three excellent posts in which he shows why the Holocene reconstruction is very unlikely to be affected by possible discrepancies in the most recent (20th century) part of the record. The figure showing Holocene changes by latitude is particularly informative.
Prepared by Shaun A. Marcott, Jeremy D. Shakun, Peter U. Clark, and Alan C. Mix
Primary results of study
Global Temperature Reconstruction: We combined published proxy temperature records from across the globe to develop regional and global temperature reconstructions spanning the past ~11,300 years with a resolution >300 yr; previous reconstructions of global and hemispheric temperatures primarily spanned the last one to two thousand years. To our knowledge, our work is the first attempt to quantify global temperature for the entire Holocene.
Structure of the Global and Regional Temperature Curves: We find that global temperature was relatively warm from approximately 10,000 to 5,000 years before present. Following this interval, global temperature decreased by approximately 0.7°C, culminating in the coolest temperatures of the Holocene around 200 years before present during what is commonly referred to as the Little Ice Age. The largest cooling occurred in the Northern Hemisphere.
Holocene Temperature Distribution: Based on comparison of the instrumental record of global temperature change with the distribution of Holocene global average temperatures from our paleo-reconstruction, we find that the decade 2000-2009 has probably not exceeded the warmest temperatures of the early Holocene, but is warmer than ~75% of all temperatures during the Holocene. In contrast, the decade 1900-1909 was cooler than ~95% of the Holocene. Therefore, we conclude that global temperature has risen from near the coldest to the warmest levels of the Holocene in the past century. Further, we compare the Holocene paleotemperature distribution with published temperature projections for 2100 CE, and find that these projections exceed the range of Holocene global average temperatures under all plausible emissions scenarios.
Frequently Asked Questions and Answers
Q: What is global temperature?
A: Global average surface temperature is perhaps the single most representative measure of a planet’s climate since it reflects how much heat is at the planet’s surface. Local temperature changes can differ markedly from the global average. One reason for this is that heat moves around with the winds and ocean currents, warming one region while cooling another, but these regional effects might not cause a significant change in the global average temperature. A second reason is that local feedbacks, such as changes in snow or vegetation cover that affect how a region reflects or absorbs sunlight, can cause large local temperature changes that are not mirrored in the global average. We therefore cannot rely on any single location as being representative of global temperature change. This is why our study includes data from around the world.
We can illustrate this concept with temperature anomaly data based on instrumental records for the past 130 years from the National Climatic Data Center (http://www.ncdc.noaa.gov/cmb-faq/anomalies.php#anomalies). Over this time interval, an increase in the global average temperature is documented by thermometer records, rising sea levels, retreating glaciers, and increasing ocean heat content, among other indicators. Yet if we plot temperature anomaly data since 1880 at the same locations as the 73 sites used in our paleotemperature study, we see that the data are scattered and the trend is unclear. When these same 73 historical temperature records are averaged together, we see a clear warming signal that is very similar to the global average documented from many more sites (Figure 1). Averaging reduces local noise and provides a clearer perspective on global climate.
Figure 1: Temperature anomaly data (thin colored lines) at the same locations as the 73 paleotemperature records used in Marcott et al. (2013), the average of these 73 temperature anomaly series (bold black line), and the global average temperature from the National Climatic Data Center blended land and ocean dataset (bold red line) (data from Smith et al., 2008).
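The effect of averaging many noisy local records can be illustrated with a few lines of code (synthetic anomalies, not the actual NCDC station data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical annual temperature anomalies (K) at 73 sites, 1880-2010:
# a common warming signal buried under large site-specific noise.
years = np.arange(1880, 2011)
signal = 0.007 * (years - years[0])                       # roughly 0.9 K over 130 years
sites = signal + rng.normal(0.0, 0.8, size=(73, years.size))

stack = sites.mean(axis=0)                                # simple 73-site average

print(f"noise around the signal, single site: {np.std(sites[0] - signal):.2f} K")
print(f"noise around the signal, 73-site average: {np.std(stack - signal):.2f} K")
```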
New Scientist magazine has an “app” that allows one to point-and-plot instrumental temperatures for any spot on the map to see how local temperature changes compare to the global average over the past century (http://warmingworld.newscientistapps.com/).
Q: How does one go about reconstructing temperatures in the past?
A: Changes in Earth’s temperature for the last ~160 years are determined from instrumental data, such as thermometers on the ground or, for more recent times, satellites looking down from space. Beyond about 160 years ago, we must turn to other methods that indirectly record temperature (called “proxies”) for reconstructing past temperatures. For example, tree rings, calibrated to temperature over the instrumental era, provide one way of determining temperatures in the past, but few trees extend beyond the past few centuries or millennia. To develop a longer record, we used primarily marine and terrestrial fossils, biomolecules, or isotopes that were recovered from ocean and lake sediments and ice cores. All of these proxies have been independently calibrated to provide reliable estimates of temperature.
Q: Did you collect and measure the ocean and land temperature data from all 73 sites?
A: No. All of the datasets were previously generated and published in peer-reviewed scientific literature by other researchers over the past 15 years. Most of these datasets are freely available at several World Data Centers (see links below); those not archived as such were graciously made available to us by the original authors. We assembled all these published data into an easily used format, and in some cases updated the calibration of older data using modern state-of-the-art calibrations. We made all the data available for download free-of-charge from the Science web site (see link below). Our primary contribution was to compile these local temperature records into “stacks” that reflect larger-scale changes in regional and global temperatures. We used methods that carefully consider potential sources of uncertainty in the data, including uncertainty in proxy calibration and in dating of the samples (see step-by-step methods below).
NOAA National Climate Data Center: http://www.ncdc.noaa.gov/paleo/paleo.html
Holocene Datasets: http://www.sciencemag.org/content/339/6124/1198/suppl/DC1
Q: Why use marine and terrestrial archives to reconstruct global temperature when we have the ice cores from Greenland and Antarctica?
A: While we do use these ice cores in our study, they are limited to the polar regions and so give only a local or regional picture of temperature changes. Just as it would not be reasonable to use the recent instrumental temperature history from Greenland (for example) as being representative of the planet as a whole, one would similarly not use just a few ice cores from polar locations to reconstruct past temperature change for the entire planet.
Q: Why only look at temperatures over the last 11,300 years?
A: Our work was the second half of a two-part study assessing global temperature variations since the peak of the last Ice Age about 22,000 years ago. The first part reconstructed global temperature over the last deglaciation (22,000 to 11,300 years ago) (Shakun et al., 2012, Nature 484, 49-55; see also http://www.people.fas.harvard.edu/~shakun/FAQs.html), while our study focused on the current interglacial warm period (last 11,300 years), which is roughly the time span of developed human civilizations.
Q: Is your paleotemperature reconstruction consistent with reconstructions based on the tree-ring data and other archives of the past 2,000 years?
A: Yes, in the parts where our reconstruction contains sufficient data to be robust, and acknowledging its inherent smoothing. For example, our global temperature reconstruction from ~1500 to 100 years ago is indistinguishable (within its statistical uncertainty) from the Mann et al. (2008) reconstruction, which included many tree-ring based data. Both reconstructions document a cooling trend from a relatively warm interval (~1500 to 1000 years ago) to a cold interval (~500 to 100 years ago, approximately equivalent to the Little Ice Age).
Q: What do paleotemperature reconstructions show about the temperature of the last 100 years?
A: Our global paleotemperature reconstruction includes a so-called “uptick” in temperatures during the 20th century. However, in the paper we make the point that this particular feature is of shorter duration than the inherent smoothing in our statistical averaging procedure, and that it is based on only a few available paleo-reconstructions of the type we used. Thus, the 20th century portion of our paleotemperature stack is not statistically robust, cannot be considered representative of global temperature changes, and therefore is not the basis of any of our conclusions. Our primary conclusions are based on a comparison of the longer term paleotemperature changes from our reconstruction with the well-documented temperature changes that have occurred over the last century, as documented by the instrumental record. Although not part of our study, high-resolution paleoclimate data from the past ~130 years have been compiled from various geological archives, and confirm the general features of the warming trend over this time interval (Anderson, D.M. et al., 2013, Geophysical Research Letters, v. 40, p. 189-193; http://www.agu.org/journals/pip/gl/2012GL054271-pip.pdf).
Q: Is the rate of global temperature rise over the last 100 years faster than at any time during the past 11,300 years?
A: Our study did not directly address this question because the paleotemperature records used in our study have a temporal resolution of ~120 years on average, which precludes us from examining variations in rates of change occurring within a century. Other factors also contribute to smoothing the proxy temperature signals contained in many of the records we used, such as organisms burrowing through deep-sea mud, and chronological uncertainties in the proxy records that tend to smooth the signals when compositing them into a globally averaged reconstruction. We showed that no temperature variability is preserved in our reconstruction at cycles shorter than 300 years, 50% is preserved at 1000-year time scales, and nearly all is preserved at 2000-year periods and longer. Our Monte-Carlo analysis accounts for these sources of uncertainty to yield a robust (albeit smoothed) global record. Any small “upticks” or “downticks” in temperature that last less than several hundred years in our compilation of paleoclimate data are probably not robust, as stated in the paper.
Q: How do you compare the Holocene temperatures to the modern instrumental data?
A: One of our primary conclusions is based on Figure 3 of the paper, which compares the magnitude of global warming seen in the instrumental temperature record of the past century to the full range of temperature variability over the entire Holocene based on our reconstruction. We conclude that the average temperature for 1900-1909 CE in the instrumental record was cooler than ~95% of the Holocene range of global temperatures, while the average temperature for 2000-2009 CE in the instrumental record was warmer than ~75% of the Holocene distribution. As described in the paper and its supplementary material, Figure 3 provides a reasonable assessment of the full range of Holocene global average temperatures, including an accounting for high-frequency changes that might have been damped out by the averaging procedure.
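The comparison itself is essentially a percentile calculation. Here is a minimal sketch with a made-up Holocene distribution and made-up decadal means; the printed percentages will not reproduce those in the paper, it only shows the form of the calculation.

```python
import numpy as np

def fraction_warmer_than(holocene_anomalies, decade_mean):
    """Fraction of the pooled Holocene anomaly distribution lying below
    a given decadal-mean instrumental anomaly."""
    return np.mean(np.asarray(holocene_anomalies) < decade_mean)

# Entirely synthetic anomalies (K relative to some baseline), for illustration only.
rng = np.random.default_rng(4)
holocene = rng.normal(-0.2, 0.3, size=20000)   # stand-in for the pooled reconstruction histogram

for label, decade_mean in [("1900-1909", -0.4), ("2000-2009", +0.4)]:
    f = fraction_warmer_than(holocene, decade_mean)
    print(f"{label}: warmer than {100 * f:.0f}% of the synthetic Holocene anomalies")
```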
Q: What about temperature projections for the future?
A: Our study used projections of future temperature published in the Fourth Assessment of the Intergovernmental Panel on Climate Change in 2007, which suggest that global temperature is likely to rise 1.1-6.4°C by the end of the century (relative to the late 20th century), depending on the magnitude of anthropogenic greenhouse gas emissions and the sensitivity of the climate to those emissions. Figure 3 in the paper compares these published projected temperatures from various emission scenarios to our assessment of the full distribution of Holocene temperature distributions. For example, a middle-of-the-road emission scenario (SRES A1B) projects global mean temperatures that will be well above the Holocene average by the year 2100 CE. Indeed, if any of the six emission scenarios considered by the IPCC that are shown on Figure 3 are followed, future global average temperatures, as projected by modeling studies, will likely be well outside anything the Earth has experienced in the last 11,300 years, as shown in Figure 3 of our study.
Technical Questions and Answers:
Q. Why did you revise the age models of many of the published records that were used in your study?
A. The majority of the published records used in our study (93%) based their ages on radiocarbon dates. Radiocarbon is a naturally occurring isotope that is produced mainly in the upper atmosphere by cosmic rays. This form of carbon is then distributed around the world and incorporated into living things. Dating is based on the amount of this carbon left after radioactive decay. It has been known for several decades that radiocarbon years differ from true “calendar” years because the amount of radiocarbon produced in the atmosphere changes over time, as does the rate that carbon is exchanged between the ocean, atmosphere, and biosphere. This yields a bias in radiocarbon dates that must be corrected. Scientists have been able to determine the correction between radiocarbon years and true calendar year by dating samples of known age (such as tree samples dated by counting annual rings) and comparing the apparent radiocarbon age to the true age. Through many careful measurements of this sort, they have demonstrated that, in general, radiocarbon years become progressively “younger” than calendar years as one goes back through time. For example, the ring of a tree known to have grown 5700 years ago will have a radiocarbon age of ~5000 years, whereas one known to have grown 12,800 years ago will have a radiocarbon age of ~11,000 years.
For our paleotemperature study, all radiocarbon ages needed to be converted (or calibrated) to calendar ages in a consistent manner. Calibration methods have been improved and refined over the past few decades. Because our compilation included data published many years ago, some of the original publications used radiocarbon calibration systems that are now obsolete. To provide a consistent chronology based on the best current information, we thus recalibrated all published radiocarbon ages with Calib 6.0.1 software (using the databases INTCAL09 for land samples or MARINE09 for ocean samples) and its state-of-the-art protocol for site-specific locations and materials. This software is freely available for online use at http://calib.qub.ac.uk/calib/.
By convention, radiocarbon dates are recorded as years before present (BP). BP is universally defined as years before 1950 CE, because after that time the Earth’s atmosphere became contaminated with artificial radiocarbon produced as a by-product of nuclear bomb tests. As a result, radiocarbon dates on intervals younger than 1950 are not useful for providing chronologic control in our study.
After recalibrating all radiocarbon control points to make them internally consistent and in compliance with the scientific state-of-the-art understanding, we constructed age models for each sediment core based on the depth of each of the calibrated radiocarbon ages, assuming linear interpolation between dated levels in the core, and statistical analysis that quantifies the uncertainty of ages between the dated levels. In geologic studies it is quite common that the youngest surface of a sediment core is not dated by radiocarbon, either because the top is disturbed by living organisms or during the coring process. Moreover, within the past hundred years before 1950 CE, radiocarbon dates are not very precise chronometers, because changes in radiocarbon production rate have by coincidence roughly compensated for fixed decay rates. For these reasons, and unless otherwise indicated, we followed the common practice of assuming an age of 0 BP for the marine core tops.
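For a single core, the age model is essentially a linear interpolation between the calibrated control points. A minimal sketch (hypothetical depths and ages):

```python
import numpy as np

# Hypothetical sediment core: calibrated radiocarbon control points,
# with the core top assumed to be 0 BP as described above.
control_depth = np.array([0.0, 55.0, 130.0, 210.0, 300.0])       # cm
control_age   = np.array([0.0, 1950.0, 4300.0, 7200.0, 10900.0]) # calendar years BP

# Depths at which proxy temperatures were measured.
sample_depth = np.arange(0.0, 301.0, 10.0)

# Linear interpolation between dated levels gives each sample an age.
sample_age = np.interp(sample_depth, control_depth, control_age)

print(sample_age[:5])  # ages (yr BP) of the uppermost samples
```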
Q: Are the proxy records seasonally biased?
A: Maybe. We cannot exclude the possibility that some of the paleotemperature records are biased toward a particular season rather than recording true annual mean temperatures. For instance, high-latitude proxies based on short-lived plants or other organisms may record the temperature during the warmer and sunnier summer months when the organisms grow most rapidly. As stated in the paper, such an effect could impact our paleo-reconstruction. For example, the long-term cooling in our global paleotemperature reconstruction comes primarily from Northern Hemisphere high-latitude marine records, whereas tropical and Southern Hemisphere trends were considerably smaller. This northern cooling in the paleotemperature data may be a response to a long-term decline in summer insolation associated with variations in the earth’s orbit, and this implies that the paleotemperature proxies here may be biased to the summer season. A summer cooling trend through Holocene time, if driven by orbitally modulated seasonal insolation, might be partially canceled out by winter warming due to well-known orbitally driven rise in Northern-Hemisphere winter insolation through Holocene time. Summer-biased proxies would not record this averaging of the seasons. It is not currently possible to quantify this seasonal effect in the reconstructions. Qualitatively, however, we expect that an unbiased recorder of the annual average would show that the northern latitudes might not have cooled as much as seen in our reconstruction. This implies that the range of Holocene annual-average temperatures might have been smaller in the Northern Hemisphere than the proxy data suggest, making the observed historical temperature averages for 2000-2009 CE, obtained from instrumental records, even more unusual with respect to the full distribution of Holocene global-average temperatures.
Q: What do paleotemperature reconstructions show about the temperature of the last 100 years?
A: Here we elaborate on our short answer to this question above. We concluded in the published paper that “Without filling data gaps, our Standard5×5 reconstruction (Figure 1A) exhibits 0.6°C greater warming over the past ~60 yr B.P. (1890 to 1950 CE) than our equivalent infilled 5° × 5° area-weighted mean stack (Figure 1, C and D). However, considering the temporal resolution of our data set and the small number of records that cover this interval (Figure 1G), this difference is probably not robust.” This statement follows from multiple lines of evidence that are presented in the paper and the supplementary information: (1) the different methods that we tested for generating a reconstruction produce different results in this youngest interval, whereas before this interval, the different methods of calculating the stacks are nearly identical (Figure 1D), (2) the median resolution of the datasets (120 years) is too low to statistically resolve such an event, (3) the smoothing presented in the online supplement results in variations shorter than 300 yrs not being interpretable, and (4) the small number of datasets that extend into the 20th century (Figure 1G) is insufficient to reconstruct a statistically robust global signal, showing that there is a considerable reduction in the correlation of Monte Carlo reconstructions with a known (synthetic) input global signal when the number of data series in the reconstruction is this small (Figure S13).
Q: How did you create the Holocene paleotemperature stacks?
A: We followed these steps in creating the Holocene paleotemperature stacks:
1. Compiled 73 medium-to-high resolution calibrated proxy temperature records spanning much or all of the Holocene.
2. Calibrated all radiocarbon ages for consistency using the latest and most precise calibration software (Calib 6.0.1 using INTCAL09 (terrestrial) or MARINE09 (oceanic) and its protocol for the site-specific locations and materials) so that all radiocarbon-based records had a consistent chronology based on the best current information. This procedure updates previously published chronologies, which were based on a variety of now-obsolete and inconsistent calibration methods.
3. Where applicable, recalibrated paleotemperature proxy data based on alkenones and TEX86 using consistent calibration equations specific to each of the proxy types.
4. Used a Monte Carlo analysis to generate 1000 realizations of each proxy record, linearly interpolated to constant time spacing, perturbing them with analytical uncertainties in the age model and temperature estimates, including inflation of age uncertainties between dated intervals. This procedure results in an unbiased assessment of the impact of such uncertainties on the final composite (a minimal code sketch of steps 4-7 follows this list).
5. Referenced each proxy record realization as an anomaly relative to its mean value between 4500 and 5500 years Before Present (the common interval of overlap among all records; Before Present, or BP, is defined by standard practice as time before 1950 CE).
6. Averaged the first realization of each of the 73 records, and then the second realization of each, then the third, the fourth, and so on, to form 1000 realizations of the global or regional temperature stacks.
7. Derived the mean temperature and standard deviation from the 1000 simulations of the global temperature stack.
8. Repeated this procedure using several different area-weighting schemes and data subsets to test the sensitivity of the reconstruction to potential spatial and proxy biases in the dataset.
9. Mean-shifted the global temperature reconstructions to have the same average as the Mann et al. (2008) CRU-EIV temperature reconstruction over the interval 510-1450 years Before Present. Since the CRU-EIV reconstruction is referenced as temperature anomalies from the 1961-1990 CE instrumental mean global temperature, the Holocene reconstructions are now also effectively referenced as anomalies from the 1961-1990 CE mean.
10. Estimated how much higher frequency (decade-to-century scale) variability is plausibly missing from the Holocene reconstruction by calculating attenuation as a function of frequency in synthetic data processed with the Monte-Carlo stacking procedure, and by statistically comparing the amount of temperature variance the global stack contains as a function of frequency to the amount contained in the CRU-EIV reconstruction. Added this missing variability to the Holocene reconstruction as red noise.
11. Pooled all of the Holocene global temperature anomalies into a single histogram, showing the distribution of global temperature anomalies during the Holocene, including the decadal-to-century scale high-frequency variability that the Monte-Carlo procedure may have smoothed from the record (largely from the accounting for chronologic uncertainties).
12. Compared the histogram of Holocene paleotemperatures to the instrumental global temperature anomalies during the decades 1900-1909 CE and 2000-2009 CE. Determined the fraction of the Holocene temperature anomalies colder than 1900-1909 CE and 2000-2009 CE.
13. Compared global temperature projections for 2100 CE from the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, for various emission scenarios, with the Holocene temperature distribution.
14. Evaluated the impact of potential sources of uncertainty and smoothing in the Monte-Carlo procedure, as a guide for future experimental design to refine such analyses.
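As flagged at step 4, here is a toy sketch of steps 4-7 (synthetic records, simplified uncertainties, no area weighting), meant only to show the structure of the Monte Carlo stacking, not to reproduce the published procedure:

```python
import numpy as np

rng = np.random.default_rng(5)

def stack_realizations(records, n_real=200, age_sigma=150.0, temp_sigma=0.5,
                       ref=(4500.0, 5500.0)):
    """Toy version of steps 4-7: perturb ages and temperatures of each record,
    interpolate to a common time axis, reference each series to its 4500-5500 BP
    mean, and average across records within each realization."""
    grid = np.arange(0.0, 11300.0, 20.0)          # common time axis (yr BP)
    stacks = np.empty((n_real, grid.size))
    for i in range(n_real):
        series = []
        for age, temp in records:
            a = age + rng.normal(0.0, age_sigma, size=age.size)      # age-model uncertainty
            t = temp + rng.normal(0.0, temp_sigma, size=temp.size)   # calibration uncertainty
            order = np.argsort(a)
            ti = np.interp(grid, a[order], t[order])                 # constant time spacing
            ti -= ti[(grid >= ref[0]) & (grid <= ref[1])].mean()     # anomaly vs common interval
            series.append(ti)
        stacks[i] = np.mean(series, axis=0)                          # unweighted "global" stack
    return grid, stacks.mean(axis=0), stacks.std(axis=0)             # mean and 1-sigma spread

# Synthetic stand-ins for proxy records: (age in yr BP, temperature anomaly in K).
records = []
for _ in range(10):
    age = np.sort(rng.uniform(0.0, 11300.0, size=80))
    temp = 0.5 * np.cos(2.0 * np.pi * age / 23000.0) + rng.normal(0.0, 0.8, size=80)
    records.append((age, temp))

grid, mean_stack, sd_stack = stack_realizations(records)
```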
The link between extreme weather events, climate change, and national security is discussed in Extreme Realities, a new episode in PBS’ series Journey To Planet Earth hosted by Matt Damon.
The video features a number of extreme weather phenomena: hurricanes, tornadoes, floods, and wildfires. The discussion is about climate change and its consequences on the ground – or, in other words, how climate change may affect you.
It is important to ask what the story is behind the assertions made in the video. What scientific support is there for the link between such extremes and climate change?
Linking global warming to some of these extreme weather and climate phenomena has been tricky in the past. In some cases the record of past events may not be sufficiently complete to identify whether there is a dependence on the global state, mainly because many extremes are both rare and take place at irregular intervals. However, there has been substantial progress over recent years.
Global climate models may provide a tool for studying such links, but they are designed to provide a picture of general large-scale features such as the greenhouse effect and how the air moves around, rather than local extreme phenomena. For some types of extremes such as heat waves, they can nevertheless provide valuable insight (Hansen et al., 2012).
Heat waves and droughts often extend over space and time, and the global climate models may provide a good representation of droughts and heat waves if they manage to predict the frequency and duration of high-pressure systems and the soil moisture associated with these events.
The way the air flows is in some circumstances difficult to predict, for instance where the storms move (storm tracks) and changes in the large-scale atmospheric circulation. The reason for this is described in earlier posts on chaos and climate, and was first discussed by Lorenz.
The climate models manage to reproduce the Hadley cell, the El Niño Southern Oscillation, the jet streams, the trade winds, and the westerlies, but not tornadoes, derechos, and thunderstorms. They do not provide the details needed to describe the local climate and many extreme phenomena affecting society and ecosystems.
Our knowledge about extremes and climate is based on more evidence than just climate model results. One elegant example is the recent paper in PNAS by Petoukhov et al. (2013), based on mathematics, physics, and measured air flow.
From physics, we know that different conditions such as soil moisture and cloud microphysics affect weather extremes, albeit different types of extremes and on different scales. Convective storms and tornadoes, as opposed to heat waves, have in the past gone undetected and tend to pass below the radar of the global climate models.
New studies, such as Petoukhov et al. (2013), are emerging in the scientific literature that provide additional support for a link between climate change and a wider range of extreme phenomena. These are based on our physical understanding, observational data, new ways of analysing data, and attribution studies (Coumou and Rahmstorf, 2012).
We are also learning more about local convective storms, and a recent example is provided by the Swedish Rossby Centre, reporting that showery, convective rainfall intensifies faster in response to warmer temperatures than the more spatially extensive stratiform type (Berg et al., 2013).
The analysis of past observations has not always given a clear picture. So far, no clear connection has been found between global warming and mid-latitude storms (or wind speed), and efforts comparing different ways to analyse past storm observations have only recently been published (Neu et al., 2012). If we understand why some analytical methods give different results for past storms, then we will be in a better position to detect any dependence on the state of the global climate.
Extreme events are a natural part of the climate system, and a changing climate means that their frequencies and intensities may change. Detecting changes in the probabilities of rare events is statistically challenging. However, counting the recurrence of record-breaking extremes can provide an indication of whether the extreme values are changing (Benestad, 2008).
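To make the idea concrete, here is a rough illustration in Python (this is not the specific test described in Benestad, 2008, and the data are made up): in a stationary series of independent values, the expected number of record highs after n observations is 1 + 1/2 + ... + 1/n, so a clear excess of observed records hints at a changing distribution.

import numpy as np

def count_records(series):
    """Count record-breaking high values in a sequence."""
    running_max = -np.inf
    n_records = 0
    for value in series:
        if value > running_max:
            n_records += 1
            running_max = value
    return n_records

def expected_records_iid(n):
    """Expected number of records in n independent, identically distributed values."""
    return np.sum(1.0 / np.arange(1, n + 1))

rng = np.random.default_rng(0)
n_years = 100
stationary = rng.normal(size=n_years)               # synthetic series, no trend
warming = stationary + 0.02 * np.arange(n_years)    # same noise plus a trend

print("expected (iid):   ", round(expected_records_iid(n_years), 1))
print("stationary series:", count_records(stationary))
print("warming series:   ", count_records(warming))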
The consequences of climate change involve some known aspects as well as some that we cannot predict. Extreme phenomena take place in certain environmental conditions that are favourable for forming e.g. tornadoes, storms, or droughts. We also know that our models have their limitations, and that the range of possible outcomes can be fairly wide.
This incomplete knowledge is no different from that in any other field, as the future always seems to involve some surprises. Societies have traditionally tackled the absence of complete certainty by adopting various forms of risk management, e.g. fire brigades, police, defence, hospitals, and so on.
Better safe than sorry. Here, there are some known connections of concern. The bottom line is that we need pragmatic ways of dealing with issues that may have devastating effects on people or societies – and this is the common thread in ‘Extreme Realities’.
- J. Hansen, M. Sato, and R. Ruedy, "Perception of climate change", Proceedings of the National Academy of Sciences, vol. 109, pp. E2415-E2423, 2012. http://dx.doi.org/10.1073/pnas.1205276109
- D. Coumou, and S. Rahmstorf, "A decade of weather extremes", Nature Climate Change, 2012. http://dx.doi.org/10.1038/nclimate1452
- P. Berg, C. Moseley, and J.O. Haerter, "Strong increase in convective precipitation in response to higher temperatures", Nature Geoscience, vol. 6, pp. 181-185, 2013. http://dx.doi.org/10.1038/ngeo1731
- U. Neu, M.G. Akperov, N. Bellenbaum, R. Benestad, R. Blender, R. Caballero, A. Cocozza, H.F. Dacre, Y. Feng, K. Fraedrich, J. Grieger, S. Gulev, J. Hanley, T. Hewson, M. Inatsu, K. Keay, S.F. Kew, I. Kindem, G.C. Leckebusch, M.L.R. Liberato, P. Lionello, I.I. Mokhov, J.G. Pinto, C.C. Raible, M. Reale, I. Rudeva, M. Schuster, I. Simmonds, M. Sinclair, M. Sprenger, N.D. Tilinina, I.F. Trigo, S. Ulbrich, U. Ulbrich, X.L. Wang, and H. Wernli, "IMILAST – a community effort to intercompare extratropical cyclone detection and tracking algorithms: assessing method-related uncertainties", Bulletin of the American Meteorological Society, 2012. http://dx.doi.org/10.1175/BAMS-D-11-00154.1
- R.E. Benestad, "A Simple Test for Changes in Statistical Distributions", Eos, Transactions American Geophysical Union, vol. 89, pp. 389, 2008. http://dx.doi.org/10.1029/2008EO410002
A new open thread – hopefully for some new climate science topics…
Guest Commentary by Zeke Hausfather and Matthew Menne (NOAA)
The impact of urban heat islands (UHI) on temperature trends has long been a contentious area, with some studies finding no effect of urbanization on large-scale temperature trends and others finding large effects in certain regions. The issue has reached particular prominence on the blogs, with some claiming that the majority of the warming in the U.S. (or even the world) over the past century can be attributed to urbanization. We therefore set out to undertake a thorough examination of UHI in the Conterminous United States (CONUS), examining multiple ‘urban’ proxies, different methods of analysis, and temperature series with differing degrees of homogenization and urban-specific corrections (e.g. the GISTEMP nightlight method; Hansen et al, 2010). The paper reporting our results has just been published in the Journal of Geophysical Research.
In our paper (Hausfather et al, 2013) (pdf, alt. site), we found that urban-correlated biases account for between 14 and 21% of the rise in unadjusted minimum temperatures since 1895 and 6 to 9% since 1960. Homogenization of the monthly temperature data via NCDC’s Pairwise Homogenization Algorithm (PHA) removes the majority of this apparent urban bias, especially over the last 50 to 80 years. Moreover, results from the PHA using all available station data and using only data from stations classified as rural are broadly consistent, which provides strong evidence that the reduction of the urban warming signal by homogenization is a consequence of the real elimination of an urban warming bias present in the raw data rather than a consequence of simply forcing agreement between urban and rural station trends through a ‘spreading’ of the urban signal to series from nearby stations.
Homogenization is a somewhat complex term for a conceptually simple idea. Climate variations tend not to be purely local, so changes in temperatures over long time spans (longer than a month) will be highly spatially correlated. Any major changes over time in individual stations that are not reflected in nearby stations are likely due to local (rather than regional) effects such as station moves, instrument changes, time of observation changes, or even such things as a tree growing over the thermometer stand. By removing any artifacts of individual station records not shared with other stations in their region, we can get a more accurate estimate of regional climate changes.
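As a deliberately simplified illustration of that idea (this toy sketch is not NCDC’s Pairwise Homogenization Algorithm, which is far more sophisticated), one can detect a step change in a station record by looking at the difference between the target station and a composite of its neighbours, in which the shared climate signal largely cancels:

import numpy as np

def find_breakpoint(target, neighbour_mean):
    """Locate the single step change that best explains the target-minus-
    neighbours difference series (a crude stand-in for breakpoint detection)."""
    diff = target - neighbour_mean            # shared regional signal cancels
    best_k, best_offset = None, 0.0
    for k in range(10, len(diff) - 10):       # keep away from the series ends
        offset = diff[k:].mean() - diff[:k].mean()
        if abs(offset) > abs(best_offset):
            best_k, best_offset = k, offset
    return best_k, best_offset

rng = np.random.default_rng(1)
n_months = 120
regional = np.cumsum(rng.normal(0.0, 0.1, n_months))         # common signal
neighbours = regional + rng.normal(0.0, 0.2, (5, n_months))  # 5 nearby stations
target = regional + rng.normal(0.0, 0.2, n_months)
target[60:] += 1.0                            # artificial +1 K station move

k, offset = find_breakpoint(target, neighbours.mean(axis=0))
adjusted = target.copy()
adjusted[k:] -= offset                        # remove the local artifact
print(f"break near index {k} with offset {offset:+.2f} K")

Real algorithms must, of course, handle multiple breaks, missing data, and the possibility that the neighbours themselves contain undocumented changes.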
The conterminous United States (CONUS) has some of the most dense, publicly available digital surface temperature data in the world with over 7000 Cooperative Observer (Coop) stations reporting daily maximum and minimum temperature. This provides a unique resource to compare subsets of stations with various characteristics (e.g. urban form, sensor types, etc.) without suffering bias due to differing spatial coverage, a factor that often complicates global-scale studies of UHI. The Coop Program also maintains accurate station location data (roughly 30 meter accuracy), which allows for the accurate indexing of Coop stations against high-resolution spatial datasets that are useful for identifying urban and rural areas.
We use four datasets to classify stations as urban or rural, all with 1 km spatial resolution:
- Satellite nightlights – how bright a specific location is at night as observed from space.
- Impervious surface area (ISA) – what percent of the area is covered in concrete or similar materials.
- GRUMP – an urban boundary database using administrative borders and other factors (including nightlights) produced by Columbia University.
- Population growth – 1930 to 2000 population growth data interpolated to kilometer resolution using U.S. Census data.
We also used two different methods to compare urban and rural stations: a station pairing method, where we looked at all possible permutations of urban and rural stations within 100 miles (160 km) of each other for each urban proxy, and a spatial gridding method, where we used a grid-based approach to calculate CONUS temperatures separately using only urban and only rural stations and compared the results. For the station pairing method, we imposed the additional restriction that both members of a pair must have the same instrument type, to avoid conflating urban-correlated differences in the frequency of the transition from liquid-in-glass thermometers to electronic MMTS instruments with actual urban-related warming.
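To make the pairing logic concrete, here is a minimal sketch (the data layout and function names are illustrative only; the actual analysis also matches instrument types and treats the temperature series much more carefully):

import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2.0) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2)
    return 6371.0 * 2.0 * np.arcsin(np.sqrt(a))

def trend_per_decade(years, temps):
    """Ordinary least-squares trend in units per decade."""
    return 10.0 * np.polyfit(years, temps, 1)[0]

def urban_rural_pair_differences(urban, rural, years, max_km=160.0):
    """urban and rural are lists of dicts with 'lat', 'lon' and 'temps' keys
    (a hypothetical layout used only for this sketch). Returns the trend
    difference (urban minus rural) for every pair within max_km."""
    diffs = []
    for u in urban:
        for r in rural:
            if haversine_km(u["lat"], u["lon"], r["lat"], r["lon"]) <= max_km:
                diffs.append(trend_per_decade(years, u["temps"])
                             - trend_per_decade(years, r["temps"]))
    return np.array(diffs)   # a mean well above zero suggests urban-correlated bias

In spirit, comparing the distribution of such pair differences for raw versus homogenized data is what the results described below summarise.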
Finally, we examined six different versions of U.S. temperature data separately for both maximum and minimum temperatures:
- Raw station data with no adjustments
- Station data with only time-of-observation bias (TOBs) adjustments
- Station data with both TOBs and full PHA homogenization
- Station data with TOBs, full PHA homogenization, and GISTEMP satellite nightlight-based corrections
- Station data with both TOBs and rural-only PHA homogenization.
- Station data with both TOBs and urban-only PHA homogenization.
We created estimates of urban-rural differences for each of the four urban proxies, two analysis methods, six temperature datasets, and both maximum and minimum temperatures, for a total of 96 different combinations.
As shown in Figure 2 from our paper, there are significant differences in the warming rate of urban and rural stations in the raw (and TOBs-adjusted) data that are largely eliminated by homogenization, even when that homogenization is limited to using only rural stations (to avoid the possibility of ‘spreading’ the urban signal).
This can also be seen in the figure below (from our paper’s supplementary information), which shows urban-rural differences over the 1895-2010 period using the spatial gridding method:
We conclude that homogenization does a good job at removing urban-correlated biases subsequent to 1930. Prior to that date, there are significantly fewer stations available in the network with which to detect breakpoints or localized trend biases, and homogenization does less well (though the newly released USHCN version 2.5 does substantially better than version 2.0). In general, there might be a need for additional urban-specific adjustments like those performed in NASA’s GISTEMP for areas and/or periods of time in which station density is sparse, but they are rather unnecessary for the post-1930s CONUS data. The simple take-away is that while UHI and other urban-correlated biases are real (and can have a big effect), current methods of detecting and correcting localized breakpoints are generally effective in removing that bias. Blog claims that UHI explains any substantial fraction of the recent warming in the US are just not supported by the data.
In case people are interested in playing around with our data or code and replicating our approach, all relevant materials to conduct the analysis (as well as versions of the code in both STATA and Java) are available on the NOAA NCDC FTP server. More detail on the specific methods used can be found in our paper, and our supplementary materials contain more detailed tests to ensure that homogenization was properly removing urban-correlated biases. This paper is also somewhat interesting as it arose out of a blog post back in 2010, and represents a productive collaboration between a number of climate bloggers (Troy Masters, Ron Broberg, David Jones, and Zeke) and climate scientists at NCDC (Matt Menne and Claude Williams).
- D.E. Parker, "Climate: Large-scale warming is not urban", Nature, vol. 432, pp. 290-290, 2004. http://dx.doi.org/10.1038/432290a
- X. Yang, Y. Hou, and B. Chen, "Observed surface warming induced by urbanization in east China", Journal of Geophysical Research, vol. 116, 2011. http://dx.doi.org/10.1029/2010JD015452
- J. Hansen, R. Ruedy, M. Sato, and K. Lo, "Global surface temperature change", Reviews of Geophysics, vol. 48, 2010. http://dx.doi.org/10.1029/2010RG000345
- Z. Hausfather, M.J. Menne, C.N. Williams, T. Masters, R. Broberg, and D. Jones, "Quantifying the effect of urbanization on U.S. historical climatology network temperature records", Journal of Geophysical Research, 2012. http://dx.doi.org/10.1029/2012JD018509
Time for the 2012 updates!
As has become a habit (2009, 2010, 2011), here is a brief overview and update of some of the most discussed model/observation comparisons, updated to include 2012. I include comparisons of surface temperatures, sea ice and ocean heat content to the CMIP3 and Hansen et al (1988) simulations.
First, a graph showing the annual mean anomalies from the CMIP3 models plotted against the surface temperature records from the HadCRUT4, NCDC and GISTEMP products (it really doesn’t matter which). Everything has been baselined to 1980-1999 (as in the 2007 IPCC report) and the envelope in grey encloses 95% of the model runs.
Correction (02/11/12): Graph updated using calendar year mean HadCRUT4 data instead of meteorological year mean.
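For those who want to reproduce this kind of figure, a minimal sketch of the baselining and envelope construction is given below; the variable names are illustrative, and the use of the 2.5th/97.5th percentiles for the 95% envelope is an assumption rather than necessarily the exact convention used for the plot.

import numpy as np

def to_anomalies(series, years, ref=(1980, 1999)):
    """Express annual means as anomalies from the reference-period mean."""
    mask = (years >= ref[0]) & (years <= ref[1])
    return series - series[..., mask].mean(axis=-1, keepdims=True)

def model_envelope(model_runs, years):
    """model_runs: array of shape (n_runs, n_years) of annual-mean temperatures.
    Returns the bounds enclosing 95% of the runs after baselining."""
    anoms = to_anomalies(model_runs, years)
    return (np.percentile(anoms, 2.5, axis=0),
            np.percentile(anoms, 97.5, axis=0))

# obs_anoms = to_anomalies(hadcrut4_annual, years)   # same baseline for the obs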
The La Niña event that persisted into 2012 (as with 2011) produced a cooler year in a global sense than 2010, although there were extensive regional warm extremes (particularly in the US). Differences between the observational records are less than they have been in previous years mainly because of the upgrade from HadCRUT3 to HadCRUT4 which has more high latitude coverage. The differences that remain are mostly related to interpolations in the Arctic. Checking up on the predictions from last year, I forecast that 2012 would be warmer than 2011 and so a top ten year, but still cooler than 2010 (because of the remnant La Niña). This was true looking at all indices (GISTEMP has 2012 at #9, HadCRUT4, #10, and NCDC, #10).
In all three indices, this was the 2nd warmest year (after 2006) to start off (DJF) with a La Niña (previous La Niña years by this index were 2008, 2006, 2001, 2000 and 1999, using a 5-month minimum for a specific event). Note that 2006 has recently been reclassified as a La Niña year in the latest version of this index (it wasn’t one last year!); under the previous version, 2012 would have been the warmest La Niña year.
Given current near ENSO-neutral conditions, 2013 will almost certainly be a warmer year than 2012, so again another top 10 year. It is conceivable that it could be a record breaker (the Met Office has forecast that this is likely, as has John Nielsen-Gammon), but I am more wary, and predict that it is only likely to be a top 5 year (i.e. > 50% probability). I think a new record will have to wait for a true El Niño year – but note this is forecasting by eye, rather than statistics.
People sometimes claim that “no models” can match the short term trends seen in the data. This is still not true. For instance, the range of trends in the models for the cherry-picked period 1998-2012 goes from -0.09 to 0.46ºC/dec, with MRI-CGCM (run3 and run5) the laggards in the pack, running colder than the observations (0.04–0.07 ± 0.1ºC/dec) – but as discussed before, this has very little to do with anything.
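The kind of calculation behind that range of short-period model trends is straightforward; a sketch (with illustrative array names, not the actual data files) is:

import numpy as np

def trends_over_period(model_runs, years, start=1998, end=2012):
    """OLS trend (ºC per decade) of each model run over [start, end]."""
    mask = (years >= start) & (years <= end)
    t = years[mask]
    return np.array([10.0 * np.polyfit(t, run[mask], 1)[0] for run in model_runs])

# trends = trends_over_period(cmip3_annual_tas, years)
# print(trends.min(), trends.max())   # the spread across runs for that period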
In interpreting this information, please note the following (mostly repeated from previous years):
- Short term (15 years or less) trends in global temperature are not usefully predictable as a function of current forcings. This means you can’t use such short periods to ‘prove’ that global warming has or hasn’t stopped, or that we are really cooling despite this being the warmest decade in centuries. We discussed this more extensively here.
- The CMIP3 model simulations were an ‘ensemble of opportunity’ and vary substantially among themselves in the forcings imposed, the magnitude of the internal variability and, of course, the sensitivity. Thus while they do span a large range of possible situations, the average of these simulations is not ‘truth’.
- The model simulations use observed forcings up until 2000 (or 2003 in a couple of cases) and use a business-as-usual scenario subsequently (A1B). The models are not tuned to temperature trends pre-2000.
- Differences between the temperature anomaly products are related to: different selections of input data, different methods for assessing urban heating effects, and (most importantly) different methodologies for estimating temperatures in data-poor regions like the Arctic. GISTEMP assumes that the Arctic is warming as fast as the stations around the Arctic, while HadCRUT4 and NCDC assume the Arctic is warming as fast as the global mean. The former assumption is more in line with the sea ice results and independent measures from buoys and the reanalysis products.
- Model-data comparisons are best when the metric being compared is calculated the same way in both the models and data. In the comparisons here, that isn’t quite true (mainly related to spatial coverage), and so this adds a little extra structural uncertainty to any conclusions one might draw.
Given the importance of ENSO to the year-to-year variability, removing this effect can help reveal the underlying trends. The update to the Foster and Rahmstorf (2011) study using the latest data (courtesy of Tamino) (and a couple of minor changes to procedure) shows the same continuing trend:
Similarly, Rahmstorf et al. (2012) showed that these adjusted data agree well with the projections of the IPCC 3rd (2001) and 4th (2007) assessment reports.
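For readers who want to experiment with this kind of adjustment, a bare-bones sketch is given below. The full Foster and Rahmstorf procedure also includes volcanic aerosol and solar terms and optimises the lags; this simplified version only removes a lagged ENSO index, and all names are illustrative.

import numpy as np

def remove_enso(temps, enso_index, lag=4):
    """Regress monthly temperatures on a linear trend plus a lagged ENSO index,
    then subtract the fitted ENSO contribution (a much-simplified version of
    the multiple regression used by Foster and Rahmstorf, 2011)."""
    n = len(temps)
    lagged = np.empty(n)
    lagged[lag:] = enso_index[:n - lag]       # ENSO leads temperature by `lag` months
    lagged[:lag] = enso_index[0]              # crude fill at the start of the record
    X = np.column_stack([np.ones(n), np.arange(n), lagged])
    coeffs, *_ = np.linalg.lstsq(X, temps, rcond=None)
    return temps - coeffs[2] * lagged         # ENSO-adjusted temperature series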
Ocean Heat Content
Figure 3 compares the upper level (top 700m) ocean heat content (OHC) changes in the models with the latest data from NODC and PMEL (Lyman et al., 2010). I only plot the models up to 2003 (since I don’t have the later output). All curves are baselined to the period 1975-1989.
This comparison is less than ideal for a number of reasons. It doesn’t show the structural uncertainty in the models (different models have different changes, and the other GISS model from CMIP3 (GISS-EH) had slightly less heat uptake than the model shown here). Neither can we assess the importance of the apparent reduction in trend in top 700m OHC growth in the 2000s (since we don’t have a long time series of the deeper OHC numbers). If the models were to be simply extrapolated, they would lie above the observations, but given the slight reduction in solar, uncertain changes in aerosols or deeper OHC over this period, I am no longer comfortable with such a simple extrapolation. Analysis of the CMIP5 models (which will come at some point!) will be a better apples-to-apples comparison since they go up to 2012 with ‘observed’ forcings. Nonetheless, the long term trends in the models match those in the data, but the short-term fluctuations are both noisy and imprecise.
Summer sea ice changes
Sea ice changes this year were again very dramatic, with the Arctic September minimum destroying the previous records in all the data products. Updating the Stroeve et al. (2007)(pdf) analysis (courtesy of Marika Holland) using the NSIDC data we can see that the Arctic continues to melt faster than any of the AR4/CMIP3 models predicted. This is no longer so true for the CMIP5 models, but those comparisons will need to wait for another day (Stroeve et al, 2012).
Hansen et al, 1988
Finally, we update the Hansen et al (1988) (doi) comparisons. Note that the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the best estimate (~3ºC) and as stated in previous years, the actual forcings that occurred are not the same as those used in the different scenarios. We noted in 2007, that Scenario B was running a little high compared with the forcings growth (by about 10%) using estimated forcings up to 2003 (Scenario A was significantly higher, and Scenario C was lower), and we see no need to amend that conclusion now.
Correction (02/11/12): Graph updated using calendar year mean HadCRUT4 data instead of meteorological year mean.
For the period 1984 to 2012 (1984 being when these projections started), scenario B has a trend of 0.29+/-0.04ºC/dec (95% uncertainties, no correction for auto-correlation). For GISTEMP and HadCRUT4, the trends are 0.18 and 0.17+/-0.04ºC/dec respectively. For reference, the trends in the CMIP3 models for the same period have a range of 0.21+/-0.16ºC/dec (95%).
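These trend numbers come from a straightforward least-squares fit; a sketch of the calculation (no correction for auto-correlation, as noted above, and with illustrative input names) is:

import numpy as np

def trend_with_95ci(years, temps):
    """Return (trend, 95% half-width) in ºC per decade from an OLS fit,
    with no correction for auto-correlation."""
    n = len(years)
    x = years - years.mean()
    slope = np.sum(x * (temps - temps.mean())) / np.sum(x ** 2)
    residuals = (temps - temps.mean()) - slope * x
    se_slope = np.sqrt(np.sum(residuals ** 2) / (n - 2) / np.sum(x ** 2))
    return 10.0 * slope, 10.0 * 1.96 * se_slope

# e.g. trend_with_95ci(np.arange(1984, 2013), gistemp_annual_1984_2012)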
As discussed in Hargreaves (2010), while this simulation was not perfect, it has shown skill in that it has out-performed any reasonable naive hypothesis that people put forward in 1988 (the most obvious being a forecast of no-change). However, concluding much more than this requires an assessment of how far off the forcings were in scenario B. That needs a good estimate of the aerosol trends, and these remain uncertain. This should be explored more thoroughly, and I will try and get to that at some point.
The conclusion is the same as in each of the past few years; the models are on the low side of some changes, and on the high side of others, but despite short-term ups and downs, global warming continues much as predicted.
- G. Foster, and S. Rahmstorf, "Global temperature evolution 1979–2010", Environmental Research Letters, vol. 6, pp. 044022, 2011. http://dx.doi.org/10.1088/1748-9326/6/4/044022
- S. Rahmstorf, G. Foster, and A. Cazenave, "Comparing climate projections to observations up to 2011", Environmental Research Letters, vol. 7, pp. 044035, 2012. http://dx.doi.org/10.1088/1748-9326/7/4/044035
- J.M. Lyman, S.A. Good, V.V. Gouretski, M. Ishii, G.C. Johnson, M.D. Palmer, D.M. Smith, and J.K. Willis, "Robust warming of the global upper ocean", Nature, vol. 465, pp. 334-337, 2010. http://dx.doi.org/10.1038/nature09043
- J. Stroeve, M.M. Holland, W. Meier, T. Scambos, and M. Serreze, "Arctic sea ice decline: Faster than forecast", Geophysical Research Letters, vol. 34, 2007. http://dx.doi.org/10.1029/2007GL029703
- J.C. Stroeve, V. Kattsov, A. Barrett, M. Serreze, T. Pavlova, M. Holland, and W.N. Meier, "Trends in Arctic sea ice extent from CMIP5, CMIP3 and observations", Geophysical Research Letters, vol. 39, 2012. http://dx.doi.org/10.1029/2012GL052676
- J. Hansen, I. Fung, A. Lacis, D. Rind, S. Lebedeff, R. Ruedy, G. Russell, and P. Stone, "Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model", Journal of Geophysical Research, vol. 93, pp. 9341, 1988. http://dx.doi.org/10.1029/JD093iD08p09341
- J.C. Hargreaves, "Skill and uncertainty in climate models", Wiley Interdisciplinary Reviews: Climate Change, vol. 1, pp. 556-564, 2010. http://dx.doi.org/10.1002/wcc.58
This month’s open thread on climate science…
Last July (2012), I heard from a colleague working at the edge of the Greenland ice sheet, and from another colleague working up at the Summit. Both were independently writing to report the exceptional conditions they were witnessing. The first was that the bridge over the Watson River by the town of Kangerlussuaq, on the west coast of Greenland, was being breached by the high volumes of meltwater coming down from the ice sheet. The second was that a new melt layer was forming at the highest point of the ice sheet, where it very rarely melts.
A front loader being swept off a bridge into the Watson River, Kangerlussuaq, Greenland, in July 2012. Fortunately, nobody was in it at the time. Photo: K. Choquette
I’ve been remiss in not writing about these observations until now. I’m prompted to do so by the publication in Nature today (January 23, 2013) of another new finding about Greenland melt. This paper isn’t about the modern climate, but about the climate of the last interglacial period. It has relevance to the modern situation though, a point to which I’ll return at the end of this post.
The new paper in Nature, Eemian interglacial reconstructed from a Greenland folded ice core (NEEM Community Members, 2013), is the culmination of many years of work in Greenland led by Dorthe Dahl-Jensen and her team from the Centre for Ice and Climate at the Niels Bohr Institute in Copenhagen, with substantial involvement of scientists from around Europe, the U.S., China, Japan, and Canada.*
The big news is that this group has managed to obtain and use the information in ice from the Eemian — the peak of the last interglacial period, about 125,000 years ago — in Greenland. Getting usable Eemian ice from Greenland has been a Holy Grail of ice core research for the better part of two decades. We thought, back in the early 1990s, that we had obtained Eemian ice in the GISP2 and GRIP ice cores drilled near the ice sheet summit. It turned out that the lowermost part — anything older than 100,000 years — was messed up by ice flow, making it impossible to learn anything much about climate from it. The Danish group then led a project further to the north at “North GRIP” that, based on radar-echo-sounding data, should have had an intact Eemian period. But the temperature at the base at NGRIP was higher than expected, and the Eemian ice had melted away.
The latest attempt was the “North Eemian” (NEEM) site in northwest Greenland. Here too, the initial results were disappointing. As at GISP2 and GRIP, there are folds in the ice, and some of the layers containing ice of Eemian age are repeated several times. However, in this case the folds are very large, and there are continuous sections that are not scrambled; they are just a bit out of order. It took significant work, but the group has unfolded the data from the folded layers, and it is now evident that the goal of the NEEM project (obtaining an interpretable section of Eemian ice) has been achieved after all.
The findings are spectacular. In the Eemian ice, there is clear evidence of significant melting of what would then have been snow at the surface. The amount of air trapped in the ice undergoes rapid fluctuations, because ice that melts and then refreezes generally winds up with fewer air bubbles than the original porous snow. There are also strong fluctuations in soluble gases such as N2O, whereas variations in the oxygen isotopic composition, both of the molecular oxygen (O2) in the air and of the ice (H2O) itself, are small. The isotopic composition of the O2 can be matched to that in undisturbed ice from the same time period in ice cores from Antarctica, providing a way to date the ice and showing unambiguously that undisturbed layers are preserved from the peak of the Eemian period, about 125,000 years ago.
Qualitatively, the evidence for melt in the NEEM Eemian ice shows that it was warm at the time. Obviously. But more interestingly, the last year of the NEEM project was 2012, and the researchers were able to witness first hand what the formation of melt layers means at NEEM in terms of the ambient conditions. In July 2012, the NEEM site saw above-freezing temperatures for six consecutive days (10 to 15 July), with rain events on 11 and 13 July. When the water refroze, it formed several distinct, clear layers of ice (which we call “melt layers”), each about 1 cm thick, between 5 and about 60 cm down in the snow. This is a rare event. It was so warm over Greenland in that week that a significant melt layer also formed up at the Summit; in fact, the entire surface of the ice sheet was melting.
That hasn’t happened, not once, in the entire satellite record (see Jason Box’s excellent blog, meltfactor.org, and Marco Tedesco’s paper for more on this). In fact, examination of melt layer records from ice cores at Summit shows that the 2012 event was the most significant Greenland melt event since at least the late 19th century, when a similar layer last formed. If you drill about 100 m down into the ice and recover an ice core, you invariably find that earlier layer, shown in the photo below (the bright line at which the person’s thumb is pointing).
Greenland ice core from ~80 m depth. Photo: E. Steig
According to a recent paper on the 2012 melt by Nghiem et al. in Geophysical Research Letters, the 19th century event dates to 1889. One has to go back about 700 years to find the next such event, and overall these are roughly once-in-250-year events over the last 4000 years. Prior to that, they occurred more frequently: about once per century during the mid-Holocene “climatic optimum”, when summers in Greenland were on average much warmer than at present, owing to the peak in Northern Hemisphere insolation caused by changes in the Earth’s orbit (Milankovitch forcing). Even during the mid-Holocene, though, there is no evidence from the ice cores that there was sufficient melting to create anomalies in the air content and trace gas concentrations as strong as those observed in the Eemian ice at NEEM. Thus, it was even warmer during the Eemian than during the mid-Holocene.
How much warmer was it? Jason Box estimates from satellite data that the temperature in July 2012 at high elevations over the Greenland ice sheet was a full 10°C (18°F) warmer than the daily average for the 2000s; one standard deviation is about 3°C, so this is roughly a 3-sigma event. If, as the NEEM researchers estimate, the same sort of temperatures were required to produce the Eemian melt layers, it suggests that during the Eemian it was also about 10°C warmer than present in the Greenland summer, and not just once per century but much more often, perhaps every summer. I’m interpreting a bit here: the NEEM group doesn’t actually use the presence of melt layers per se to estimate the summer temperature; rather, they use the observation that the δ18O values of the ice at this time are >>-33 ‰. δ18O is a proxy for temperature in Greenland ice, and the NEEM paper uses this to estimate that the temperature must have been about 8°C (+/-4°C) warmer than present. Not coincidentally, the δ18O values of the snow and rain that fell in July 2012 were also >-33 ‰.
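As a quick back-of-envelope check of the “3-sigma” statement, under a purely Gaussian assumption (which the real distribution of summer temperatures need not satisfy exactly):

from math import erfc, sqrt

anomaly = 10.0   # July 2012 warmth relative to the 2000s average (°C), per Box
sigma = 3.0      # quoted standard deviation (°C)
z = anomaly / sigma
p = 0.5 * erfc(z / sqrt(2.0))   # one-sided Gaussian exceedance probability
print(f"z = {z:.1f} sigma; Gaussian chance per year roughly 1 in {1.0 / p:,.0f}")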
None of this should be interpreted to suggest that we are in “Eemian-like” conditions just yet. After all, there has only been one Eemian-like melt event observed in modern times, and the extremely warm summer of 2012 clearly involved anomalous weather conditions, namely a particular pattern of pressure anomalies over the northern high latitudes (e.g. Tedesco et al., 2012) that may also partly account for the exceptionally low sea ice cover that year. The 2012 event, however, gives us a flavor of what the future is likely to bring. It will be very interesting to watch the satellite imagery over Greenland in the next decade and beyond.
What are the implications for the Greenland ice sheet? Possibly, that it is less sensitive to climate warming than some of the higher-end estimates suggest (e.g. Cuffey and Marshall (2000) suggested Greenland could have contributed more than ~4 m to Eemian sea level), though very much in line with more recent estimates (e.g. Pfeffer et al. (2008)). The estimated temperature change of ~8°C is quite a bit warmer than most previous estimates, which are more in the range of 2-5°C (though the uncertainty estimates clearly overlap). Thus, whatever the contribution of mass loss from the Greenland ice sheet to the huge (4-8 m) rise in sea level during the Eemian, it occurred under very strong temperature forcing.
The presence of Eemian ice at the NEEM site itself places constraints on the ice sheet configuration. It obviously rules out any configuration in which this area of the Greenland ice sheet was gone. That typically occurs in ice-sheet model simulations that involve more than about 2 m of sea-level-equivalent mass loss. Thus, the NEEM ice core record suggests both that temperatures may have been warmer than once thought and that the ice sheet mass loss was unlikely to have exceeded about 2 m of sea level equivalent.
The new data from the NEEM ice core may also point to a lower limit on the magnitude of the Eemian sea level contribution from Greenland. Evidently, it can become very warm indeed over Greenland, much warmer than most previous modeling exercises have considered. Combined climate/ice sheet model estimates in which the Greenland surface temperature was as high during the Eemian as indicated by the NEEM ice core record suggest that a loss of less than about 1 m sea level equivalent is very unlikely (e.g. Robinson et al., 2011).
There are caveats of course: the new data are just from one site, and estimates of the total ice loss don’t provide information about the rate at which that loss occurred. Still, the new data show that Greenland, while evidently contributing significantly to Eemian sea level, cannot have contributed more than half of the total, despite the strong forcing. This once again points to Antarctica as the major source of Eemian sea level rise. There are only about 3 m of sea level rise available from West Antarctica, and it remains unclear whether all of West Antarctica collapsed. On that subject, look for some more exciting ice core news in the near future, from a core at Roosevelt Island drilled by a New Zealand-led team.
Note: There is a nice summary of the implications of the paper on the Nature web site, though note that I’m pretty sure I didn’t say (or didn’t mean to say!) that “We are in a similar climate regime as the world was in the early Eemian,” as I am quoted. A key difference is that CO2 during the Eemian was not as high as it is today, but the insolation forcing was much higher. So the analogy only goes so far. See the paper by van de Berg et al., Significant contribution of insolation to Eemian melting of the Greenland ice sheet, for an in-depth discussion of these differences.
Update: The http://www.greenlandmelting.com/ website looks like a great resource for those interested in following the modern melt progression in Greenland.
*There was also significant logistical support from the U.S. Air National Guard, who contract with the National Science Foundation to supply C130 Transport plane support for these kinds of projects.
- D. Dahl-Jensen, M.R. Albert, A. Aldahan, N. Azuma, D. Balslev-Clausen, M. Baumgartner, A. Berggren, M. Bigler, T. Binder, T. Blunier, J.C. Bourgeois, E.J. Brook, S.L. Buchardt, C. Buizert, E. Capron, J. Chappellaz, J. Chung, H.B. Clausen, I. Cvijanovic, S.M. Davies, P. Ditlevsen, O. Eicher, H. Fischer, D.A. Fisher, L.G. Fleet, G. Gfeller, V. Gkinis, S. Gogineni, K. Goto-Azuma, A. Grinsted, H. Gudlaugsdottir, M. Guillevic, S.B. Hansen, M. Hansson, M. Hirabayashi, S. Hong, S.D. Hur, P. Huybrechts, C.S. Hvidberg, Y. Iizuka, T. Jenk, S.J. Johnsen, T.R. Jones, J. Jouzel, N.B. Karlsson, K. Kawamura, K. Keegan, E. Kettner, S. Kipfstuhl, H.A. Kjær, M. Koutnik, T. Kuramoto, P. Köhler, T. Laepple, A. Landais, P.L. Langen, L.B. Larsen, D. Leuenberger, M. Leuenberger, C. Leuschen, J. Li, V. Lipenkov, P. Martinerie, O.J. Maselli, V. Masson-Delmotte, J.R. McConnell, H. Miller, O. Mini, A. Miyamoto, M. Montagnat-Rentier, R. Mulvaney, R. Muscheler, A.J. Orsi, J. Paden, C. Panton, F. Pattyn, J. Petit, K. Pol, T. Popp, G. Possnert, F. Prié, M. Prokopiou, A. Quiquet, S.O. Rasmussen, D. Raynaud, J. Ren, C. Reutenauer, C. Ritz, T. Röckmann, J.L. Rosen, M. Rubino, O. Rybak, D. Samyn, C.J. Sapart, A. Schilt, A.M.Z. Schmidt, J. Schwander, S. Schüpbach, I. Seierstad, J.P. Severinghaus, S. Sheldon, S.B. Simonsen, J. Sjolte, A.M. Solgaard, T. Sowers, P. Sperlich, H.C. Steen-Larsen, K. Steffen, J.P. Steffensen, D. Steinhage, T.F. Stocker, C. Stowasser, A.S. Sturevik, W.T. Sturges, A. Sveinbjörnsdottir, A. Svensson, J. Tison, J. Uetake, P. Vallelonga, R.S.W. van de Wal, G. van der Wel, B.H. Vaughn, B. Vinther, E. Waddington, A. Wegner, I. Weikusat, J.W.C. White, F. Wilhelms, M. Winstrup, E. Witrant, E.W. Wolff, C. Xiao, and J. Zheng, "Eemian interglacial reconstructed from a Greenland folded ice core", Nature, vol. 493, pp. 489-494, 2013. http://dx.doi.org/10.1038/nature11789
- M. Tedesco, X. Fettweis, T. Mote, J. Wahr, P. Alexander, J. Box, and B. Wouters, "Evidence and analysis of 2012 Greenland records from spaceborne observations, a regional climate model and reanalysis data", The Cryosphere Discussions, vol. 6, pp. 4939-4976, 2012. http://dx.doi.org/10.5194/tcd-6-4939-2012
- K.M. Cuffey, and S.J. Marshall, "Substantial contribution to sea-level rise during the last interglacial from the Greenland ice sheet", Nature, vol. 404, pp. 591-594, 2000. http://dx.doi.org/10.1038/35007053
- W.T. Pfeffer, J.T. Harper, and S. O'Neel, "Kinematic Constraints on Glacier Contributions to 21st-Century Sea-Level Rise", Science, vol. 321, pp. 1340-1343, 2008. http://dx.doi.org/10.1126/science.1159099
- A. Robinson, R. Calov, and A. Ganopolski, "Greenland ice sheet model parameters constrained using simulations of the Eemian Interglacial", Climate of the Past, vol. 7, pp. 381-396, 2011. http://dx.doi.org/10.5194/cp-7-381-2011
- W.J. van de Berg, M. van den Broeke, J. Ettema, E. van Meijgaard, and F. Kaspar, "Significant contribution of insolation to Eemian melting of the Greenland ice sheet", Nature Geoscience, vol. 4, pp. 679-683, 2011. http://dx.doi.org/10.1038/ngeo1245