I am always interested in non-traditional data sets that can shed some light on climate change. Ones that I’ve discussed previously are the frequency of closings of the Thames Barrier and the number of vineyards in England. With the exceptional warmth in Alaska last month (which of course was coupled with colder temperatures elsewhere), I was reminded of another one: the Nenana Ice Classic.
For those who don’t know what the ‘Classic’ is, it is a lottery competition that has been running since 1917 to guess the date on which the river ice at Nenana breaks up in the spring. Nenana sits on the Tanana River outside Fairbanks, Alaska, and the river can be relied on to freeze over every year. The locals put up a tripod on the ice, and when the ice breaks up in the spring, the tripod gets swept away. The closest guess to the exact time this happens wins the lottery, which can have a quite substantial pot.
Due to the cold spring in Alaska last year, the ice break-up date was the latest since 1917, consistent with the spring temperature anomaly state-wide being one of the coldest on record (unsurprisingly, the Nenana break-up date is quite closely correlated with spring Alaskan temperatures). This year is shaping up to be quite warm, though current temperatures in Nenana (as of March 7) are still quite brisk!
Since there is now an almost century-long record of these break up dates, it makes sense to look at them as potential indicators of climate change (and interannual variability). The paper by Sagarin and Micheli (2001) was perhaps the first such study, and it has been alluded to many times since (for instance, in the Wall Street Journal and Physics Today in 2008).
The figure below shows the break-up date in terms of days after a nominal March 21 or, more precisely, the time from the vernal equinox (the small correction is so that the data don’t get confused by non-climatic calendar issues). The long-term linear trend (which is negative, with a slope of roughly 6 days per century) indicates that on average the break-up dates have been coming earlier in the season. This is clear despite a lot of year-to-year variability:
Figure: Break-up dates at Nenana in Julian days (either from a nominal March 21 (JD-80), or specifically tied to the Vernal Equinox). The linear trend in the VE-corrected data is ~6 days/century (1917-2013, ±4 days/century, 95% range).
In the 2008 WSJ article Martin Jeffries, a local geophysicist, said:
The Nenana Ice Classic is a pretty good proxy for climate change in the 20th century.
And indeed it is. The break-up dates are strongly correlated with regional spring temperatures, which have warmed over the century, tracking the Nenana trend. But as with the cool January 2014 weather in parts of the lower 48, or the warm weather in Europe and Alaska, the expected very large variability in winter weather can be relied on to produce outliers on a regular basis.
Given that year-to-year variability, it is predictable that whenever the annual result is above trend, it often gets cherry-picked to suggest that climate change is not happening (despite the persistent long-term trend). There are therefore no prizes for guessing which years’ results got a lot of attention from the ‘climate dismissives’*. This is the same phenomenon that happens every winter whenever there is some cold weather or snow somewhere. Indeed, it is so predictable** that it even gets its own xkcd cartoon:
(Climate data sourced from Climate Central).
For fun, I calculated some of the odds (Monte-Carlo simulations using observed mean, a distribution of trends based on the linear fit and the standard deviation of the residuals). This suggests that a date as late as May 20 (as in 2013) is very unexpected even without any climate trends (<0.7%) and even more so with (<0.2%), but that the odds of a date before April 29 have more than doubled (from 10% to 22%) with the trend. The most favored date is May 3rd (with no trend it would have been May 6th), but the odds of the break-up happening in that single 24 hour period are only around 1 in 14.
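For those who want to play along, here is a minimal sketch (in Python) of the kind of Monte Carlo estimate described above. The break-up series here is synthetic stand-in data, so the printed odds won’t match the numbers quoted; the actual odds come from the real Ice Classic record.

```python
import numpy as np

rng = np.random.default_rng(42)

# Break-up dates as days from the vernal equinox, 1917-2013.
# Synthetic stand-in values; the real series is the Ice Classic record.
years = np.arange(1917, 2014)
dates = 40.0 - 0.06 * (years - years[0]) + rng.normal(0, 6, years.size)

# Linear fit (with covariance, so trend uncertainty can be sampled)
# and the standard deviation of the residuals.
coef, cov = np.polyfit(years, dates, 1, cov=True)
sigma = (dates - np.polyval(coef, years)).std(ddof=2)

# Monte Carlo for the 2014 break-up date: sample the fit parameters
# from their joint uncertainty, then add year-to-year noise.
n = 200_000
params = rng.multivariate_normal(coef, cov, n)
with_trend = params[:, 0] * 2014 + params[:, 1] + rng.normal(0, sigma, n)
no_trend = dates.mean() + rng.normal(0, sigma, n)   # no-trend comparison

# Day 39 after March 21 is April 29; day 60 is May 20.
print("P(before Apr 29) with trend:", (with_trend < 39).mean())
print("P(before Apr 29) no trend:  ", (no_trend < 39).mean())
print("P(May 20 or later) with trend:", (with_trend >= 60).mean())
```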
So, the Nenana Ice Classic – unlike the other two examples I mentioned in the opening paragraph – does appear to be a useful climate metric. That isn’t to say that every year is going to follow the long-term trend (clearly it doesn’t), but you’d probably want to factor that in to (ever so slightly) improve your odds of winning.
- R. Sagarin, and F. Micheli, "Climate Change in Nontraditional Data Sets", Science, vol. 294, p. 811, 2001. http://dx.doi.org/10.1126/science.1064218
- J.A. Francis, and S.J. Vavrus, "Evidence linking Arctic amplification to extreme weather in mid-latitudes", Geophysical Research Letters, vol. 39, 2012. http://dx.doi.org/10.1029/2012GL051000
- E.A. Barnes, "Revisiting the evidence linking Arctic amplification to extreme weather in midlatitudes", Geophysical Research Letters, 2013. http://dx.doi.org/10.1002/grl.50880
Guest commentary from Zeke Hausfather and Robert Rohde
Daily temperature data is an important tool to help measure changes in extremes like heat waves and cold spells. To date, only raw quality-controlled (but not homogenized) daily temperature data has been available through GHCN-Daily and similar sources. Using this data is problematic when looking at long-term trends, as localized events like station moves, time-of-observation changes, and instrument changes can introduce significant biases.
For example, if you were studying the history of extreme heat in Chicago, you would find a slew of days in the late 1930s and early 1940s where the station currently at the Chicago O’Hare airport reported daily max temperatures above 45 degrees C (113 F). It turns out that, prior to the airport’s construction, the station now associated with the airport was on the top of a black-roofed building closer to the city. This is a common occurrence for stations in the U.S., where many stations were moved from city cores to newly constructed airports or wastewater treatment plants in the 1940s. Using the raw data without correcting for these sorts of biases would not be particularly helpful in understanding changes in extremes.
Berkeley Earth has newly released a homogenized daily temperature field, built as a refinement upon our monthly temperature field using similar techniques. In constructing the monthly temperature field, we identified inhomogeneities in station time series caused by biasing events such as station moves and instrument changes, and measured their impact. The daily analysis begins by applying the same set of inhomogeneity breakpoints to the corresponding daily station time series.
Each daily time series is transformed into a series of temperature anomalies by subtracting from each daily observation the corresponding monthly average at the same station. These daily anomaly series are then combined using similar mathematics to our monthly process (e.g. Kriging), but with an empirically determined correlation vs. distance function that falls off more rapidly and accounts for the more localized nature of daily weather fluctuations. For each day, the resulting daily temperature anomaly field is then added to the corresponding monthly temperature field to create an overall daily temperature field. The resulting daily temperature field captures the large day-to-day fluctuations in weather while also preserving the long-term characteristics already present in the monthly field.
As there are substantially more monthly records than daily records in the early periods (e.g. 1880-1950), treating the daily data as a refinement to the monthly data allows us to get maximum utility from the monthly data when determining long-term trends. Additionally, performing the Kriging step on daily anomaly fields simplifies the computation in a way that makes it much more computationally tractable.
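To make the construction concrete, here is a highly simplified sketch (in Python) of the anomaly-plus-Kriging step described above. This is not Berkeley Earth’s actual code: the exponential correlation function and its length scale are placeholders for the empirically determined one, and the data are toy values.

```python
import numpy as np

def krige_anomaly(station_xy, anoms, target_xy, length_scale=500.0):
    """Interpolate daily station anomalies to target points by simple
    kriging with an exponential correlation-vs-distance function.
    length_scale (km) is a placeholder; the daily analysis uses an
    empirically fitted function that falls off faster than the monthly one."""
    def corr(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return np.exp(-d / length_scale)
    K = corr(station_xy, station_xy) + 1e-6 * np.eye(len(anoms))
    return corr(target_xy, station_xy) @ np.linalg.solve(K, anoms)

# Toy example: daily anomaly = daily observation minus that station's
# monthly average, kriged to a grid point, then added back onto the
# monthly temperature field at that grid point.
stations = np.array([[0.0, 0.0], [300.0, 100.0], [100.0, 400.0]])  # km
daily_obs = np.array([5.1, 4.2, 6.0])
monthly_avg = np.array([4.5, 4.0, 5.5])
grid_point = np.array([[150.0, 150.0]])

anom = krige_anomaly(stations, daily_obs - monthly_avg, grid_point)
monthly_field = 4.7   # value from the separate monthly analysis
print("daily temperature estimate:", monthly_field + anom[0])
```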
You can find the homogenized gridded 1-degree latitude by 1-degree longitude daily temperature data here in NetCDF format (though note that a single daily series like TMin is ~4.2 GB). We will be releasing individual homogenized station series in the near future. Additional videos of daily absolute temperatures are also available.
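For anyone wanting to poke at the gridded files, something like the following xarray snippet should work; note that the file and variable names here are guesses for illustration – check the actual NetCDF headers (e.g. with ncdump -h) first.

```python
import xarray as xr

# Hypothetical file and variable names -- inspect the downloaded file first.
ds = xr.open_dataset("Complete_TMIN_Daily_LatLong1.nc")
print(ds)                              # list the variables actually present
tmin = ds["temperature"]               # daily field (assumed variable name)
# Nearest grid box to Chicago O'Hare (~42.0N, 87.9W):
chicago = tmin.sel(latitude=42.0, longitude=-87.9, method="nearest")
print(chicago.max().values)            # warmest daily value at that box
```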
Climate Data Visualization with Google Maps Engine
Through a partnership with Google we’ve created a number of different interactive climate maps using their new Maps Engine. These include maps of temperature trends from different periods to present (1810, 1860, 1910, 1960, 1990), maps of record high and low temperatures, and other interesting climate aspects like average temperature, daily temperature range, seasonal temperature range, and temperature variability.
These maps utilize a number of neat features of Google’s Maps Engine. They dynamically update as you zoom in, changing both the contour lines shown and the points of interest. For example, in the 1960-present trend map shown above, you can click on a point to show the regional climate summary for each location. As you zoom in, you can see more clickable points appear. You can also enter a specific address in the search bar and see aspects of the climate at that location.
We also have a map of all ~40,000 stations in our database, with markers for each that show the homogenized record for that station when clicked. These highlight all the detected breakpoints, and show how the station record compares to the regional expectation once breakpoints have been corrected. You can also click on the image to get to a web page for each station that shows the raw data as well as various statistics about the station.
New Global Temperature Series
Berkeley Earth has a new global temperature series available. This was created by combining our land record with a Kriged ocean time series using data from HadSST3. The results fit quite well with other published series, and are closest to the new estimates by Cowtan and Way. This is somewhat unsurprising, as both of us use HadSST3 as our ocean series and use Kriging for spatial interpolation.
If we zoom into the period since 1979, both Berkeley and Cowtan and Way have somewhat higher trends than other series over the last decade. This is due mainly to the coverage over the Arctic (note that we use air temperature over ice instead of sea temperature under ice for areas of the world with seasonal sea ice coverage).
There has been a veritable deluge of new papers this month related to recent trends in surface temperature. There are analyses of the CMIP5 ensemble, new model runs, analyses of complementary observational data, and attempts at reconciliation, all the way to commentaries on how the topic has been covered in the media and on twitter. We will attempt to bring the highlights together here. As background, it is worth reading our previous discussions, along with pieces by Simon Donner and Tamino, to help put in context what is being discussed here.
The papers and commentaries address multiple aspects of recent trends: the climate drivers over recent decades, the internal variability of the system, new analyses and model-observation comparisons – much as we suggested would be the case in any discussions of model-observations mismatches last year. We will take each in turn:
Two papers (which I was an author on) are focussed mainly on examining the impact of updated forcings on the temperature: Santer et al (2014) ($) looks in detail at the impact of small volcanoes post-2000 on the vertical structure of temperature changes, while the commentary by Schmidt et al (2014) (OA, with registration) also updates the solar, aerosol and GHG forcing to estimate what the CMIP5 ensemble would have looked like if it had used this input data instead of the earlier estimates and initial forecasts (panel b in the figure).
The contribution of internal variability to decadal trends is the focus of commentaries by Lisa Goddard (OA) and Martin Visbeck (OA). They focus on recent trends in ocean heat uptake, the Pacific Decadal Oscillation and the potential for initialised decadal predictions (such as Keenlyside et al) to capture these variations. These relate to both the earlier Kosaka and Xie and England et al papers, but are mostly reviews.
[Update: See also the Clement and DiNezio perspective in Science.]
Following on from Kosaka and Xie, Fyfe and Gillett (2014) ($) show that the trends in the Eastern Pacific (1993-2012) are well outside the spread in the CMIP5 ensemble (see figure on right). This is consistent with England et al results, and yet it is unclear to what extent mis-specifications in the forcing might have affected it. For instance, if the updates to volcanic forcings from Vernier et al and used in the Santer and Schmidt papers are correct, trends from 1993 (at the maximum post-Pinatubo cooling) will be too large.
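The essence of this kind of model-observation comparison is easy to sketch: compute the 1993-2012 trend in each ensemble member and ask where the observed trend falls in that distribution. The arrays below are random placeholders, not the actual East Pacific data or CMIP5 output.

```python
import numpy as np

def trend_per_decade(years, series):
    """Least-squares linear trend in deg C per decade."""
    return 10 * np.polyfit(years, series, 1)[0]

years = np.arange(1993, 2013)
rng = np.random.default_rng(1)

# Placeholders: obs stands in for the observed East Pacific series,
# runs for an (n_members, n_years) array of the same diagnostic
# computed from each CMIP5 ensemble member.
obs = -0.01 * (years - 1993) + rng.normal(0, 0.1, years.size)
runs = 0.02 * (years - 1993) + rng.normal(0, 0.1, (100, years.size))

obs_trend = trend_per_decade(years, obs)
ens_trends = np.array([trend_per_decade(years, r) for r in runs])
pct = (ens_trends < obs_trend).mean()
print(f"observed trend {obs_trend:+.3f} C/decade sits at the "
      f"{100*pct:.1f}th percentile of the ensemble")
```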
As the Nature Geoscience editorial emphasizes, there is more to climate change than the global mean temperature anomaly, and the paper by Seneviratne et al ($) shows that trends in extreme temperatures over land have continued apace throughout this time period.
Media, outreach and editorial response
Finally, there are some commentaries that look at the impact these questions have had on the wider public discussion: Hawkins et al (OA, including Twitter stalwarts Tamsin Edwards and Doug McNeall) discuss the opportunities that interest in recent trends gives scientists to discuss science on social media (also blogged about here), while Max Boykoff (OA) focuses on the framing of the issue in traditional media. Both Nature Clim. Chg. (OA) and Nature Geoscience (OA) have interesting editorials.
Overall, this is a great set of overviews of the issues – observations, comparisons, and modelling – and leads to some very specific directions for future research. There is unlikely to be any pause in that.
- B.D. Santer, C. Bonfils, J.F. Painter, M.D. Zelinka, C. Mears, S. Solomon, G.A. Schmidt, J.C. Fyfe, J.N.S. Cole, L. Nazarenko, K.E. Taylor, and F.J. Wentz, "Volcanic contribution to decadal changes in tropospheric temperature", Nature Geosci, vol. 7, pp. 185-189, 2014. http://dx.doi.org/10.1038/ngeo2098
- G.A. Schmidt, D.T. Shindell, and K. Tsigaridis, "Reconciling warming trends", Nature Geosci, vol. 7, pp. 158-160, 2014. http://dx.doi.org/10.1038/ngeo2105
- L. Goddard, "Heat hide and seek", Nature Climate Change, vol. 4, pp. 158-161, 2014. http://dx.doi.org/10.1038/nclimate2155
- M. Visbeck, "Bumpy path to a warmer world", Nature Geosci, vol. 7, pp. 160-161, 2014. http://dx.doi.org/10.1038/ngeo2104
- A. Clement, and P. DiNezio, "The Tropical Pacific Ocean--Back in the Driver's Seat?", Science, vol. 343, pp. 976-978, 2014. http://dx.doi.org/10.1126/science.1248115
- J.C. Fyfe, and N.P. Gillett, "Recent observed and simulated warming", Nature Climate Change, vol. 4, pp. 150-151, 2014. http://dx.doi.org/10.1038/nclimate2111
- "Hiatus in context", Nature Geosci, vol. 7, p. 157, 2014. http://dx.doi.org/10.1038/ngeo2116
- S.I. Seneviratne, M.G. Donat, B. Mueller, and L.V. Alexander, "No pause in the increase of hot temperature extremes", Nature Climate Change, vol. 4, pp. 161-163, 2014. http://dx.doi.org/10.1038/nclimate2145
- E. Hawkins, T. Edwards, and D. McNeall, "Pause for thought", Nature Climate Change, vol. 4, pp. 154-156, 2014. http://dx.doi.org/10.1038/nclimate2150
- M.T. Boykoff, "Media discourse on the climate slowdown", Nature Climate Change, vol. 4, pp. 156-158, 2014. http://dx.doi.org/10.1038/nclimate2156
- "Scientist communicators", Nature Climate Change, vol. 4, p. 149, 2014. http://dx.doi.org/10.1038/nclimate2167
This month’s open thread.
A new paper in Nature Climate Change out this week by England and others joins a number of other recent papers seeking to understand the climate dynamics that have led to the so-called “slowdown” in global warming. As we and others have pointed out previously (e.g. here), the fact that global average temperatures can deviate for a decade or longer from the long term trend comes as no surprise. Moreover, it’s not even clear that the deviation has been as large as is commonly assumed (as discussed e.g. in the Cowtan and Way study earlier this year), and has little statistical significance in any case. Nevertheless, it’s still interesting, and there is much to be learned about the climate system from studying the details.
Several studies have shown that much of the excess heating of the planet due to the radiative imbalance from ever-increasing greenhouse gases has gone into the ocean, rather than the atmosphere (see e.g. Foster and Rahmstorf and Balmaseda et al.). In their new paper, England et al. show that this increased ocean heat uptake — which has occurred mostly in the tropical Pacific — is associated with an anomalous strengthening of the trade winds. Stronger trade winds push warm surface water towards the west, and bring cold deeper waters to the surface to replace them. This raises the thermocline (the boundary between warm surface water and cold deep water), and increases the amount of heat stored in the upper few hundred meters of the ocean. Indeed, this is what happens every time there is a major La Niña event, which is why it is globally cooler during La Niña years. One could think of the last ~15 years or so as a long-term “La-Niña-like” anomaly (punctuated, of course, by actual El Niño events (like the exceptionally warm years 1998 and 2005) and La Niña events (like the relatively cool 2011)).
A very consistent understanding is thus emerging of the coupled ocean and atmosphere dynamics that have caused the recent decadal-scale departure from the longer-term global warming trend. That understanding suggests that the “slowdown” in warming is unlikely to continue, as England explains in his guest post, below. –Eric Steig
Guest commentary by Matthew England (UNSW)
For a long time now climatologists have been tracking the global average air temperature as a measure of planetary climate variability and trends, even though this metric reflects just a tiny fraction of Earth’s net energy or heat content. But it’s used widely because it’s the metric that enjoys the densest array of in situ observations. The problem of course is that this quantity has so many bumps and kinks, pauses and accelerations that predicting its year-to-year path is a big challenge. Over the last century, no single forcing agent stands out more clearly than anthropogenic greenhouse gases, yet zooming in to years or decades, modes of variability become the signal, not the noise. Despite these basics of climate physics, any slowdown in the overall temperature trend sees lobby groups falsely claim that global warming is over. Never mind that the globe – our planet – spans the oceans, atmosphere, land and ice systems in their entirety.
This was one of the motivations for our study out this week in Nature Climate Change (England et al., 2014). With the global-average surface air temperature (SAT) more-or-less steady since 2001, scientists have been seeking to explain the climate mechanics of the slowdown in warming seen in the observations during 2001-2013. One simple way to address this is to examine what is different about the recent decade compared to the preceding decade, when the global-mean SAT metric accelerated. This can be quantified via decade-mean differences, or via multi-decadal trends, which are roughly equivalent if the trends are more-or-less linear, or if the focus is on the low-frequency changes.
A first look at multi-decadal trends over the past two decades (see below) shows a dramatic signature in the Pacific Ocean, with sea surface cooling over the east and central Pacific and warming in the west, extending into the subtropics. Sea-level records also reveal a massive trend across the Pacific, with the east declining and the west rising well above the global average. Basic physical oceanography immediately suggests a trade wind trend as the cause, as this helps pile warm water up in the west at the expense of the east. And sure enough, that is exactly what had occurred with the Pacific wind field.
A consistent picture has now emerged to explain the slowdown in global average SAT since 2001 compared to the rapid warming of the 1980s and 1990s: this includes the link between hiatus decades and the Interdecadal Pacific Oscillation, the enhanced ocean heat uptake in the Pacific (see previous posts) and the role of East Pacific cooling. All of these factors are consistent with a picture of strengthened trade winds, enhanced heat uptake in the western Pacific thermocline, and cooling in the east – as you can see in this schematic:
As our study set out to reconcile the emerging divide between observations and the multi-model mean across CMIP5 and CMIP3 simulations, we took a slightly different approach, although there are obvious parallels to Kosaka and Xie’s study assessing the impact of a cooler East Pacific. In particular, we incorporated the recent 20-year trend in trade winds into both an ocean and a climate model, to quantify its impact. It turns out that with this single perturbation, much of the ‘hiatus’ can be simulated. The slowdown in warming occurs as a combined result of both increased heat uptake in the Western Pacific Ocean and increased cooling of the east and central Pacific (the latter leads to atmospheric teleconnections of reduced warming in other locations). We find that the heat content change within the ocean accounts for about half of the slowdown, while the remaining half comes from the atmospheric teleconnections from the east Pacific.
Unfortunately, however, the hiatus looks likely to be temporary, with projections suggesting that when the trade winds return to normal strength, warming is set to be rapid (see below). This is because the recent accelerated heat uptake in the Pacific Ocean is by no means permanent; this is consistent with the shallow depths at which the excess heat can now be found, at the 100-300m layer just below the surface mixed layer that interacts with the atmosphere. [Ed: though see also Mike's commentary on this aspect of the paper]
Even if the excess heat flux into the ocean were longer-term, burying the heat deep in the ocean would not come without consequences; ocean thermal expansion translates this directly into sea-level rise, with Western Pacific island nations already acutely aware of this from the recent trends.
Our study addresses some important topics but also raises several new questions. For example, we find that climate models do not appear to capture the observed scale of multi-decadal variability in the Pacific: none reproduce the magnitude of the observed Pacific trade wind acceleration – the best the models can do is around half this magnitude. This raises the question of why this is the case: given the positive ocean-atmosphere feedbacks operating to drive these strengthened trade winds, the answer could lie in the ocean, the atmosphere, or both.
The study also discusses the unprecedented nature of the wind trends, and suggests that only around half of the trend can be explained by the IPO. So where does the other half come from? The Indian Ocean is one possibility, given its recent rapid warming; but models capture this in greenhouse gas forced projections. What else might be accelerating the winds in the Pacific beyond what you’d expect to see from the underlying SST fields alone?
The study also points to the length of the wind trend as being crucial to the hiatus, arguing that anything much shorter, like a decadal wind trend, would not have resulted in nearly as much heat uptake by the ocean. This is related to the time-scale for ocean adjustment to wind forcing in the subtropics: in short, it takes time to spin up the ocean circulation response, and then more time to see this circulation inject a significant amount of heat into the ocean thermocline. Given the ocean inertia to change, what happens when the trade winds next weaken back to average values? Does the subducted heat get mixed away before it can resurface, or does the heat find a way to return to the surface when the winds reverse? Our initial work suggests the latter: when we forced the wind anomalies to abate, warming out of the hiatus was rapid, eventually recovering the warming that had paused during the hiatus. So this suggests that whenever the current wind trends reverse, warming will resume as projected, and in time the present “pause” will be long forgotten by the climate system. [Ed: see again Mike's piece for a discussion of an alternative hypothesis--namely, the possibility that a La Niña-like state is part of the response to anthropogenic forcing itself].
Of course, other factors could also have contributed to part of the recent slowdown in the globally averaged air temperature metric: increased aerosols, a solar minimum, and problems with missing data in the Arctic. Summing up all of the documented contributions to the hiatus, spanning ocean heat uptake, reduced radiation reaching Earth’s surface, and data gaps, climate scientists have probably accounted for the hiatus twice over. Of course each effect is not linearly additive, but even so, many experts are now asking why the past decade hasn’t been one of considerable cooling in global mean air temperatures. Or, put another way, why isn’t the model-observation gap even wider? One way to explain this is that the current generation of climate models may be too low in their climate sensitivity – an argument made recently by Sherwood et al in relation to unresolved cloud physics. That would have been a completely unexpected conclusion when analysts first noticed the model-observation divergence emerging over the past decade.
- G. Foster, and S. Rahmstorf, "Global temperature evolution 1979–2010", Environmental Research Letters, vol. 6, pp. 044022, 2011. http://dx.doi.org/10.1088/1748-9326/6/4/044022
- M.A. Balmaseda, K.E. Trenberth, and E. Källén, "Distinctive climate signals in reanalysis of global ocean heat content", Geophysical Research Letters, vol. 40, pp. 1754-1759, 2013. http://dx.doi.org/10.1002/grl.50382
- M.H. England, S. McGregor, P. Spence, G.A. Meehl, A. Timmermann, W. Cai, A.S. Gupta, M.J. McPhaden, A. Purich, and A. Santoso, "Recent intensification of wind-driven circulation in the Pacific and the ongoing warming hiatus", Nature Climate Change, 2014. http://dx.doi.org/10.1038/nclimate2106
- Y. Kosaka, and S. Xie, "Recent global-warming hiatus tied to equatorial Pacific surface cooling", Nature, vol. 501, pp. 403-407, 2013. http://dx.doi.org/10.1038/nature12534
A little late starting this month’s open thread – must be the weather…
Guest commentary by Tim Osborn and Phil Jones
The Climatic Research Unit (CRU) land surface air temperature data set, CRUTEM4, can now be explored using Google Earth. Access is via this portal together with instructions for using it (though it is quite intuitive).
We have published a short paper in Earth System Science Data (Osborn and Jones, 2014) to describe this new approach.
This is part of ongoing efforts to make our climate data as accessible and transparent as possible. The CRUTEM4 dataset is already freely available via the CRU and UK Met Office, including the full underlying database of weather station data. But accessing it through Google Earth will enhance:
- Traceability of how we construct the dataset, by showing the weather station data used to make each grid box temperature anomaly.
- Accessibility for teaching and research, by extracting grid box and weather station data without the need for programming.
- Identifying errors. With ~6000 station records, collected and collated by third parties, there are bound to be errors. If they are identified, they can be corrected. Note that the global temperature record is not greatly affected by changes to the input data (e.g. Figure 5 of Jones et al., 2012).
The view when the KML file is first opened, with red/green shading to show grid boxes with CRUTEM4 data:
Navigate to a region of interest and click a shaded box to see the grid-box annual temperature anomaly:
Choose from links to view a larger annual image, a seasonal image, the grid-box data values (in CSV format for import into a spreadsheet) or choose “stations” to see the weather stations used:
Click a weather station pin to view the station annual temperature series, with links to larger annual and seasonal images, and to the station data values (again in CSV format for easy import into a spreadsheet):
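Since the data come out as plain CSV, a couple of lines of Python are enough to pull a downloaded series into a script for teaching or quick checks. The filename and column names below are illustrative assumptions; adjust them to match the actual file.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical filename and column names -- check the downloaded CSV.
df = pd.read_csv("crutem4_gridbox_52.5N_2.5E.csv", comment="#")
df.plot(x="year", y="annual_anomaly", legend=False)
plt.ylabel("Temperature anomaly (°C)")
plt.title("CRUTEM4 grid-box annual anomaly")
plt.show()
```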
We encourage you to try it for yourself – and to also read the open-access paper (Osborn and Jones, 2014) which describes the construction of CRUTEM4 in detail.
- T.J. Osborn, and P.D. Jones, "The CRUTEM4 land-surface air temperature data set: construction, previous versions and dissemination via Google Earth", Earth System Science Data, vol. 6, pp. 61-68, 2014. http://dx.doi.org/10.5194/essd-6-61-2014
- P.D. Jones, D.H. Lister, T.J. Osborn, C. Harpham, M. Salmon, and C.P. Morice, "Hemispheric and large-scale land-surface air temperature variations: An extensive revision and an update to 2010", Journal of Geophysical Research, vol. 117, 2012. http://dx.doi.org/10.1029/2011JD017139
Along with David’s online class, a number of new climate science Massive Open Online Courses (MOOCs) are now coming online.
A new online course from MIT, “Global Warming Science”, introduces the basic science underpinning our knowledge of the climate system, how climate has changed in the past, and how it may change in the future. The course focuses on the fundamental energy balance in the climate system, between incoming solar radiation and outgoing infrared radiation, and how this balance is affected by greenhouse gases. They also discuss physical processes that shape the climate, such as atmospheric and oceanic convection and large-scale circulation, solar variability, orbital mechanics, and aerosols, as well as the evidence for past and present climate change. Climate models of varying degrees of complexity are available for students to run – including a model of a single column of the Earth’s atmosphere, which includes many of the important elements of simulating climate change. Together, this range of topics forms the scientific basis for our understanding of anthropogenic (human-influenced) climate change.
The introduction video gives a flavour of the course, which is presented by Kerry Emanuel, Dan Cziczo and David McGee:
The course is geared toward students with some mathematical and scientific background, but does not require any prior knowledge of climate or atmospheric science. Classes begin on February 19th and run for 12 weeks. Students may simply audit the course, or complete problems sets and a final exam to receive a certificate of completion. The course is free, and one can register for it here.
There are other climate science courses available too:
- David’s course Global Warming: The Science of Climate Change is starting again March 31.
- A course Turn Down the Heat: Why a 4ºC Warmer World Must be Avoided from the World Bank (presented by Kanta Kumari Rigaud and Pablo Benitez, and including input from Stefan). This started Jan 24.
- Richard Alley has a new 8-week course Energy, the Environment, and Our Future which started on Jan 6. (More background here).
- Update: Climate change: challenges and solutions from Tim Lenton, U. Exeter
- Update: Climate Change in Four Dimensions from Charles Kennel, Naomi Oreskes, Veerabhadran Ramanathan, Richard Somerville and David G. Victor, UCSD.
The global temperature data for 2013 are now published. 2010 and 2005 remain the warmest years since records began in the 19th Century. 1998 ranks third in two records, and in the analysis of Cowtan & Way, which interpolates the data-poor region in the Arctic with a better method, 2013 is warmer than 1998 (even though 1998 was a record El Nino year, and 2013 was neutral).
The end of January, when the temperature measurements of the previous year are in, is always the time to take a look at the global temperature trend. (And, as the Guardian noted aptly, also the time where the “climate science denialists feverishly yell [...] that global warming stopped in 1998.”) Here is the ranking of the warmest years in the four available data sets of the global near-surface temperatures (1):
New this year: for the first time there is a careful analysis of geographical data gaps – especially in the Arctic there’s a gaping hole – and their interpolation for the HadCRUT4 data. Thus there are now two surface temperature data sets with global coverage (the GISTEMP data from NASA have always filled gaps by interpolation). In these two data series 2007 is ranked 3rd. Their direct comparison looks like this:
One can clearly see the extreme year 1998, which (thanks to the record El Niño) stands out above the long-term trend like no other year. But even taking this outlier year as the starting point, the linear trend 1998-2013 in all four data sets is positive. Also clearly visible is 2010 as the warmest year since records began, and the minima in the years 2008 and 2011/2012. But just as the peaks are getting higher, the minima are getting less deep.
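The trend calculation itself is elementary – an ordinary least-squares fit to the annual values. A minimal sketch, with illustrative (not official) anomaly values:

```python
import numpy as np

def decadal_trend(years, temps):
    """Least-squares warming trend in deg C per decade."""
    return 10 * np.polyfit(years, temps, 1)[0]

# Illustrative annual anomalies for 1998-2013; substitute the published
# values from HadCRUT4, GISTEMP, NOAA or Cowtan & Way.
years = np.arange(1998, 2014)
temps = np.array([0.53, 0.31, 0.29, 0.44, 0.50, 0.51, 0.45, 0.54,
                  0.50, 0.49, 0.42, 0.50, 0.56, 0.46, 0.45, 0.51])
print(f"trend 1998-2013: {decadal_trend(years, temps):+.3f} C/decade")
```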
In these data curves I cannot see a particularly striking or significant current “warming pause”, even though the warming trend from 1998 is of course less than the long-term trend. Even in Nature, there was recently a (journalistic) contribution that in its introduction strongly overstated this alleged “hiatus”. It makes a good story that perhaps some cannot resist. (“Warming trend is somewhat reduced, but within the usual range of variation” simply does not make a good headline.)
The role of El Niño and La Niña
The recent slower warming is mainly explained by the fact that in recent years the La Niña state in the tropical Pacific prevailed, in which the eastern Pacific is cold and the ocean stores more heat (2). This is due to an increase in the trade winds that push water westward across the tropical Pacific, while in the east cold water from the depths comes to the surface (see last graph here). In addition, radiative forcing has recently increased more slowly (more on this in the analysis of Hansen et al. – definitely worth a read).
NASA shows the following graphic, where you can see that the warmer years tend to be those with an El Niño in the tropical Pacific (red years), while the particularly cool years are those with La Niña (blue years).
Figure 2 The GISS data, with El Niño and La Niña conditions highlighted. Neutral years like 2013 are gray. Source: NASA.
Quality of the interpolation
How good is the interpolation into regions not regularly covered by weather stations? In any case better, of course, than simply ignoring the gaps, as the HadCRUT and NOAA data have done so far. The truly global average is important, since only it is directly related to the energy balance of our planet and thus to the radiative forcing by greenhouse gases. An average over just part of the globe is not. This matters all the more because the Arctic has been warming disproportionately in the last ten to fifteen years.
But how well the interpolation works is something we only know thanks to the important work of Cowtan and Way. These colleagues have gone to the trouble of carefully validating their method. Although there are no permanent weather stations in the Arctic, there is intermittent data from buoys and from weather model reanalyses with which they could test their method. For the last few decades, Cowtan & Way also make use of satellite data (more on this in our article on underestimated warming). I therefore assume that the data from Cowtan & Way is the methodologically best estimate of the global mean temperature that we currently have. This correction is naturally small (less than a tenth of a degree) and hardly changes the long-term trend of global warming – but if you look at shorter periods of time, it can make a noticeable difference. The comparison with the uncorrected HadCRUT4 data looks like this:
And here’s a look at the last years in detail:
Following this analysis, 2013 was thus even warmer than the record El Niño year 1998.
- In all four data series of the global near-surface air temperature, the linear trend even from the extreme El Niño year 1998 is positive, i.e. shows continued warming, despite the choice of a warm outlier as the initial year.
- In all four data series of the global near-surface air temperature, 2010 was the warmest year on record, followed by 2005.
- The year 1998 is, at best, rank 3 – in the currently best data set of Cowtan & Way, 1998 is actually only ranked 7th. Even 2013 is – without El Niño – warmer there than 1998.
The German news site Spiegel Online presents these facts under the headline Warming of the air paused for 16 years (my translation). The headline of the NASA news release, NASA Finds 2013 Sustained Long-Term Climate Warming Trend, is thus completely turned on its head.
This will not surprise anyone who has followed climate reporting of Der Spiegel in recent years. To the contrary – colleagues express their surprise publicly when a sensible article on the subject appears there. For years, Der Spiegel has acted as a gateway for dubious “climate skeptics” claims into the German media whilst trying to discredit top climate scientists (we’ve covered at least one example here).
Do Spiegel readers know more (as their advertising goes) – more than NASA, NOAA, Hadley Centre and the World Meteorological Organization WMO together? Or are they simply being taken for a ride for political reasons?
(1) In addition to the data of the near-surface temperatures, which are composed of measurements from weather stations and sea surface temperatures, there is also the microwave data from satellites, which can be used to estimate air temperatures in the troposphere at a few kilometers altitude. In the long-term climate trend since the beginning of satellite measurements in 1979, the tropospheric temperatures show a similar warming to the surface temperatures, but the short-term fluctuations in the troposphere are significantly different from those near the surface. For example, the El Niño peak in 1998 is about twice as high in the troposphere as in the surface data; see Foster and Rahmstorf 2011. In their trend from 1998, the two satellite series contradict each other: UAH shows +0.05 °C per decade (a bit more than HadCRUT4), RSS shows -0.05 °C per decade.
(2) Another graphic to illustrate the change between El Niño and La Niña: the Oceanic Niño Index ONI, the standard index of NOAA to describe the seesaw in the tropical Pacific.
Figure 5 The ONI index. The arrows added by me point to some globally warm or cool years (compare Figures 1 and 4). Source: NOAA.
Kevin Cowtan has a neat online trend calculator for all current global temperature data series.
Gavin provided a thoughtful commentary about the role of scientists as advocates in his RealClimate piece a few weeks ago.
I have weighed in with my own views on the matter in my op-ed in this Sunday’s New York Times. And, as with Gavin, my own views have been greatly influenced and shaped by our sadly departed friend and colleague, Stephen Schneider. Those who were familiar with Steve will recognize his spirit and legacy in my commentary. A few excerpts are provided below:
THE overwhelming consensus among climate scientists is that human-caused climate change is happening. Yet a fringe minority of our populace clings to an irrational rejection of well-established science. This virulent strain of anti-science infects the halls of Congress, the pages of leading newspapers and what we see on TV, leading to the appearance of a debate where none should exist.
My colleague Stephen Schneider of Stanford University, who died in 2010, used to say that being a scientist-advocate is not an oxymoron. Just because we are scientists does not mean that we should check our citizenship at the door of a public meeting, he would explain. The New Republic once called him a “scientific pugilist” for advocating a forceful approach to global warming. But fighting for scientific truth and an informed debate is nothing to apologize for.
Our Department of Homeland Security has urged citizens to report anything dangerous they witness: “If you see something, say something.” We scientists are citizens, too, and, in climate change, we see a clear and present danger. The public is beginning to see the danger, too — Midwestern farmers struggling with drought, more damaging wildfires out West, and withering, record, summer heat across the country, while wondering about possible linkages between rapid Arctic warming and strange weather patterns, like the recent outbreak of Arctic air across much of the United States.
The piece ends on this note:
How will history judge us if we watch the threat unfold before our eyes, but fail to communicate the urgency of acting to avert potential disaster? How would I explain to the future children of my 8-year-old daughter that their grandfather saw the threat, but didn’t speak up in time?
Those are the stakes.
I would encourage interested readers to read the commentary in full at the New York Times website.
Constructive contributions are welcome in the comment section below :-)
Back in 2007 I wrote a post looking at the closures of the Thames Barrier since construction finished in 1983. Since then there have been another seven years of data*, and given that there was a spate of closures last week due to both river and tidal flooding, it seems a good time to revisit the topic.
The Barrier is in the Thames Estuary in the South-East UK and is raised whenever there is a risk of flooding in London proper. The Thames below Teddington weir is tidal with quite a wide range and so (the very real) risks are greatest at high tide. Risks are elevated for one of two reasons (and occasionally both) – either the river flow is elevated, and so a normal high tide would cause flooding (a ‘fluvial’ risk) or the tide itself is elevated (usually associated with a storm surge) (a ‘tidal’ risk). Both river floods and storm surges have a potential climate change component, and so the frequency of closure and the reasons why might be useful in assessing whether risks have changed in recent years. Countering that though, there have been non-climatic changes to how flooding risks are managed, improved hydrological modelling which gives better predictability, and local subsidence issues which will increase flooding risks but is not climatic in origin. Connecting flooding (or flood risks) to climate change is always complicated – as this recent Carbon Brief posting reminds us.
So how many closures have there been?
Red denotes the closures that are tidal in origin, grey is the total number of closures. Obviously tidal closures are dominant, but closures due to the river flooding in 2001, 2003 and this season are very notable. However, for the reasons outlined above and discussed previously, the raw number of closures – despite having increased over the years – is not a clean indicator of climate change. We can look a little deeper at the data though.
For each closure, there is a record of the high water level at Southend (at the mouth of the Estuary), the river flow at Teddington and how much of the high water level was due to a storm surge. These are plotted below for each of the closures. Note that closures come in clusters, all are during the winter months (Sep-Apr) and in cases where the river flow is high, closures can happen for multiple high tides in a row.
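If you want to reproduce the clustering and the per-season counts from the closure records yourself, the bookkeeping is straightforward. A sketch, with an assumed CSV layout (the Environment Agency records would need reshaping into something like this):

```python
import pandas as pd

# Assumed columns: closure_date, reason ('tidal' or 'fluvial') --
# adapt to however the closure records are actually laid out.
df = pd.read_csv("thames_barrier_closures.csv", parse_dates=["closure_date"])

# Closures all fall in the winter months (Sep-Apr), so count them per
# flood season, labelling each season by the year it starts in.
season = df["closure_date"].dt.year - (df["closure_date"].dt.month < 9)
counts = df.groupby([season, "reason"]).size().unstack(fill_value=0)
print(counts)   # tidal vs fluvial closures per season, as in the figure
```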
Looking at the changes in these causes is interesting. The highest storm surges were in 2007 and 1993 but there is no apparent trend. The river flow data are more enigmatic, with the peak flows post-2000 standing out, but since there is much better data on the history of the Thames flow, that data should be looked at instead to determine trends. There is also a summary description of the first 100 years of data at Teddington available.
The high water record is perhaps the most interesting, and there seems to be an upward trend in the peak levels. However, sea level rise at Southend up to 1983 was about 1.2 mm/year, and in the 30 years since then one would expect a further 3.5 to 5.0 cm, which is much less than what one would deduce from the peaks in the figure. So, again, a deeper look into the more direct data is probably warranted.
I therefore conclude, as I did in 2007, that:
Thames Barrier closings tell a complicated story which mix climate information with management issues and are probably too erratic to be particularly meaningful – if you want to say something about global sea level, then look at the integrated picture from satellites and tide gauges. But it is a good illustration of adaptive measures that are, and will increasingly be, needed to deal with ongoing climate change.
* Updated data thanks to the UK Environment Agency (via @AlanBarrierEA @EnvAgency) with a helping hand from Richard Betts.
by Michael E. Mann and Gavin Schmidt
This time last year we gave an overview of what different methods of assessing climate sensitivity were giving in the most recent analyses. We discussed the three general methods that can be used:
The first is to focus on a time in the past when the climate was different and in quasi-equilibrium, and estimate the relationship between the relevant forcings and temperature response (paleo-constraints). The second is to find a metric in the present day climate that we think is coupled to the sensitivity and for which we have some empirical data (climatological constraints). Finally, there are constraints based on changes in forcing and response over the recent past (transient constraints).
All three constraints need to be reconciled to get a robust idea what the sensitivity really is.
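As a concrete illustration of the third, ‘transient constraint’ approach, the standard energy-budget formula is ECS = F_2x * ΔT / (ΔF - ΔQ), where ΔT, ΔF and ΔQ are the changes in temperature, forcing and planetary heat uptake between a base and a recent period. A minimal sketch with round illustrative numbers (not the published estimates):

```python
# Energy-budget ('transient constraint') estimate of climate sensitivity.
# All numbers below are round illustrative values, not published results.
F_2x = 3.7   # W/m2, radiative forcing from a doubling of CO2
dT = 0.75    # K, observed temperature change between the two periods
dF = 2.0     # W/m2, change in total radiative forcing
dQ = 0.65    # W/m2, change in planetary heat uptake (mostly ocean)

ecs = F_2x * dT / (dF - dQ)
print(f"energy-budget ECS estimate: {ecs:.2f} K")   # ~2.1 K with these inputs
```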
A new paper using the second ‘climatological’ approach by Steve Sherwood and colleagues was just published in Nature and, like Fasullo and Trenberth (2012) (discussed here), suggests that models with an equilibrium climate sensitivity (ECS) of less than 3ºC do much worse at fitting the observations than other models.
Sherwood et al focus on a particular process associated with cloud cover which is the degree to which the lower troposphere mixes with the air above. Mixing is associated with reductions in low cloud cover (which give a net cooling effect via their reflectivity), and increases in mid- and high cloud cover (which have net warming effects because of the longwave absorption – like greenhouse gases). Basically, models that have more mixing on average show greater sensitivity to that mixing in warmer conditions, and so are associated with higher cloud feedbacks and larger climate sensitivity.
The CMIP5 ensemble spread of ECS is quite large, ranging from 2.1ºC (GISS E2-R – though see the note at the end) to 4.7ºC (MIROC-ESM), with a 90% spread of ±1.3ºC, and most of this spread is directly tied to variations in cloud feedbacks. These feedbacks are uncertain, in part, because they involve processes (cloud microphysics, boundary layer meteorology and convection) that occur on scales considerably smaller than the grid spacing of the climate models, and thus cannot be explicitly resolved. They must be parameterized instead, and different parameterizations can lead to large differences in how clouds respond to forcings.
Whether clouds end up being an aggravating (positive feedback) or mitigating (negative feedback) factor depends not just on whether there will be more or fewer clouds in a warming world, but on what types of clouds there will be. The net feedback potentially represents a relatively small difference between much larger positive and negative contributions that tend to cancel, and getting that right is a real challenge for climate models.
By looking at two reanalysis datasets (MERRA and ERA-Interim), Sherwood et al then try to assess which models have more realistic representations of the lower tropospheric mixing process, as indicated in the figure:
Figure (derived from Sherwood et al, fig. 5c) showing the relationship between the models’ estimate of Lower Tropospheric Mixing (LTMI) and sensitivity, along with estimates of the same metric from radiosondes and the MERRA and ERA-Interim reanalyses.
From that figure one can conclude that this process is indeed correlated to sensitivity, and that the observationally derived constraints suggest a sensitivity at the higher end of the model spectrum.
There was an interesting talk at AGU from Peter Caldwell (PCMDI) on how simply data mining for correlations between model diagnostics and climate sensitivity is likely to give you many false positives, just because of the number of possible options and the fact that individual models aren’t strictly independent. Sherwood et al get past that by focussing on physical processes that have an a priori connection to sensitivity, and are careful not to infer overly precise probabilistic statements about the real world. However, they do conclude that models with ECS lower than 3ºC do not match the observations as well. This is consistent with the Fasullo and Trenberth study linked to above, and also with work by Andrew Dessler that suggests that models with amplifying net cloud feedback appear most consistent with observations.
There are a number of technical points that should also be added. First, ECS is the long-term (multi-century) equilibrium response to a doubling of CO2 in a coupled ocean-atmosphere model. It doesn’t include many feedbacks associated with ‘slow’ processes (such as ice sheets, or vegetation, or the carbon cycle). See our earlier discussion for different definitions. Second, Sherwood et al are using a particular estimate of the ‘effective’ ECS in their analysis of the CMIP5 models. This estimate follows from the method used in Andrews et al (2011), but is subtly different from the ‘true’ ECS, since it uses a linear extrapolation from a relatively short period in the abrupt 4xCO2 experiments. In the case of the GISS models, the effective ECS is about 10% smaller than the ‘true’ value, but this distinction should not really affect their conclusions.
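For reference, the ‘effective’ ECS just mentioned is usually obtained with a Gregory-style regression: regress the top-of-atmosphere imbalance N against the warming ΔT in an abrupt 4xCO2 run, and read off the ΔT at which N would reach zero. A sketch with synthetic numbers standing in for model output:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for annual means from an abrupt 4xCO2 experiment:
# warming dT (K) and TOA net imbalance N (W/m2) over ~150 years.
dT = np.linspace(1.0, 4.5, 150)
N = 6.8 * (1 - dT / 5.6) + rng.normal(0, 0.3, dT.size)

# Linear extrapolation to N = 0 gives the equilibrium warming for
# 4xCO2; half of that is the 'effective' ECS for a doubling.
slope, intercept = np.polyfit(dT, N, 1)
dT_eq_4x = -intercept / slope
print(f"effective ECS ~ {dT_eq_4x / 2:.2f} K")   # ~2.8 K with these inputs
```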
Third, in much of the press coverage of this paper (e.g. Guardian, UPI), the headline was that temperatures would rise by ‘4ºC by 2100’. This unfortunately plays into the widespread confusion between an emergent model property (the ECS) and a projection into the future. These are connected, but the second depends strongly on the scenario of future forcings. Thus the temperature prediction for 2100 is contingent on following the RCP8.5 (business as usual) scenario.
Last year, the IPCC assessment dropped the lower bound on the expected range of climate sensitivity slightly, going from 2-4.5ºC in AR4 to 1.5-4.5ºC in AR5. One of us (Mike), mildly criticized this at the time.
Other estimates that have come in since AR5 (such as Schurer et al.) support ECS values similar to the CMIP5 mid-range, i.e. ~3ºC, and it has always been hard to reconcile a sensitivity of only 1.5ºC with the paleo-evidence (as we discussed years ago).
However, it remains true that we do not have a precise number for the ECS. Sherwood et al’s results give weight to higher values than some other recent estimates based on transient constraints (e.g. Otto et al. (2013)), but it should be kept in mind that there is a great asymmetry in risk between the high and low end estimates. Uncertainty cuts both ways and is not our friend. If the climate indeed turns out to have the higher-end climate sensitivity suggested here, the impacts of unmitigated climate change are likely to be considerably greater than suggested by current best estimates.
- S.C. Sherwood, S. Bony, and J. Dufresne, "Spread in model climate sensitivity traced to atmospheric convective mixing", Nature, vol. 505, pp. 37-42, 2014. http://dx.doi.org/10.1038/nature12829
- J.T. Fasullo, and K.E. Trenberth, "A Less Cloudy Future: The Role of Subtropical Subsidence in Climate Sensitivity", Science, vol. 338, pp. 792-794, 2012. http://dx.doi.org/10.1126/science.1227465
- A.E. Dessler, "A Determination of the Cloud Feedback from Climate Variations over the Past Decade", Science, vol. 330, pp. 1523-1527, 2010. http://dx.doi.org/10.1126/science.1192546
- A. Schurer, G. Hegerl, M.E. Mann, S.F.B. Tett, and S.J. Phipps, "Separating forced from chaotic climate variability over the past millennium", Journal of Climate, 2013. http://dx.doi.org/10.1175/JCLI-D-12-00826.1
- A. Otto, F.E.L. Otto, O. Boucher, J. Church, G. Hegerl, P.M. Forster, N.P. Gillett, J. Gregory, G.C. Johnson, R. Knutti, N. Lewis, U. Lohmann, J. Marotzke, G. Myhre, D. Shindell, B. Stevens, and M.R. Allen, "Energy budget constraints on climate response", Nature Geosci, vol. 6, pp. 415-416, 2013. http://dx.doi.org/10.1038/ngeo1836
First open thread of the new year. A time for ‘best of’s of climate science last year and previews for the year ahead, perhaps? We will have an assessment of the updates to annual indices and model/data comparisons later in the month.
We have often discussed issues related to science communication on this site, and the comment threads frequently return to the issue of advocacy, the role of scientists and the notion of responsibility. Some videos from the recent AGU meeting are starting to be uploaded to the AGU Youtube channel and, oddly, the first video of a talk is my Stephen Schneider lecture on what climate scientists should advocate for (though actually it is mostly about how science communicators should think about advocacy in general, since the principles are applicable regardless of the subject area):
There is a lot of overlap between my talk and those given by Stephen Schneider twenty and thirty years ago – in particular the video at the Aspen Global Change Institute on whether a scientist-advocate is an oxymoron, and the descriptions on his website. Though I also touch on newer discussions, such as those raised earlier this year by Tamsin Edwards in the Guardian and in subsequent twitter and blog conversations. Another relevant piece is the paper on bringing values and deliberation to science communication (Dietz, 2013).
What’s new today is that scientific communication (and scientists communicating) is no longer limited to a few top voices in the broadcast media, but now extends to a much wider (and perhaps younger) cohort of scientists communicating at many different levels – via blogs, twitter, facebook, reddit etc., as well as in the mainstream media. Issues that were merely academic to most scientists a few decades ago are very real to many more now. A greater appreciation of what other scientists have previously said about advocacy is perhaps needed.
I will likely write this lecture up more formally, but in the meantime I’ll be happy to discuss the points or the implications in the comment section. Note that I at one point mistakenly credit Aristotle with a quote that actually came from Elbert Hubbard (thus are laid bare the dangers of finishing a new talk late the previous evening…).
While difficult, let’s keep the discussion about advocacy in general, rather than for or against advocacy of specific policies.
- T. Dietz, "Bringing values and deliberation to science communication", Proceedings of the National Academy of Sciences, vol. 110, pp. 14081-14087, 2013. http://dx.doi.org/10.1073/pnas.1212740110
Since 1998 the global temperature has risen more slowly than before. Given the many explanations for cooler temperatures discussed in the media and scientific literature (La Niña, heat uptake of the oceans, Arctic data gap, etc.), one could jokingly ask why no new ice age has arrived yet. This fails to recognize, however, that the various ingredients are small and not simply additive. Here is a small overview and an attempt to explain how the different pieces of the puzzle fit together.
Figure 1 The global near-surface temperatures (annual values at the top, decadal means at the bottom) in the three standard data sets HadCRUT4 (black), NOAA (orange) and NASA GISS (light blue). Graph: IPCC 2013.
First an important point: the global temperature trend over only 15 years is neither robust nor predictive of longer-term climate trends. I’ve repeated this now for six years in various articles, as this is often misunderstood. The IPCC has again made this clear (Summary for Policy Makers p. 3):
Due to natural variability, trends based on short records are very sensitive to the beginning and end dates and do not in general reflect long-term climate trends.
You can see this for yourself by comparing the trend from mid-1997 with the trend from 1999: the latter is more than twice as large, 0.07 instead of 0.03 degrees per decade (HadCRUT4 data).
Likewise for data uncertainty: the trends of the HadCRUT and NASA data hardly differ in the long term, but they do over the last 15 years. And the small correction proposed recently by Cowtan & Way to compensate for the data gap in the Arctic hardly changes the HadCRUT4 long-term trend, but it changes the trend over the last 15 years by a factor of 2.5.
It is therefore a misunderstanding (deliberately promoted by some) to draw conclusions from such a short trend about future global warming, let alone climate policy. To illustrate this point, the following graph shows one simulation from the CMIP3 model ensemble:
Figure 2 Temperature evolution in a model simulation with the MRI model. Other models also show comparable “hiatuses” due to natural climate variability. This is one of the standard simulations carried out within the framework of CMIP3 for the IPCC 2007 report. Graph: Roger Jones.
In this model calculation there is a “warming pause” over the last 15 years, but in no way does this imply that the further global warming will be any smaller. The long-term warming and the short-term “pause” have nothing to do with each other, since they have very different causes. Incidentally, this example refutes the popular “climate skeptics” claim that climate models cannot explain such a “hiatus” – more on that later.
Now for the causes of the smaller trend of the last 15 years. Climate change can have two types of causes: external forcing or internal variability in the climate system.
External forcing: the sun, volcanoes & co.
The possible external drivers include, first, the dimming of sunlight by aerosol pollution of the atmosphere from volcanoes (Neely et al., 2013) or Chinese power plants (Kaufmann et al., 2011). Second, a reduction of the greenhouse effect of CFCs, because these gases have been largely banned under the Montreal Protocol (Estrada et al., 2013). And third, the transition from a solar maximum in the first half to a particularly deep and long solar minimum in the second half of the period – this is evidenced by measurements of solar activity, but can explain only part of the slowdown (about one third according to our correlation analysis).
It is likely that all these factors indeed contributed to a slowing of the warming, and they are also additive – according to the IPCC report (Section 9.4), about half of the slowdown can be explained by a slower increase in radiative forcing. A problem is that the data on the net radiative forcing are too imprecise to quantify its contribution more exactly. This in turn is due to the short period considered, in which the changes are so small that data uncertainties play a big role, unlike for long-term climate trends.
The latest data and findings on climate forcings are not included in the climate model runs, because of the long lead time needed to plan and execute such supercomputer simulations. The current CMIP5 simulations therefore run from 2005 onwards in scenario mode (see Figure 6) rather than being driven by observed forcings. They are driven, e.g., with an average solar cycle, and know nothing of the particularly deep and prolonged solar minimum of 2005-2010.
Internal variability: El Niño, PDO & co.
The strongest internal variability in the climate system on this time scale is the change from El Niño to La Niña – a natural, stochastic “seesaw” in the tropical Pacific called ENSO (El Niño Southern Oscillation).
The fact that El Niño is important for our purposes can already be seen from how much the trend changes if you leave out 1998 (see above): El Niño years are particularly warm (see chart), and 1998 was the strongest El Niño year since records began. Further evidence of the crucial importance of El Niño is that after correcting the global temperature data for the effects of ENSO and the solar cycle by a simple correlation analysis, you get a steady warming trend without any recent slowdown (see next graph and Foster and Rahmstorf 2011). ENSO is responsible for two thirds of the correction. And if you nudge a climate model in the tropical Pacific to follow the observed sequence of El Niño and La Niña events (rather than generating such events itself in random order), then the model reproduces the observed global temperature evolution, including the “hiatus” (Kosaka and Xie 2013).
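For readers who want to see the mechanics of such a correction, here is a minimal sketch of the kind of multiple regression used in Foster & Rahmstorf (2011). The inputs are assumed to be aligned monthly series; the lags and the volcanic aerosol term of the actual paper are omitted for brevity:

```python
import numpy as np

def adjust_for_enso_and_solar(temp, enso, tsi, t):
    """Regress temperature on a linear trend, an ENSO index and solar
    irradiance, then subtract the fitted ENSO and solar contributions."""
    X = np.column_stack([np.ones_like(t), t, enso, tsi])
    beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
    adjusted = temp - beta[2] * enso - beta[3] * tsi
    return adjusted, beta
```

What remains after the subtraction is the trend plus unexplained residuals, which is the sense in which the adjusted series in the next graph shows “steady warming”.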
One can also ask how the observed warming fits the earlier projections of the IPCC. The result looks like this (Rahmstorf et al. 2012):
Figure 3 Comparison of global temperature (average over 5 data sets, including 2 satellite series) with the projections from the 3rd and 4th IPCC reports. Pink: the measured values. Red: the data after adjusting for ENSO, volcanoes and solar activity by a multivariate correlation analysis. The data are shown as a moving average over 12 months. From Rahmstorf et al. 2012.
And what about ocean heat storage? That is not an additional effect, but part of the mechanism by which El Niño years are warm and La Niña years are cold at the Earth’s surface. During El Niño the ocean releases heat; during La Niña it stores more heat. The often-cited data on heat storage in the ocean are therefore just further evidence that El Niño plays a crucial role in the “pause”.
Leading U.S. climatologist Kevin Trenberth has studied this for twenty years and has just published a detailed explanatory article. Trenberth emphasizes the role of long-term variations of ENSO, called the Pacific Decadal Oscillation (PDO). Put simply: phases with more El Niño conditions and phases with predominant La Niña conditions (as we have had recently) may persist for up to two decades in the tropical Pacific. The latter bring somewhat slower warming at the surface of our planet, because more heat is stored deeper in the ocean. A central point here: even if the surface temperature stagnates, our planet continues to take up heat. The increasing greenhouse effect leads to a radiation imbalance: the Earth absorbs more heat from the sun than it emits back into space. 90% of this heat ends up in the ocean, due to the high heat capacity of water. The fact that the ocean continues to heat up, without pause, demonstrates that the greenhouse effect has not subsided, as we have discussed here.
How important the effect of El Niño is will be revealed at the next decent El Niño event. I predicted last year that after the next El Niño a new record in global temperature will be reached – a forecast that will probably be confirmed or falsified soon.
The Arctic data gap
Recently, Cowtan & Way showed that recent warming has been underestimated in the HadCRUT data. After using satellite data and a smart statistical method to fill gaps in the network of weather stations, the global warming trend since 1998 is 0.12 degrees per decade – only a quarter less than the long-term trend of 0.16 degrees per decade measured since 1980. Awareness of this data gap is not new – Simmons et al. showed as early as 2010 that global warming is underestimated in the HadCRUT data, and we have discussed the Arctic data hole repeatedly since 2008 here at RealClimate. NASA GISS has always filled the data gaps by interpolation, albeit with a simpler method, and accordingly the GISTEMP data show hardly any slowdown of warming.
The spatial pattern
Cohen et al. showed two years ago that it is mainly the recent cold winters in Eurasia that have contributed to the flattening of the global warming curve (see figure).
Figure 4 Observed temperature trends in the winter months. Despite the significant global warming in the annual mean, there was a winter cooling in Eurasia. CRUTem3 data (land only!), from Cohen et al. 2012.
They argue that an explanation for the “pause” in global warming would have to explain this particular pattern. But this is not compelling: there could be two independent mechanisms superimposed. One that dampens global warming – which would have to be explained by the global energy balance. And a second one that explains the cold Eurasian winters, but without affecting the global mean temperature. I think the latter is likely – these recent cold winters are part of the much-discussed “warm Arctic – cold continents” pattern (see, e.g., Overland et al. 2011) and could be related to the dwindling ice cover on the Arctic Ocean, as we explained here. Since the heat is just moved around, with Eurasian cold linked to a correspondingly warmer Arctic, this hardly affects the global mean temperature – unless you are looking at a data set with a large data gap in the Arctic.
What does it add up to?
How does all that fit together? As described above, I think (just like Trenberth) that natural variability, in particular ENSO and the PDO, is the main reason for the recent slower warming. From the perspective of the planetary energy balance, heat storage in the ocean is the key mechanism.
If the warming is steady after adjusting for ENSO, volcanoes and solar cycles, does the additional correction for the Arctic data gap by Cowtan & Way mean that the warming after these adjustments has even accelerated? That could be, but only by a small amount. As you can see in Figure 6 of our paper (Foster and Rahmstorf), the slowdown is gone after said adjustment in the GISS data and the two satellite series, but there is still some slowdown in the two data sets with the Arctic gap, i.e. HadCRUT and NCDC. Adding the trend correction of 0.08 degrees per decade from Cowtan & Way to our ENSO-adjusted HadCRUT trend from 1998, you end up at about 0.2 degrees per decade, practically the same value as we got for the GISS data.

If one further adds the effect of the forcings discussed above (excluding the solar activity already accounted for), this would add a few hundredths of a degree. The result would be slightly faster warming than over the entire period since 1980, but probably less than the 0.29 °C per decade measured over 1992-2006. Nothing to get excited about, especially since based on the model calculations you would in any case expect trends of around 0.2 degrees per decade, because the models predict not a constant but a gradually accelerating warming. Which brings us to the comparison with models.
Comparison with models
Figure 5 Comparison of the three measured data sets shown at the outset with earlier IPCC projections from the 1st (FAR), 2nd (SAR), 3rd (TAR) and 4th (AR4) IPCC reports, as well as with the CMIP3 model ensemble. As you can see, the data stay within the projected ranges. Source: IPCC AR5, Figure 1.4. (Small note: “climate skeptics” circulated an earlier, erroneous draft version of this graphic, although it was marked in block letters as a temporary placeholder by the IPCC.)
When comparing data with models, one needs to understand a key point: the models also produce internal variability, including ENSO, but since this (like the weather in the models) is a stochastic process, the El Niños and La Niñas are distributed randomly over the years. Therefore, only in rare cases will a model randomly produce a sequence similar to the observed one, with reduced warming from 1998 to 2012. There are such models – see the first image above – but most show such phases of slow warming, or “hiatus”, at other times.
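As a toy illustration (not a CMIP analysis; the noise parameters below are invented for the sketch), one can ask how often red noise on top of a steady 0.2 °C/decade warming produces at least one 15-year stretch with a near-zero trend:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_runs = 100, 2000
trend = 0.02            # underlying warming, deg C per year (0.2 per decade)
phi, sigma = 0.5, 0.1   # AR(1) autocorrelation and innovation std dev

hits = 0
for _ in range(n_runs):
    noise = np.zeros(n_years)
    for i in range(1, n_years):
        noise[i] = phi * noise[i - 1] + rng.normal(0.0, sigma)
    series = trend * np.arange(n_years) + noise
    # scan every 15-year window for a trend below 0.05 deg C per decade
    for s in range(n_years - 15 + 1):
        slope = np.polyfit(np.arange(15.0), series[s:s + 15], 1)[0]
        if 10 * slope < 0.05:
            hits += 1
            break

print(f"runs with at least one 15-yr 'hiatus': {hits / n_runs:.0%}")
```

The point is not the exact percentage but that such stretches occur regularly in simulations whose underlying warming never changes, and that their timing is random.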
The IPCC has therefore never tried to predict the climate evolution over 15 years, because that is simply too strongly influenced by random internal variability (such as ENSO), which we cannot predict (at least not yet).
However, all models show such variability – no one who understands this issue could have been surprised that there can be such hiatus phases. They’ve also occurred in the past, for example from 1982, as Trenberth shows in his Figure 4.
The following graph shows a comparison of observational data with the CMIP5 ensemble of model experiments that have been made for the current IPCC report. The graph shows that the El Niño year 1998 is at the top and the last two cool La Niña years are at the bottom of the model projection range (for the various reasons explained above). However, the temperatures (at least according to the data of Cowtan & Way) are within the range which is spanned by 90% of the models.
Figure 6 Comparison of 42 CMIP5 simulations with the observational data. The HadCRUT4 value for 2013 is provisional of course, still without November and December. (Source: DeepClimate.org)
So there is no evidence for model errors here (for more on this see this article). Nor is this evidence for a lower climate sensitivity, even if that was proposed some time ago by Otto et al. (2013). Trenberth et al. suggest that merely choosing a different data set of ocean heat content would have increased the climate sensitivity estimate of Otto et al. by 0.5 degrees. In addition, Otto et al. used the HadCRUT4 temperature data with its particularly low recent warming. With an honest appraisal of the full uncertainty, also in the forcing, one must come to the conclusion that such a short period is not sufficient to draw conclusions about climate sensitivity.
Global temperature has in recent years increased more slowly than before, but this is within the normal natural variability that always exists, and also within the range of predictions by climate models – even though some cooling factors, such as the deep solar minimum, are not included in the models. There is therefore no reason to find the models faulty. There is also no reason to expect less warming in the future – in fact, perhaps rather the opposite, as the climate system catches up again due to its natural oscillations, e.g. when the Pacific Decadal Oscillation swings back to its warm phase. Even now global temperatures are very high again – in the GISS data, with an anomaly of +0.77 °C, November was warmer than in the previous record year of 2010 (+0.67 °C), making it the warmest November since records began in 1880.
PS: This article was translated from the German original at RC’s sister blog KlimaLounge. KlimaLounge has been nominated as one of 20 blogs for the award of German science blog of the year 2013. If you’d like to vote for us: simply go to this link, select KlimaLounge in the list and press the “vote” button.
- K. Cowtan, and R.G. Way, "Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends", Quarterly Journal of the Royal Meteorological Society, 2013. http://dx.doi.org/10.1002/qj.2297
- R.R. Neely, O.B. Toon, S. Solomon, J. Vernier, C. Alvarez, J.M. English, K.H. Rosenlof, M.J. Mills, C.G. Bardeen, J.S. Daniel, and J.P. Thayer, "Recent anthropogenic increases in SO2 from Asia have minimal impact on stratospheric aerosol", Geophysical Research Letters, vol. 40, pp. 999-1004, 2013. http://dx.doi.org/10.1002/grl.50263
- R.K. Kaufmann, H. Kauppi, M.L. Mann, and J.H. Stock, "Reconciling anthropogenic climate change with observed temperature 1998-2008", Proceedings of the National Academy of Sciences, vol. 108, pp. 11790-11793, 2011. http://dx.doi.org/10.1073/pnas.1102467108
- F. Estrada, P. Perron, and B. Martínez-López, "Statistically derived contributions of diverse human influences to twentieth-century temperature changes", Nature Geoscience, vol. 6, pp. 1050-1055, 2013. http://dx.doi.org/10.1038/ngeo1999
- G. Foster, and S. Rahmstorf, "Global temperature evolution 1979-2010", Environmental Research Letters, vol. 6, pp. 044022, 2011. http://dx.doi.org/10.1088/1748-9326/6/4/044022
- Y. Kosaka, and S. Xie, "Recent global-warming hiatus tied to equatorial Pacific surface cooling", Nature, vol. 501, pp. 403-407, 2013. http://dx.doi.org/10.1038/nature12534
- S. Rahmstorf, G. Foster, and A. Cazenave, "Comparing climate projections to observations up to 2011", Environmental Research Letters, vol. 7, pp. 044035, 2012. http://dx.doi.org/10.1088/1748-9326/7/4/044035
- A.J. Simmons, K.M. Willett, P.D. Jones, P.W. Thorne, and D.P. Dee, "Low-frequency variations in surface atmospheric humidity, temperature, and precipitation: Inferences from reanalyses and monthly gridded observational data sets", Journal of Geophysical Research, vol. 115, 2010. http://dx.doi.org/10.1029/2009JD012442
- J.L. Cohen, J.C. Furtado, M.A. Barlow, V.A. Alexeev, and J.E. Cherry, "Arctic warming, increasing snow cover and widespread boreal winter cooling", Environmental Research Letters, vol. 7, pp. 014007, 2012. http://dx.doi.org/10.1088/1748-9326/7/1/014007
- J.E. Overland, K.R. Wood, and M. Wang, "Warm Arctic-cold continents: climate impacts of the newly open Arctic Sea", Polar Research, vol. 30, 2011. http://dx.doi.org/10.3402/polar.v30i0.15787
- A. Otto, F.E.L. Otto, O. Boucher, J. Church, G. Hegerl, P.M. Forster, N.P. Gillett, J. Gregory, G.C. Johnson, R. Knutti, N. Lewis, U. Lohmann, J. Marotzke, G. Myhre, D. Shindell, B. Stevens, and M.R. Allen, "Energy budget constraints on climate response", Nature Geoscience, vol. 6, pp. 415-416, 2013. http://dx.doi.org/10.1038/ngeo1836
So, it’s that time of year again.
Fall AGU is the largest Earth Science conference on the planet, and is where you will get previews of new science results, get a sense of what other experts think about current topics, and indulge in the more social side of being a scientist. The full scientific program is available for searching here.
In recent years, there has been an increasing amount of virtual content – including live streaming of key sessions and high-profile lectures, and continuous twitter commentary (follow the hashtag #AGU13) – that gives people not attending a sense of what’s going on. Gavin and Mike are attending and will try to give some highlights as the week goes along, here and via twitter (follow @ClimateOfGavin and @MichaelEMann).
Some obvious highlights (that will be live-streamed) are the Frontiers of Geophysics lecture from Jim Hansen (Tuesday, 12:30pm PST), Senator Olympia Snowe (Monday, 12:30pm), Judith Lean (Tues 10:20am), the Charney Lecture from Lenny Smith (Tues 11:20am), James Elsner on tornado connections to climate change (Tues 2:40pm), David Grinspoon (the Sagan lecture, Thurs 9am), and Bill Ruddiman (Thursday 2:40pm). Some full sessions will also be live-streamed – for instance, the Future of the IPCC session (Tues 10:20am-12:30pm) and the Climate Literacy sessions (Tues 4:00pm-6:00pm, Wed 8am-12:30pm).
For attendees, there are a number of events close to our hearts: A bloggers forum for discussion on science blogging (Mon 5pm), the Open Mic night hosted by Richard Alley (Mon 7:30pm at Jillian’s Restaurant), and the AGU 5k run on Wednesday morning (6:30am).
Also, AGU and the Climate Science Legal Defense Fund have organised a facility for individual consultations with a lawyer (by appointment via firstname.lastname@example.org), either for people who have found themselves involved in legal proceedings associated with their science, or for people who are just interested in what they might need to be prepared for. There is a brown bag lunch session on Friday (12:30pm PST) for a more informal discussion of relevant issues.
There are obviously many individual presentations that will be of interest, but too many to list here. Feel free to add suggestions in the comments and look out for updates all next week.
I was disappointed by the recent Summary for Policymakers (SPM) of the Intergovernmental Panel on Climate Change (IPCC) assessment report 5, now that I have finally gotten around to reading it. Not so much because of the science, but because of the way it presented the science.
The report was written by top scientists, so what went wrong?
I guess we need to recognise the limitations of the SPM format, and the constraints the authors have to work under (word-by-word approval from 190 country representatives) may not have been helpful this time. The specified report length, combined with attempts from lots of people to expand the content, may have complicated the process.
My impression is that cramming as much information as possible into this report was considered more important than delivering a few strong messages.
The SPM really provides a lot of facts, but what do all those numbers mean for policy makers? There was little attempt to set the findings in a context relevant for decision making (ranging from the national scale to small businesses).
It is difficult to write a summary for a report that has not yet been published, and for that reason the SPM is cluttered with technical details and discussions about uncertainty and confidence which would have a better place in the main report.
The authors of the SPM are experts at writing scientific papers, but that is a different skill from writing for non-scientists. Often, the order of presentation that works for non-scientists is the opposite of the way scientific papers are structured.
A summary should really start with the most important message, but the SPM starts by discussing uncertainties. It is then difficult for non-scientists to make sense of the report. Are the results reliable or not?
I asked myself after reading the SPM – what’s the most important finding? If the IPCC hoped for good press coverage, I can imagine all journalists asking the same question.
My recommendation is that next time, the main report be published before the SPM. That way, all the space used on uncertainty and confidence in the SPM could be spared.
I also recommend that the people who decide the structure of future SPMs and undertake the writing take a course in effective writing for non-scientists. At MET Norway, we have had such writing lessons to improve our communication skills, and I have found this training valuable.
It takes some training to find more accessible ways to describe science and to spot excessive use of jargon. Many words, such as ‘positive feedback’, have different meanings for a scientist and a non-scientist (a bad phrase to use in the context of climate change for people with very little science background). The word ‘uncertainty’ is not a good choice either – what does it really mean?
There are examples of how the report could have been done better: the European Academies of Science Advisory Council (EASAC) followed a different strategy, in which the main report was published before the summary; hence the summary could be written as a true summary, with a more coherent structure and a stronger connection to the report’s target group.
The World Bank report of last year also comes to my mind – I think that is a much clearer form of presentation.
If I could have my way, I would also suggest that the IPCC’s main reports in the future come with supporting material that includes the necessary data (extracted for plotting purposes, but with meta-data providing the complete history of post-processing) and the source code for generating all the figures in the report.
One way to do that could be to use so-called ‘R packages’, as suggested by Pebesma et al (2012) (PDF). It would also be good if future assessment reports paid more attention to replicating important results as a means of verification or falsification.
p.s. After posting this article, I was made aware of two short documents summarizing the IPCC reports – link here. I’m really grateful for this feedback. -rasmus
- E. Pebesma, D. Nüst, and R. Bivand, "The R software environment in reproducible geoscientific research", Eos, Transactions American Geophysical Union, vol. 93, pp. 163, 2012. http://dx.doi.org/10.1029/2012EO160003
This month’s open thread. It’s coming to the end of the year and that means updates to the annual time series of observations and models relatively soon. Suggestions for what you’d like to see assessed are welcome… or any other climate science related topic.
Lots of interesting methane papers this week. In Nature Geoscience, Shakhova et al (2013) have published a substantial new study of the methane cycle on the Siberian continental margin of the Arctic Ocean. This paper will get a lot of attention, because it follows by a few months a paper from last summer, Whiteman et al (2013), which claimed a strong (and expensive) potential impact from Arctic methane on near-term climate evolution. That economic modeling study was based on an Arctic methane release scenario proposed in an earlier paper by Shakhova (2010). In PNAS, Miller et al (2013) find that the United States may be emitting 50-70% more methane than we thought. So where does this leave us?
Because methane is mostly well-mixed in the atmosphere, emissions from the Arctic or from the US must be seen within the context of the global sources of methane to the atmosphere. Estimates of methane emissions from the Arctic have risen, from land (Walter et al 2006) as well as, now, from the continental shelf off Siberia. Call it 20-30 Tg CH4 per year from both sources. The US is apparently emitting more than we thought, maybe 30 Tg CH4 per year. But these fluxes are relatively small compared to the global emission rate of about 600 Tg CH4 per year; the Arctic and US anthropogenic sources are each about 5% of the total. Changes in the atmospheric concentration scale more-or-less with changes in the chronic emission flux, so unless these sources suddenly increase by an order of magnitude or more, they won’t dominate the atmospheric concentration of methane, or its climate impact.
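That scaling follows from a simple one-box budget with an atmospheric lifetime τ. With round numbers (τ ≈ 9 years, and the ~600 Tg/yr source quoted above; an order-of-magnitude check, not a budget estimate):

```latex
\frac{dC}{dt} = E - \frac{C}{\tau}
\quad\Rightarrow\quad
C_{\mathrm{eq}} = E\,\tau \approx 600\ \mathrm{Tg\,yr^{-1}} \times 9\ \mathrm{yr}
\approx 5400\ \mathrm{Tg\ CH_4}
```

which is close to the observed atmospheric burden, and a 5% change in the chronic flux E shifts the equilibrium burden by the same 5%.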
American Methane Emissions Higher Than Previously Thought
Miller et al (2013) combine measurements of methane concentrations in various locations through time with model reconstructions of wind fields, and “invert” the information to estimate how much methane was released to the air as it blew over the land. This is a well-established methodology, pushed to constrain US anthropogenic emissions by including measurements from aircraft and communications towers in addition to the ever-invaluable NOAA flask sample network, and incorporating socioeconomic and industrial data. The US appears to be emitting 50-70% more methane than the EPA thought we were, based on “bottom up” accounting (adding up all the known sources).
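To make the “invert” step concrete, here is a toy version of the idea, with everything synthetic (real inversions use atmospheric transport models, Bayesian priors and careful error covariances): observed concentrations y relate to unknown regional emissions x through a transport operator H, and one solves for x.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_regions = 200, 5

# Stand-in transport "footprints": how strongly each region's emissions
# show up in each observation (a real H comes from a transport model).
H = rng.random((n_obs, n_regions))
x_true = np.array([3.0, 1.0, 5.0, 2.0, 4.0])   # true emissions, arbitrary units
y = H @ x_true + rng.normal(0.0, 0.5, n_obs)   # simulated observations

x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares inversion
print(np.round(x_hat, 2))                      # close to x_true
```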
Is this bad news for global warming?
Not really, because the one real hard fact that we know about atmospheric methane is that its concentration isn’t rising very quickly. Methane is a short-lived gas in the atmosphere, so to make it rise, the emission flux has to continually increase. This is in contrast to CO2, which accumulates in the atmosphere / ocean system, meaning that steady (non-rising) emissions still lead to a rising atmospheric concentration. There is enough uncertainty in the methane budget that tweaks of a few percent here and there don’t upset the apple cart. Since the methane concentration hasn’t been rising all that much, its sources, uncertain as they are, must have been mostly balanced by sinks, also uncertain. If anything, the paper is good news for people concerned about global warming, because it gives us something to fix.
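A minimal sketch of that contrast, using a one-box atmosphere with round illustrative numbers (a ~10-year CH4 lifetime; CO2 treated as simply accumulating, which ignores ocean uptake):

```python
import numpy as np

tau = 10.0                # approximate CH4 lifetime, years
E = 600.0                 # constant CH4 emissions, Tg per year
t = np.arange(0.0, 101.0)

# CH4: dC/dt = E - C/tau, so the burden levels off at E*tau
ch4_burden = E * tau * (1.0 - np.exp(-t / tau))

# CO2 (crude): no fast sink, so a constant source just accumulates
co2_burden = E * t

print(ch4_burden[-1])     # ~6000 Tg: steady state, no further rise
print(co2_burden[-1])     # keeps growing on this time scale
# Steady CH4 emissions, even larger ones than we thought, do not by
# themselves make the CH4 concentration keep climbing.
```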
Methane from the Siberian continental shelf
The Siberian continental shelf is huge, comprising about 20% of the global area of continental shelf. Sea level dropped during the last glacial maximum, but there was no ice sheet in Siberia, so the surface was exposed to the really cold atmosphere, and the ground froze to a depth of ~1.5 km. When sea level rose, the permafrost layer came under attack by the relatively warm ocean water. The submerged permafrost has been melting for millennia, but warming of the waters on the continental shelf could accelerate the melting. In equilibrium there should be no permafrost underneath the ocean, because the ocean is unfrozen, and the sediment gets warmer with depth below that (the geothermal temperature gradient).
Ingredients of Shakhova et al (2013)
- There are lots of bubbles containing mostly methane coming up from the shallow sea floor on the East Siberian Arctic shelf. Bubbles like this have been seen elsewhere, off Spitzbergen for example (Shakhova et al (2013)). Most of the seep sites on the Siberian margin have relatively low flow, but a few of them are much larger.
- The bubbles mostly dissolve in the water column, but when the methane flux gets really high, the bubbles rise faster and are more likely to reach the atmosphere. When methane dissolves in the water column, some of it escapes to the atmosphere by evaporation before it gets oxidized to CO2. Storms seem to pull methane out of the water column, enhancing what oceanographers call “gas exchange” by making waves with whitecaps. Melting sea ice will also increase methane escape to the atmosphere by gas exchange. However, the concentration of methane in the water column is low enough that even with storms the gas exchange flux seems like it must be negligible compared with the bubble flux. In their calculation of the methane flux to the atmosphere, Shakhova et al focused on the bubbles.
- Sediments that got flooded by rising sea level thousands of years ago are warmer than sediments still exposed to the colder atmosphere, down to a depth of ~50 meters. This information is not directly applied to the question of incremental melting by warming waters in the short-term future.
- The study derives an estimate of a total methane emission rate from the East Siberian Arctic shelf area based on the statistics of a very large number of observed bubble seeps.
Is the methane flux from the Arctic accelerating?
Shakhova et al (2013) argue that bottom water temperatures are increasing more than had been recognized, in particular in near-coastal (shallow) waters. Sea ice cover has certainly been decreasing. These factors will no doubt lead to an increase in methane flux to the atmosphere, but the question is how strong this increase will be and how fast. I’m not aware of any direct observation of methane emission increase itself. The intensity of this response is pretty much the issue of the dispute about the Arctic methane bomb (below).
What about the extremely high methane concentrations measured in Arctic airmasses?
Shakhova et al (2013) show shipboard measurements of methane concentrations in the air above the ESAS that are almost twice as high as the global average (which is already twice as high as preindustrial). Aircraft measurements published last year also showed plumes of high methane concentration over the Arctic ocean (Kort et al 2012), especially in the surface boundary layer. It’s not easy to interpret boundary-layer methane concentrations quantitatively, however, because the concentration in that layer depends on the thickness of the boundary layer and how isolated it is from the air above it. Certainly high methane concentrations indicate emission fluxes, but it’s not straightforward to know how significant that flux is in the global budget.
The more easily interpretable measurement is the time-averaged difference between Northern and Southern hemisphere methane concentrations. If Arctic methane were driving a substantial increase in the global atmospheric methane concentration, it would be detectable in this time-mean interhemispheric gradient. Northern hemisphere concentrations are a bit higher than they are in the Southern hemisphere (here), but the magnitude of the difference is small enough to support the conclusion from the methane budget that tropical wetlands, which don’t generate much interhemispheric gradient, are a dominant natural source (Kirschke et al 2013).
What about methane hydrates?
There are three possible sources of the methane in the bubbles rising out of the Siberian margin continental shelf:
- Decomposition (fermentation) of thawing organic carbon deposited with loess (windblown glacial flour) when the sediment was exposed to the atmosphere by low sea level during the last glacial time. Organic carbon deposits (called Yedoma) are the best-documented carbon reservoir in play in the Arctic.
- Methane gas that has been trapped by ice, now escaping. Shakhova et al (2013) figure that flaws in the permafrost called taliks, resulting from geologic faults or long-running rivers, might allow gas to escape through what would otherwise be impermeable ice. If there were a gas pocket of 50 Gt, it could conceivably escape quickly if a seal breached, but given that global gas reserves come to ~250 Gt, a 50 Gt gas pocket near the surface would be very large and obvious. There could be 50 Gt of small, disseminated bubbles distributed throughout the sediment column of the ESAS, but in that case I’m not sure where the short time scale for getting the gas to move comes from. I would think the gas would dribble out over the millennia as the permafrost melts.
- Decomposition (melting) of methane hydrates, a peculiar form of water ice cages that form in the presence of, and trap, methane.
Methane hydrate seems menacing as a source of gas that can spring aggressively from the solid phase, like pop rocks (carbonated candies). But hydrate doesn’t just explode as soon as it crosses a temperature boundary. It takes heat (what is called latent heat) to convert hydrate into fluid + gas, just like regular water ice. There could be a lot of hydrate in Arctic sediments (it’s not really well known how much there is), but there is also a lot of carbon as organic matter frozen in the permafrost. Their time scales for mobilization are not really all that different, so I personally don’t see hydrates as scarier than frozen organic matter. I think it just seems scarier.
The other thing about hydrate is that at any given temperature, a minimum pressure is required for hydrate to be stable. If a pure gas phase is present, then by Henry’s law the dissolved methane concentration in the pore water scales with pressure. At 0 degrees C, you need a pressure equivalent to ~250 meters of water depth to get enough dissolved methane for hydrate to form.
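As a rough consistency check on that depth figure (pure hydrostatics, nothing about the hydrate phase diagram itself):

```latex
P \approx P_{\mathrm{atm}} + \rho g h
\approx 10^{5}\,\mathrm{Pa} + (1025\,\mathrm{kg\,m^{-3}})(9.8\,\mathrm{m\,s^{-2}})(250\,\mathrm{m})
\approx 2.6\ \mathrm{MPa} \approx 26\ \mathrm{bar}
```

so hydrate stability at 0 °C requires on the order of 25 atmospheres, which a few tens of meters of shelf water cannot provide.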
The scariest parts of the Siberian margin are the shallow parts, because this is where methane bubbles from the sea floor might reach the surface, and this is where the warming trend is observed most strongly. But methane hydrate can only form hundreds of meters below the sea floor in that setting, so thermodynamically, hydrate is not expected to be found at or near the sea floor. (Methane hydrate can be found close to the sediment surface in deeper-water settings, as for example in the Gulf of Mexico or the Nankai Trough.) The implication is that it will take centuries or longer before heat diffusion through that sediment column can reach and destabilize methane hydrates.
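The “centuries or longer” figure is what simple heat diffusion gives. Taking a typical sediment thermal diffusivity of κ ≈ 10⁻⁶ m² s⁻¹ (an assumed round value) and a couple of hundred meters of sediment:

```latex
t \sim \frac{L^{2}}{\kappa}
= \frac{(200\,\mathrm{m})^{2}}{10^{-6}\,\mathrm{m^{2}\,s^{-1}}}
= 4\times10^{10}\,\mathrm{s} \approx 1300\ \mathrm{yr}
```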
Is there any way nature might evade this thermodynamic imperative?
If hydrate exists in near-surface sediments of the Siberian margin, it would be called “metastable”. Metastability in nature is common when forming a new phase, for which a “seed” or starting crystal is needed, like cloud droplets freezing in the upper atmosphere. But for decomposition into water and gas, one would not generally expect such a barrier: the hydrate should simply melt when energy is available. Chuvilin et al (2011) monitored melting hydrate in the laboratory and observed some quirkiness.
But these experiments spanned 100 hours, while the sediment column has been warming for thousands of years, so the experiments do not really address the question. And if there were some impervious-to-melting hydrate, why would it suddenly melt, all at once, in a few years? Actual samples of hydrate collected from shallow sediments on the Siberian shelf would be much more convincing.
What about that Arctic methane bomb?
Shakhova et al (2013) did not find, or claim to have found, a 50 Gt C reservoir of methane ready to erupt in a few years. That claim, which is the basis of the Whiteman et al (2013) $60 trillion Arctic methane bomb paper, remains as unsubstantiated as ever. The Siberian Arctic and the United States each emit a few percent of the global total. Significant, but not bombs; more like large firecrackers.
- N. Shakhova, I. Semiletov, I. Leifer, V. Sergienko, A. Salyuk, D. Kosmach, D. Chernykh, C. Stubbs, D. Nicolsky, V. Tumskoy, and Ö. Gustafsson, "Ebullition and storm-induced methane release from the East Siberian Arctic Shelf", Nature Geoscience, vol. 7, pp. 64-70, 2013. http://dx.doi.org/10.1038/ngeo2007
- G. Whiteman, C. Hope, and P. Wadhams, "Climate science: Vast costs of Arctic change", Nature, vol. 499, pp. 401-403, 2013. http://dx.doi.org/10.1038/499401a
- N.E. Shakhova, V.A. Alekseev, and I.P. Semiletov, "Predicted methane emission on the East Siberian shelf", Doklady Earth Sciences, vol. 430, pp. 190-193, 2010. http://dx.doi.org/10.1134/S1028334X10020091
- S.M. Miller, S.C. Wofsy, A.M. Michalak, E.A. Kort, A.E. Andrews, S.C. Biraud, E.J. Dlugokencky, J. Eluszkiewicz, M.L. Fischer, G. Janssens-Maenhout, B.R. Miller, J.B. Miller, S.A. Montzka, T. Nehrkorn, and C. Sweeney, "Anthropogenic emissions of methane in the United States", Proceedings of the National Academy of Sciences, vol. 110, pp. 20018-20022, 2013. http://dx.doi.org/10.1073/pnas.1314392110
- E.A. Kort, S.C. Wofsy, B.C. Daube, M. Diao, J.W. Elkins, R.S. Gao, E.J. Hintsa, D.F. Hurst, R. Jimenez, F.L. Moore, J.R. Spackman, and M.A. Zondlo, "Atmospheric observations of Arctic Ocean methane emissions up to 82° north", Nature Geoscience, vol. 5, pp. 318-321, 2012. http://dx.doi.org/10.1038/ngeo1452
- S. Kirschke, P. Bousquet, P. Ciais, M. Saunois, J.G. Canadell, E.J. Dlugokencky, P. Bergamaschi, D. Bergmann, D.R. Blake, L. Bruhwiler, P. Cameron-Smith, S. Castaldi, F. Chevallier, L. Feng, A. Fraser, M. Heimann, E.L. Hodson, S. Houweling, B. Josse, P.J. Fraser, P.B. Krummel, J. Lamarque, R.L. Langenfelds, C. Le Quéré, V. Naik, S. O'Doherty, P.I. Palmer, I. Pison, D. Plummer, B. Poulter, R.G. Prinn, M. Rigby, B. Ringeval, M. Santini, M. Schmidt, D.T. Shindell, I.J. Simpson, R. Spahni, L.P. Steele, S.A. Strode, K. Sudo, S. Szopa, G.R. van der Werf, A. Voulgarakis, M. van Weele, R.F. Weiss, J.E. Williams, and G. Zeng, "Three decades of global methane sources and sinks", Nature Geoscience, vol. 6, pp. 813-823, 2013. http://dx.doi.org/10.1038/ngeo1955
In the long run, sea-level rise will be one of the most serious consequences of global warming. But how fast will sea levels rise? Model simulations are still associated with considerable uncertainty: the processes that contribute to the rise are too complex and varied. A just-published survey of 90 sea-level experts from 18 countries now reveals how much sea-level rise the wider expert community expects. With successful, strong mitigation measures, the experts expect a likely rise of 40-60 cm in this century and 60-100 cm by the year 2300. With unmitigated warming, however, the likely range is 70-120 cm by 2100 and two to three meters by the year 2300.
Complex problems often cannot simply be answered with computer models. Experts form their views on a topic from the totality of their expertise, which includes knowledge of observational findings and model results, as well as their understanding of the methodological strengths and weaknesses of the various studies. Such expertise results from years of study of a topic: through one’s own research, through following the scientific literature, and through the ongoing critical discussion with colleagues at conferences.
For many topics it would be interesting for the public to know what the expert community thinks. If I had a dangerous disease, I would give a lot to learn what the best specialists from around the world think about it. Mostly, however, this expertise is not transparent to outsiders. The media only offer a rather selective window into experts’ minds.
More transparency can be achieved through systematic surveys of experts. The International Council of Scientific Academies (InterAcademy Council, IAC) in its review of IPCC procedures recommended in 2010: “Where practical, formal expert elicitation procedures should be used to obtain subjective probabilities for key results”. We took this advice and last November conducted a broad expert survey on future sea-level rise, in the context of a research project funded by NOAA. Lead author is Ben Horton (Rutgers University), the further authors are Simon Engelhart (University of Rhode Island) and Andrew Kemp (Tufts University).
The credibility of such surveys stands and falls with the selection of experts (see Gavin’s article A new survey of scientists). It is important to identify relevant experts using objective criteria. For us, formal criteria such as professorships were not relevant; our objective was to reach active sea-level researchers. To this end we used the scientific publication database Web of Science from Thomson Reuters and had it generate a list of the 500 researchers who had published the most papers matching the search term “sea level” in the peer-reviewed literature over the last five years. At least 6 publications were required for a scientist to make it onto this list. For 360 of those experts we were able to find email addresses. We asked them for their estimates of the sea-level rise from 2000 to 2100 and to 2300, both the “likely” rise (17th to 83rd percentile) and the range from the 5th to the 95th percentile (the 95th percentile is the rise which, according to the expert, will not be exceeded with 95% probability). 90 experts from 18 countries provided their responses.
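For concreteness, here is a minimal sketch of how such elicited numbers get aggregated; the responses below are invented placeholders, not the actual survey data:

```python
import numpy as np

# Hypothetical upper limits of the 'likely' range (RCP8.5, year 2100), in cm,
# one value per expert.
upper_likely = np.array([70, 80, 90, 98, 100, 100, 110, 120, 150, 200])

print(np.median(upper_likely))                # median across experts
print(np.percentile(upper_likely, [17, 83]))  # spread of the expert answers
print(np.mean(upper_likely > 98))             # fraction above the IPCC's 98 cm
```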
Sea-level: a bit of context
For context, the following figure from the current IPCC report summarizes the sea-level evolution:
Figure 1: Sea level rise according to the IPCC report of 2013. Shown is the past history of sea level since the year 1700 from proxy data (sediments, purple) and multiple records from tide gauge measurements. Light blue are the satellite data (from 1993). The two future scenarios mentioned in the text (RCP8.5 and RCP2.6) are shown in red and blue, with their “likely” uncertainty range according to the IPCC (meaning a 66 % probability to remain inside this range). Source: IPCC AR5 Fig. 13.27.
A more detailed discussion of the IPCC sea-level numbers can be found here. The red and blue future scenarios correspond (to a good approximation) to the two climate scenarios on which we surveyed the experts: blue a scenario with effective climate mitigation, red a scenario with further unabated growth of emissions into the 22nd century.
The survey results
The following graph shows what the surveyed experts expect for these two scenarios up to the year 2100:
Figure 2: Sea level rise over the period 2000-2100 for two warming scenarios (red RCP8.5, blue RCP2.6). The ranges show the average of the numbers given across all the experts. The inner (darker) range shows the 17th to 83rd percentile values, the outer range the 5th to 95th percentiles. For comparison, the NOAA projections of December 2012 (dashed lines) and the new IPCC projections (bars on the right) are shown. Since this graph shows the increase from the year 2000, about 25 cm should be added for a direct numerical comparison with the previous graph.
The experts gave a median rise of 40-60 cm for the blue climate scenario and 70-120 cm for the red scenario. Most of the experts thus expect a larger rise than the IPCC: about two-thirds (65%) give a higher value than the IPCC for the upper limit of the red ‘likely’ range, even though the IPCC has increased its projections by ~60% since its last report of 2007. In expert circles the IPCC reports are widely considered to be conservative; this is empirical confirmation of that.
The following table shows all the median values:
Highly relevant for coastal protection is a “high-end” value that with high probability will not be exceeded – say, the 95th percentile in the table above, below which the rise will remain with 95 percent probability. For the red scenario, about half of the experts (51%) gave 1.5 meters or higher for this value, and a quarter (27%) even gave 2 meters or higher. This is for the rise from 2000 to 2100. In the longer term, for the rise from 2000 to the year 2300, the majority of experts (58%) put this high-end value at 4 meters or higher.
These numbers reflect the fact that experts (including myself) have become more pessimistic about sea-level rise in recent years, in the light of new data and insights mainly concerning the dynamic response of the ice sheets.
Experts quoted in the media are often chosen according to media needs – a popular format is to present a topic as a controversy, with one expert pro and one contra. In this way the experts are portrayed as divided into “two camps”, regardless of whether this reflects reality. This “two-camps theory” is then used as a justification to cite (in the name of supposed balance) counter-arguments from “climate skeptics” of doubtful expertise. Especially in the US this “false balance” phenomenon is widespread.
In the distribution of expert estimates we find no evidence in support of the two-camps theory, as shown in the following graph.
Figure 3. Distribution of the experts’ answers to the upper limit of the ‘likely’ range for the RCP8.5 scenario by the year 2100. (These numbers can be compared to the value of 98 cm given in the IPCC report.)
There is no split into two groups that could be termed “alarmists” and “skeptics” – this idea can thus be regarded as empirically falsified. That is consistent with other surveys, such as that of continental ice experts by Bamber & Aspinall (Nature Climate Change 2013). Instead, we see in the distribution of responses a broad scientific “mainstream” with a normal spread (the large hump of three bars centered on 100 cm, in which I also find myself), complemented by a long tail of about a dozen “pessimists” who are worried about a much larger sea-level rise. Let’s hope these outliers are wrong. At least I don’t see a plausible physical mechanism for such a rapid rise.
A study on the regional differences in sea-level rise: A scaling approach to project regional sea level rise and its uncertainties
A study on impacts on cities: Future flood losses in major coastal cities
And the Washington Post: How high will sea levels rise? Let’s ask the experts.