

Date: Thursday, 10 Jul 2014 08:49

A new study by Screen and Simmonds demonstrates the statistical connection between high-amplitude planetary waves in the atmosphere and extreme weather events on the ground.

Guest post by Dim Coumou

There has been an ongoing debate, both in and outside the scientific community, whether rapid climate change in the Arctic might affect circulation patterns in the mid-latitudes, and thereby possibly the frequency or intensity of extreme weather events. The Arctic has been warming much faster than the rest of the globe (about twice the rate), associated with a rapid decline in sea-ice extent. If parts of the world warm faster than others then of course gradients in the horizontal temperature distribution will change – in this case the equator-to-pole gradient – which then could affect large scale wind patterns.

Several dynamical mechanisms for this have been proposed recently. Francis and Vavrus (GRL 2012) argued that a reduction of the north-south temperature gradient would cause weaker zonal winds (winds blowing west to east) and therefore a slower eastward propagation of Rossby waves. A change in Rossby wave propagation has not yet been detected (Barnes 2013) but this does not mean that it will not change in the future. Slowly-traveling waves (or quasi-stationary waves) would lead to more persistent and therefore more extreme weather. Petoukhov et al (2013) actually showed that several recent high-impact extremes, both heat waves and flooding events, were associated with high-amplitude quasi-stationary waves.

Intuitively it makes sense that slowly-propagating Rossby waves lead to more surface extremes. These waves form in the mid-latitudes at the boundary of cold air to the north and warm air to the south. Thus, with persistent strongly meandering isotherms, some regions will experience cold and others hot conditions. Moreover, slow wave propagation would prolong certain weather conditions and therefore lead to extremes on timescales of weeks: one day with temperatures over 30°C in, say, Western Europe is not really unusual, but 10 or 20 days in a row will be.

But although it intuitively makes sense, the link between high-amplitude Rossby waves and surface extremes was so far not properly documented in a statistical way. It is this piece of the puzzle which is addressed in the new paper by Screen and Simmonds recently published in Nature Climate Change (“Amplified mid-latitude planetary waves favour particular regional weather extremes”).

In a first step, they extract the 40 most extreme months in the mid-latitudes for both temperature and precipitation in the 1979-2012 period, using all calendar months. They do this by averaging absolute values of temperature and precipitation anomalies, which is appropriate since planetary waves are likely to induce both negative and positive anomalies simultaneously in different regions. This way they determine the 40 most extreme months and also 40 moderate months, i.e., those months with the smallest absolute anomalies. By using monthly-averaged data, fast-traveling waves are filtered out and only the quasi-stationary component remains, i.e. the persistent weather conditions. Next they show that roughly half of the extreme months were associated with statistically significantly amplified waves. Conversely, the moderate months were associated with reduced wave activity. So this nicely confirms statistically what one would expect.
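To make the ranking step concrete, here is a minimal sketch (not the authors' code) of how one might pick the 40 most extreme and 40 most moderate months from a gridded field of monthly anomalies; the variable name and the use of xarray are assumptions for illustration.

```python
# Minimal sketch of the ranking step (not the authors' code). Assumes
# `t_anom` is an xarray DataArray of monthly land anomalies with dims
# (time, lat, lon), already restricted to the 35-60N band.
import numpy as np

def rank_months_by_absolute_anomaly(anom, n=40):
    weights = np.cos(np.deg2rad(anom.lat))            # area weighting
    # Mid-latitude mean of the *absolute* anomaly: sign-agnostic, so
    # simultaneous warm and cold excursions both count towards "extreme".
    abs_mean = np.abs(anom).weighted(weights).mean(dim=("lat", "lon"))
    order = abs_mean.argsort().values
    extreme  = abs_mean.isel(time=order[-n:])   # n largest  -> extreme months
    moderate = abs_mean.isel(time=order[:n])    # n smallest -> moderate months
    return extreme, moderate

# extreme_months, moderate_months = rank_months_by_absolute_anomaly(t_anom)
```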


Figure: a,b, Normalized monthly time series of mid-latitude (35°–60° N) mean land-based absolute temperature anomalies (a) and absolute precipitation anomalies (b), 1979–2012. The 40 months with the largest values are identified by circles and labelled on the lower x axis, and the green line shows the threshold value for extremes. c,d, Normalized wave amplitude anomalies, for wave numbers 3–8, during 40 months of mid-latitude mean temperature extremes (c) and precipitation extremes (d). The months are labelled on the abscissa in order of decreasing extremity from left to right. Grey shading masks anomalies that are not statistically significant at the 90% confidence level; specifically, anomalies with magnitude smaller than 1.64σ, the critical value of a Gaussian (normal) distribution for a two-tailed probability p = 0.1. Red shading indicates wave numbers that are significantly amplified compared to average and blue shading indicates wave numbers that are significantly attenuated compared to average. [Source: Screen and Simmonds, Nature Climate Change]
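As a quick sanity check on the 1.64σ threshold quoted in the caption, the two-tailed critical value of a standard normal at p = 0.1 can be recovered in one line (scipy is assumed here just for the check):

```python
# Critical value of a standard normal for a two-tailed probability p = 0.1
from scipy.stats import norm
print(norm.ppf(1 - 0.1 / 2))   # ~1.645
```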

The most insightful part of the study is the regional analysis, in which the same method is applied to 7 regions in the Northern Hemisphere mid-latitudes. It turns out that the regions at the western boundaries of the continents (i.e., western North America and Europe) show the most significant association between surface extremes and planetary wave activity. Here, moderate temperatures tend to be particularly associated with reduced wave amplitudes, and extremes with increased wave amplitudes. Further eastwards this link becomes less significant, and in eastern Asia it even inverts: here moderate temperatures are associated with amplified waves and extremes with reduced wave amplitudes. An explanation for this result is not discussed by the authors. Possibly, it could be explained by the fact that low wave amplitudes imply predominantly westerly flow. Such westerlies will bring moderate oceanic conditions to the western boundary regions, but will bring air from the continental interior towards East Asia.

Finally, the authors redo their analysis once more but now for each tail of the distribution individually. Thus, instead of using absolute anomalies, they treat cold, hot, dry and wet extremes separately. This way, they find that amplified quasi-stationary waves “increase probabilities of heat waves in western North America and central Asia, cold outbreaks in eastern North America, droughts in central North America, Europe and central Asia and wet spells in western Asia.” These results hint at a preferred position (i.e., “phase”) of quasi-stationary waves.

With their study, the authors highlight the importance of quasi-stationary waves in causing extreme surface weather. This is an important step forward, but of course many questions remain. Has planetary wave activity changed in recent decades or is it likely to do so under projected future warming? And, if it is changing, is the rapid Arctic warming indeed responsible?

 

Dim Coumou works as a senior scientist at the Potsdam Institute for Climate Impact Research, where he is leading a new research group which studies the links between large scale circulation and extreme weather.


References

  1. J.A. Francis, and S.J. Vavrus, "Evidence linking Arctic amplification to extreme weather in mid-latitudes", Geophysical Research Letters, vol. 39, 2012. http://dx.doi.org/10.1029/2012GL051000
  2. E.A. Barnes, "Revisiting the evidence linking Arctic amplification to extreme weather in midlatitudes", Geophysical Research Letters, vol. 40, pp. 4734-4739, 2013. http://dx.doi.org/10.1002/grl.50880
  3. V. Petoukhov, S. Rahmstorf, S. Petri, and H.J. Schellnhuber, "Quasiresonant amplification of planetary waves and recent Northern Hemisphere weather extremes", Proceedings of the National Academy of Sciences, vol. 110, pp. 5336-5341, 2013. http://dx.doi.org/10.1073/pnas.1222000110
  4. J.A. Screen, and I. Simmonds, "Amplified mid-latitude planetary waves favour particular regional weather extremes", Nature Climate Change, 2014. http://dx.doi.org/10.1038/NCLIMATE2271
Author: "stefan" Tags: "Arctic and Antarctic, Climate Science, I..."
Date: Sunday, 06 Jul 2014 14:05

Guest post by Jared Rennie, Cooperative Institute for Climate and Satellites, North Carolina on behalf of the databank working group of the International Surface Temperature Initiative

In the 21st Century, when multi-billion dollar decisions are being made to mitigate and adapt to climate change, society rightly expects openness and transparency in climate science to enable a greater understanding of how climate has changed and how it will continue to change. Arguably the very foundation of our understanding is the observational record. Today a new set of fundamental holdings of land surface air temperature records stretching back deep into the 19th Century has been released as a result of several years of effort by a multinational group of scientists.

The International Surface Temperature Initiative (ISTI) was launched by an international and multi-disciplinary group of scientists in 2010 to improve understanding of the Earth’s climate from the global to local scale. The Databank Working Group, under the leadership of NOAA’s National Climatic Data Center (NCDC), has produced an innovative data holding that largely leverages off existing data sources, but also incorporates many previously unavailable sources of surface air temperature. This data holding provides users a way to better track the origin of the data from its collection through its integration. By providing the data in various stages that lead to the integrated product, by including data origin tracking flags with information on each observation, and by providing the software used to process all observations, the processes involved in creating the observed fundamental climate record are completely open and transparent to the extent humanly possible.

Databank Architecture

[Figure 1]

The databank includes six data Stages, starting from the original observation to the final quality controlled and bias corrected product (Figure 1). The databank begins at Stage Zero holdings, which contain scanned images of digital observations in their original form. These images are hosted on the databank server when third party hosting is not possible. Stage One contains digitized data, in its native format, provided by the contributor. No effort is required on their part to convert the data into any other format. This reduces the possibility that errors could occur during translation. We collated over 50 sources ranging from single station records to holdings of several tens of thousands of stations.

Once data are submitted as Stage One, all data are converted into a common Stage Two format. In addition, data provenance flags are added to every observation to provide a history of that particular observation. Stage Two files are maintained in ASCII format, and the code to convert all the sources is provided. After collection and conversion to a common format, the data are then merged into a single, comprehensive Stage Three dataset. The algorithm that performs the merging is described below. Development of the merged dataset is followed by quality control and homogeneity adjustments (Stage Four and Five, respectively). These last two stages are not the responsibility of the Databank Working Group; see the discussion of the broader context below.
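Purely to illustrate the idea of a Stage One to Stage Two conversion that carries per-observation provenance flags, here is a hypothetical sketch; the field names, flag syntax and unit conversion are invented for the example and are not the databank's actual format.

```python
# Hypothetical illustration of a Stage One -> Stage Two conversion carrying
# a provenance flag with every observation. Field names, flag codes and the
# unit conversion are invented; the actual formats are documented by ISTI.
from dataclasses import dataclass

@dataclass
class Stage2Record:
    station_id: str
    year: int
    month: int
    tavg_c: float     # monthly mean temperature, degrees Celsius
    provenance: str   # e.g. source id, original units, conversion applied

def to_stage2(station_id, year, month, value_f, source_id):
    """Convert a native-format monthly value (here in Fahrenheit) to the
    common format, recording where it came from and what was done to it."""
    tavg_c = (value_f - 32.0) * 5.0 / 9.0
    flag = f"{source_id}|orig_unit=F|converted=F->C"
    return Stage2Record(station_id, year, month, round(tavg_c, 2), flag)

# Example: to_stage2("XYZ00012345", 1923, 7, 68.5, "source_17")
```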

Merge Algorithm Description

The following is an overview of the process in which individual Stage Two sources are combined to form a comprehensive Stage Three dataset. A more detailed description can be found in a manuscript accepted and published by Geoscience Data Journal (Rennie et al., 2014).

The algorithm attempts to mimic the decisions an expert analyst would make manually. Given the fractured nature of historical data stewardship, many sources will inevitably contain records for the same station, so it is necessary to create a process for identifying and removing duplicate stations, merging some sources to produce a longer station record, and in other cases determining when a station should be brought in as a new distinct record.

The merge process is accomplished in an iterative fashion, starting from the highest priority data source (target) and running progressively through the other sources (candidates). A source hierarchy has been established which prioritizes datasets that have better data provenance, extensive metadata, and long, consistent periods of record. In addition it prioritizes holdings derived from daily data to allow consistency between daily holdings and monthly holdings. Every candidate station read in is compared to all target stations, and one of three possible decisions is made. First, when a station match is found, the candidate station is merged with the target station. Second, if the candidate station is determined to be unique it is added to the target dataset as a new station. Third, the available information is insufficient, conflicting, or ambiguous, and the candidate station is withheld.

Stations are first compared through their metadata to identify matching stations. Four tests are applied: geographic distance, height distance, station name similarity, and when the data record began. Non-missing metrics are then combined to create a metadata metric and it is determined whether to move on to data comparisons, or to withhold the candidate station. If a data comparison is deemed necessary, overlapping data between the target and candidate station is tested for goodness-of-fit using the Index of Agreement (IA). At least five years of overlap are required for a comparison to be made. A lookup table is used to provide two data metrics, the probability of station match (H1) and the probability of station uniqueness (H2). These are then combined with the metadata metric to create posterior metrics of station match and uniqueness. These are used to determine if the station is merged, added as unique, or withheld.
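A rough sketch of this decision logic is given below. The goodness-of-fit metric shown is Willmott's index of agreement, a common form of the IA; the thresholds and the way the metrics are blended are placeholders, and the real algorithm (including the lookup-table step) is described in Rennie et al. (2014).

```python
# Sketch of the candidate/target decision logic (not the operational code).
# The thresholds, the blending of metrics and the skipped lookup-table step
# are placeholders; see Rennie et al. (2014) for the real algorithm.
import numpy as np

def index_of_agreement(cand, targ):
    """Willmott's index of agreement, a common form of the IA."""
    cand = np.asarray(cand, float)
    targ = np.asarray(targ, float)
    num = np.sum((cand - targ) ** 2)
    den = np.sum((np.abs(cand - targ.mean()) + np.abs(targ - targ.mean())) ** 2)
    return 1.0 - num / den

def decide(meta_metric, overlap_cand, overlap_targ,
           merge_thresh=0.8, unique_thresh=0.2):
    if len(overlap_cand) < 60:          # fewer than 5 years of monthly overlap
        return "withhold"
    ia = index_of_agreement(overlap_cand, overlap_targ)
    # The real algorithm turns IA into P(match) and P(uniqueness) via a
    # lookup table and combines them with the metadata metric; here the two
    # numbers are simply blended to show the three possible outcomes.
    posterior_match = 0.5 * meta_metric + 0.5 * ia
    if posterior_match >= merge_thresh:
        return "merge"
    if posterior_match <= unique_thresh:
        return "add_as_unique"
    return "withhold"
```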

Stage Three Dataset Description

[Figure 2]

The integrated data holding recommended and endorsed by ISTI contains over 32,000 global stations (Figure 2), over four times as many stations as GHCN-M version 3. Although station coverage varies spatially and temporally, there are adequate stations with decadal and century periods of record at local, regional, and global scales. Since 1850, there are consistently more stations in the recommended merge than in GHCN-M (Figure 3). In GHCN-M version 3, there was a significant drop in station counts in 1990, reflecting the dependency on the decadal World Weather Records collection as a source; this drop is ameliorated by many of the new sources, which can be updated much more rapidly and will enable better real-time monitoring.

[Figure 3]

Many thresholds are used in the merge and can be set by the user before running the merge program. Changing these thresholds can significantly alter the overall result of the program. Changes will also occur when the source priority hierarchy is altered. In order to characterize the uncertainty associated with the merge parameters, seven different variants of the Stage Three product were developed alongside the recommended merge. This uncertainty reflects the importance of data rescue. While a major effort has been undertaken through this initiative, more can be done to include areas that are lacking on both spatial and temporal scales, or lacking maximum and minimum temperature data.

Data Access

Version 1.0.0 of the Global Land Surface Databank has been released and data are provided from a primary ftp site hosted by the Global Observing Systems Information Center (GOSIC) and World Data Center A at NOAA NCDC. The Stage Three dataset has multiple formats, including a format approved by ISTI, a format similar to GHCN-M, and netCDF files adhering to the Climate and Forecast (CF) convention. The data holding is version controlled and will be updated frequently in response to newly discovered data sources and user comments.

All processing code is provided, for openness and transparency. Users are encouraged to experiment with the techniques used in these algorithms. The programs are designed to be modular, so that individuals have the option to develop and implement other methods that may be more robust than described here. We will remain open to releases of new versions should such techniques be constructed and verified.

ISTI’s online directory provides further details on the merging process and other aspects associated with the full development of the databank as well as all of the data and processing code.

We are always looking to increase the completeness and provenance of the holdings. Data submissions are always welcome and strongly encouraged. If you have a lead on a new data source, please contact data.submission@surfacetemperatures.org with any information which may be useful.

The broader context

It is important to stress that the databank is a release of fundamental data holdings – holdings which contain myriad non-climatic artefacts arising from instrument changes, siting changes, time of observation changes etc. To gain maximum value from these improved holdings it is imperative that as a global community we now analyze them in multiple distinct ways to ascertain better estimates of the true evolution of surface temperatures locally, regionally, and globally. Interested analysts are strongly encouraged to develop innovative approaches to the problem.

To help ascertain what works and what doesn’t, the benchmarking working group is developing and will soon release a set of analogs to the databank. These will share the space and time sampling of the holdings but contain a set of known (to the originators) data issues that require removal. When analysts apply their methods to the analogs we can infer something meaningful about their methods. Further details are available in a discussion paper under peer review [Willett et al., submitted].

More Information

www.surfacetemperatures.org
ftp://ftp.ncdc.noaa.gov/pub/data/globaldatabank

References
Rennie, J.J. and coauthors, 2014, The International Surface Temperature Initiative Global Land Surface Databank: Monthly Temperature Data Version 1 Release Description and Methods. Accepted, Geoscience Data Journal.

Willett, K. M. et al., submitted, Concepts for benchmarking of homogenisation algorithm performance on the global scale. http://www.geosci-instrum-method-data-syst-discuss.net/4/235/2014/gid-4-235-2014.html

Author: "rasmus" Tags: "Climate Science, Instrumental Record"
Date: Wednesday, 02 Jul 2014 13:55

This month’s open thread. Topics of potential interest: The successful OCO-2 launch, continuing likelihood of an El Niño event this fall, predictions of the September Arctic sea ice minimum, Antarctic sea ice excursions, stochastic elements in climate models etc. Just for a change, no discussion of mitigation efforts please!

Author: "group" Tags: "Climate Science, Open thread"
Date: Sunday, 01 Jun 2014 23:35

June is the month when the Arctic Sea Ice outlook gets going, when the EPA releases its rules on power plant CO2 emissions, and when, hopefully, commenters can get back to actually having constructive and respectful conversations about climate science (and not nuclear energy, impending apocalypsi (pl) or how terrible everyone else is). Thanks.

Author: "group" Tags: "Climate Science, Open thread"
Date: Thursday, 08 May 2014 13:39

Guest commentary from Michelle L’Heureux, NOAA Climate Prediction Center

Much media attention has been directed at the possibility of an El Niño brewing this year. Many outlets have drawn comparison with the 1997-98 super El Niño. So, what are the odds that El Niño will occur? And if it does, how strong will it be?

To track El Niño, meteorologists at the NOAA/NWS Climate Prediction Center (CPC) release weekly and monthly updates on the status of the El Niño-Southern Oscillation (ENSO). The International Research Institute (IRI) for Climate and Society partners with us on the monthly ENSO release and is also a collaborator on a brand new “ENSO blog” which is part of www.climate.gov (co-sponsored by the NOAA Climate Programs Office).

Blogging ENSO is a first for operational ENSO forecasters, and we hope that it gives us another way to both inform and interact with our users on ENSO predictions and impacts. In addition, we will collaborate with other scientists to profile interesting ENSO research and delve into the societal dimensions of ENSO.

As far back as November 2013, the CPC and the IRI have predicted an elevated chance of El Niño (relative to historical chance or climatology) based on a combination of model predictions and general trends over the tropical Pacific Ocean. Once the chance of El Niño reached 50% in March 2014, an El Niño Watch was issued to alert the public that conditions are more favorable for the development of El Niño.
Current forecasts for the Nino-3.4 SST index (as of 5 May 2014) from the NCEP Climate Forecast System version 2 model.

More recently, on May 8th, the CPC/IRI ENSO team increased the chance that El Niño will develop, with a peak probability of ~80% during the late fall/early winter of this year. El Niño onset is currently favored sometime in the early summer (May-June-July). At this point, the team remains non-committal on the possible strength of El Niño, preferring to watch the system for at least another month or more before trying to infer the intensity. But, could we get a super strong event? The range of possibilities implied by some models alludes to such an outcome, but at this point the uncertainty is just too high. While subsurface heat content levels are well above average (March was the highest for that month since 1979 and April was the second highest), ENSO prediction relies on many other variables and factors. We also remain in the spring prediction barrier, which is a more uncertain time to be making ENSO predictions.

Could El Niño predictions fizzle? Yes, there is roughly a 2 in 10 chance at this point that this could happen. It happened in 2012, when an El Niño Watch was issued, chances became as high as 75%, and El Niño never formed. Such is the nature of seasonal climate forecasting when there is enough forecast uncertainty that “busts” can and do occur. In fact, more strictly, if the forecast probabilities are “reliable,” an event with an 80% chance of occurring should only occur 80% of the time over a long historical record. Therefore, 20% of the time the event must NOT occur (click here for a description of verification techniques).
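A minimal sketch of what such a reliability check looks like in practice: group past forecasts by the probability that was issued and compare with the observed frequency. The numbers below are invented purely for illustration.

```python
# Toy reliability check: for forecasts issued with a given probability, the
# event should happen about that fraction of the time. Data are invented.
import numpy as np

issued_prob    = np.array([0.8, 0.8, 0.8, 0.8, 0.8, 0.5, 0.5, 0.5, 0.5, 0.5])
event_occurred = np.array([1,   1,   1,   1,   0,   1,   0,   1,   0,   0  ])

for p in np.unique(issued_prob):
    sel = issued_prob == p
    print(f"issued {p:.0%}: observed frequency {event_occurred[sel].mean():.0%}")
# A reliable system gives observed frequencies close to the issued
# probabilities once enough cases have accumulated.
```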

While folks might prefer total certainty in our forecasts, we live in an uncertain world. El Niño is most likely to occur this year, so please stay attentive to the various updates linked above and please visit our brand new ENSO blog.

Author: "mike" Tags: "Climate Science"
Date: Friday, 02 May 2014 13:35

This month’s open thread. In order to give everyone a break, no discussion of mitigation options this month – that has been done to death in previous threads. Anything related to climate science is totally fine: Carbon dioxide levels maybe, or TED talks perhaps…

Author: "group" Tags: "Climate Science, Open thread"
Faking it
Date: Wednesday, 30 Apr 2014 11:36

Every so often contrarians post old newspaper quotes with the implication that nothing being talked about now is unprecedented or even unusual. And frankly, there are lots of old articles that get things wrong, are sensationalist or made predictions without a solid basis. And those are just the articles about the economy.

However, there are plenty of science articles that are just interesting, reporting events and explorations in the Arctic and elsewhere that give a fascinating view into how early scientists were coming to an understanding about climate change and processes. In particular, in the Atlantic sector of the Arctic the summer of 1922 was (for the time) quite warm, and there were a number of reports that discussed some unprecedented (again, for the time) observations of open water. The most detailed report was in the Monthly Weather Review:

The same report was picked up by the Associated Press and short summary articles appeared in the Washington Post and L.A. Times on Nov 2nd (right). As you can read, the basic story is that open water was seen up to 81º 29′N near Spitzbergen (now referred to as Svalbard), and that this was accompanied by a shift in ecosystems and some land ice melting. It seems that the writers were more concerned with fishing than climate change though.

This clip started showing up around Aug 2007 (this is the earliest mention I can find). The main point in bringing it up was (I imagine) to have a bit of fun by noting the similarity of the headline “Arctic Ocean Getting Warm” and contemporaneous headlines discussing the very low sea ice amounts in 2007. Of course, this doesn’t imply that the situation was the same back in 1922 compared to 2007 (see below).

The text of Washington Post piece soon started popping up on blogs and forums. Sometime in late 2009, probably as part of a mass-forwarded email (remember those?), the text started appearing with the following addition (with small variations, e.g. compare this and this):

I apologize, I neglected to mention that this report was from November 2, 1922. As reported by the AP and published in The Washington Post

However, the text was still pretty much what was in the Washington Post article (some versions had typos of “Consulafft” instead of “Consul Ifft” (the actual consul’s name) and a few missing words). Snopes looked into it and they agreed that this was basically accurate – and they correctly concluded that the relevance to present-day ice conditions was limited.

But sometime in January 2010 (the earliest version I can find is from 08/Jan/2010), a version of the email started circulating with an extra line added:

“Within a few years it is predicted that due to the ice melt the sea will rise and make most coastal cities uninhabitable.”

This is odd on multiple levels. First of all, the rest of the piece is just about observations, not predictions of any sort. Nor is there any source given for these mysterious predictions (statistics? soothsaying? folk wisdom?). Indeed, since ice melt large enough to ‘make most coastal cities uninhabitable’ would be a big deal, you’d think that the Consul and AP would have been a little more concerned about the level of the sea instead of the level of the seals. In any case, the line is completely made up, a fiction, an untruth, a lie.

But now, instead of just an observation that sounds like observations being made today, the fake quote is supposed to demonstrate that people (implicitly scientists) have been making alarmist and unsupported claims for decades with obvious implications. This is pretty low by any standards.

The article with the fake quote has done the rounds of most of the major contrarian sites – including the GWPF, right-wing leaning local papers (Provo, UT), magazines (Quadrant in Australia, Canada Free Press) and blogs (eg. Small dead animals). The only pseudo-sceptic blog that doesn’t appear to have used it is WUWT! (though it has come up in comments). This is all despite some people noting that the last line was fake (at least as early as April 2011). Some of the mentions even link to the Snopes article (which doesn’t mention the fake last line) as proof that their version (with the fake quote) is authentic.

Last week it was used again by Richard Rahn in the Washington Times, and the fake quote was extracted and tweeted by CFACT, which is where I saw it.

So we have a situation where something real and actually interesting is found in the archives, it gets misrepresented as a ‘gotcha’ talking point, but someone thinks it can be made ‘better’ and so adds a fake last line to sex it up. Now, with Twitter and its short quotes, some contrarians quote only the fakery. And thus a completely false talking point is created out of whole cloth.

Unfortunately, this is not unusual.

Comparing 1922 and now

To understand why the original story is actually interesting, we need a little context. Estimates of Arctic sea ice go back to the 19th Century from fishing vessels and explorers though obviously they have got better in recent decades because of the satellite coverage. The IPCC AR5 report (Figure 4.3) shows a compilation of sea ice extent from HadISST1 (which is being updated as we speak), but it is clear enough for our purposes:

I have annotated the summer of 1922, which did see quite a large negative excursion Arctic-wide compared to previous years, though the excursion is perhaps not that unusual for the period. A clearer view can be seen in the Danish ice charts for August 1921 and 1922 (via the Icelandic Met Office):



The latitude for open-water in the 1922 figure is around 81ºN, as reported by the Consul. Browsing other images in the series indicates that Spitzbergen almost always remained ice-bound even in August, so the novelty of the 1922 observation is clear.

But what of now? We can look at the August 2013 operational ice charts (that manually combine satellite and in situ observations) from the Norwegian Met Office, and focus on the area of Svalbard/Spitzbergen. Note that 2013 was the widely touted year that Arctic sea ice ‘recovered’:



The open water easily extends past 84ºN – many hundreds of kilometers further north than the ‘unprecedented’ situation in 1922. Data from the last 4 years show some variability of course, but by late August there is consistently open water further north than 81º 30′N. The Consul’s observation, far from being novel, is now commonplace.

This implies that this article – when seen in context – is actually strongly confirming of a considerable decline in Arctic sea ice over the last 90 years. Not that CFACT is going to tweet that.

Author: "gavin" Tags: "Arctic and Antarctic, Climate Science, I..."
Date: Saturday, 26 Apr 2014 02:57

Somewhat randomly, my thoughts turned to the Nenana Ice Classic this evening, only to find that the ice break up had only just occurred (3:48 pm Alaskan Standard Time, April 25). This is quite early (the 7th earliest date, regardless of details associated with the vernal equinox or leap year issues), though perhaps unsurprising after the warm Alaskan winter this year (8th warmest on record). This is in strong contrast to the very late break up last year.



Break up dates accounting for leap years and variations in the vernal equinox.

As mentioned in my recent post, the Nenana break up date is a good indicator of Alaskan regional temperatures and, despite last year’s late anomaly, the trends are very much towards an earlier spring. This is also true for trends in temperatures and ice break up almost everywhere else, despite individual years (like 2013/2014) being anomalously cold (for instance in the Great Lakes region). As we’ve often stressed, it is the trends that are important for judging climate change, not the individual years. Nonetheless, the odds of a break up date as early as this year’s have more than doubled over the last century.

Author: "gavin" Tags: "Climate impacts, Climate Science, Instru..."
Date: Thursday, 24 Apr 2014 19:47

“These results are quite strange”, my colleague told me. He analysed some of the recent climate model results from an experiment known by the cryptic name ‘CMIP5‘. It turned out that the results were ok, but we had made an error when reading and processing the model output. The particular climate model that initially gave the strange results had used a different calendar set-up to the previous models we had examined.

In fact, the models used to compute the results in CMIP5 use several different calendars: Gregorian, an idealised 360-day calendar, or one assuming no leap years. These differences do not really affect the model results; however, they are important to take into account in further analysis.

Just to make things more complicated, model results and data often come with different time units (such as counting hours since 0001-01-01 00:00:0.0) and physical units (precipitation: mm/day or kg m-2 s-1; temperature: Kelvin, Fahrenheit, or Celsius). Different countries use different decimal delimiters: point or comma. And missing values are sometimes represented as a blank space, some unrealistic number (-999), or ‘NA’ (not available) if the data are provided as ASCII files. No recorded rainfall is often represented by either 0 or the ASCII character ‘.’.
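As an illustration of the kind of defensive reading this implies, the sketch below decodes a model time axis under an explicitly declared calendar and reads an ASCII station file with its missing-value codes and decimal delimiter spelled out. The file name, column layout and flag values are hypothetical.

```python
# Hypothetical example: declare the calendar, units, missing-value codes
# and decimal delimiter explicitly instead of assuming defaults. The file
# name and column layout are invented for illustration.
import cftime
import pandas as pd

# Decode "hours since 0001-01-01" under the calendar the model actually
# used (e.g. 'standard', '360_day', 'noleap'); mixing them up shifts dates.
dates = cftime.num2date([17000000.0],
                        units="hours since 0001-01-01 00:00:0.0",
                        calendar="360_day")
print(dates[0])

# ASCII station data: spell out the missing-data codes ('-999', 'NA', '.')
# and a comma decimal delimiter rather than letting the reader guess.
df = pd.read_csv("station_data.txt", sep=r"\s+", decimal=",",
                 na_values=["-999", "NA", "."], comment="#")
```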

For station data, the numbers are often ordered differently, either as rows or columns, and with a few lines at the beginning (the header) containing varying amounts of description. There are almost as many ways to store data as there are groups providing data. Great!

Murphy’s law combined with this multitude of formats implies that reading and testing data takes time. Different scripts must be written for each data portal. The time it takes to read data could in principle be reduced to seconds, given appropriate means to do so (and the risk of making mistakes eliminated). Some data portals provide code such as Fortran programs, but using Fortran for data analysis is no longer very efficient.

We are not done with the formats. There are more aspects to data, analyses, and model results. A proper and unambiguous description of the data is always needed so that people know exactly what they are looking at. I think this will become more important with new efforts devoted to the World Meteorological Organisation’s (WMO) global framework on climate services (GFCS).

Data description is known as ‘meta-data‘, telling what a variable represents, what units, the location, time, the method used to record or compute, and its quality.

It is important to distinguish measurements from model results. The quality of data is given by error bars, whereas the reliability of model results can be described by various skill scores, depending on their nature.

There is a large range of possibilities for describing methods and skill scores, and my guess is that there is no less diversity than we see in data formats used in different portals. This diversity is also found in empirical-statistical downscaling.

A new challenge is that the volume of climate model results has grown almost explosively. How do we make sense of all these results and all the data? If the results come with proper meta-data, it may be possible to apply further statistical analysis to sort, categorise, identify links (regression), or apply geo-statistics.

Meta-data with a controlled vocabulary can help keep track of results and avoid ambiguities. It is also easier to design common analytical and visualisation methods for data which have a standard format. There are already some tools for visualisation and analysis such as Ferret and GrADS, however, mainly for gridded data.

Standardised meta-data also allows easy comparisons between the same type of results from different research communities, or between different types of results, e.g. by means of the experimental design (Thorarinsdottir et al., 2014). Such statistical analysis may make it possible to say whether certain choices lead to different results, if they are tagged with the different schemes employed in the models. This type of analysis makes use of certain key words, based on a set of commonly agreed terms.

Similar terms, however, may mean different things to different communities, such as ‘model’, ‘prediction’, ‘non-stationarity’, ‘validation’, and ‘skill’. I have seen how misinterpretation of such concepts has led to confusion, particularly among people who don’t think that climate change is a problem.

There have been recent efforts to establish controlled vocabularies, e.g. through EUPORIAS and a project called downscaling metadata, and a new breed of concepts has entered climate research, such as COG and CIM.

There are further coordinated initiatives addressing standards for meta-data, data formats, and controlled vocabularies. Perhaps most notable are the Earth System Grid Federation (ESGF), the coupled model inter-comparison project (CMIP), and the coordinated regional downscaling experiment (CORDEX). The data format used by climate models, netCDF following the ‘CF‘ convention, is a good start, at least for model results on longitude-latitude grids. However, these initiatives don’t yet offer explanations of validation methods, skill scores, or modelling details.

Validation, definitions and meta-data have been discussed in a research project called ‘SPECS‘ (that explores the possibility for seasonal-to-decadal prediction) because it is important to understand the implications and limitations of its forecasts. There is also another project called VALUE that addresses the question of validation strategies and skill scores for downscaling methods.

Many climate models have undergone thorough evaluation, but this is not apparent unless one reads chapter 9 on model evaluation in the latest IPCC report (AR5). Even in this report, a systematic summary of the different evaluation schemes and skill scores is sometimes lacking, with the exception of a summary of the spatial correlation between model results and analyses.

The information about model skill would be more readily accessible if the results were tagged with the type of tests used to verify the results, and the test results (skill scores). An extra bonus is that a common practice of including a quality stamp describing the validation may enhance the visibility of the evaluation aspect. To make such labelling effective, it should use well-defined terms and glossaries.

There is more to climate information than gridded results from a regional climate model. What about quantities such as return values, probabilities, storm tracks, number of freezing events, intense rainfall events, start of a rainy season, wet-day frequency, extremes, or droughts? The larger society needs information in a range of different formats, provided by climate services. Statistical analysis and empirical-statistical downscaling provide information in untraditional ways, as well as improved quantification of uncertainty (Katz et al., 2013).

Another important piece of information is the process history to make the results traceable and in principle replicable. The history is important for both the science community and for use in climate services.

One analogy to proper meta-data is to provide a label on climate information in a similar way to labels on medicine.

In summary, there has been much progress on climate data formats and standards, but I think we can go even further and become even more efficient by extending this work.

Update: Also see related Climate Informatics: Human Experts and the End-to-End System

References

  1. T. Thorarinsdottir, J. Sillmann, and R. Benestad, "Studying Statistical Methodology in Climate Research", Eos, Transactions American Geophysical Union, vol. 95, pp. 129-129, 2014. http://dx.doi.org/10.1002/2014EO150008
  2. R.W. Katz, P.F. Craigmile, P. Guttorp, M. Haran, B. Sansó, and M.L. Stein, "Uncertainty analysis in climate change assessments", Nature Climate change, vol. 3, pp. 769-771, 2013. http://dx.doi.org/10.1038/nclimate1980
Author: "rasmus" Tags: "Climate modelling, Glossary, Scientific ..."
Date: Thursday, 17 Apr 2014 08:56



Guest post by Brigitte Knopf


Global emissions continue to rise, and this is primarily due to economic growth and to a lesser extent to population growth. To achieve climate protection, fossil power generation without CCS has to be phased out almost entirely by the end of the century. The mitigation of climate change constitutes a major technological and institutional challenge. But: It does not cost the world to save the planet.

This is how the new report was summarized by Ottmar Edenhofer, Co-Chair of Working Group III of the IPCC, whose report was adopted on 12 April 2014 in Berlin after intense debates with governments. The report consists of 16 chapters with more than 2000 pages. It was written by 235 authors from 58 countries and reviewed externally by 900 experts. Most prominent in public is the 33-page Summary for Policymakers (SPM) that was approved by all 193 countries. At first glance, the above summary does not sound spectacular but more like a truism that we’ve often heard over the years. But this report indeed has something new to offer.

The 2-degree limit

For the first time, a detailed analysis was performed of how the 2-degree limit can be kept, based on over 1200 future projections (scenarios) by a variety of different energy-economy computer models. The analysis is not just about the 2-degree guardrail in the strict sense but evaluates the entire space between 1.5 degrees Celsius, a limit demanded by small island states, and a 4-degree world. The scenarios show a variety of pathways, characterized by different costs, risks and co-benefits. The result is a table with about 60 entries that translates the requirements for limiting global warming to below 2-degrees into concrete numbers for cumulative emissions and emission reductions required by 2050 and 2100. This is accompanied by a detailed table showing the costs for these future pathways.

The IPCC represents the costs as consumption losses as compared to a hypothetical ‘business-as-usual’ case. The table does not only show the median of all scenarios, but also the spread among the models. It turns out that the costs appear to be moderate in the medium-term until 2030 and 2050, but in the long-term towards 2100, a large spread occurs and also high costs of up to 11% consumption losses in 2100 could be faced under specific circumstances. However, translated into reduction of growth rate, these numbers are actually quite low. Ambitious climate protection would cost only 0.06 percentage points of growth each year. This means that instead of a growth rate of about 2% per year, we would see a growth rate of 1.94% per year. Thus economic growth would merely continue at a slightly slower pace. However, and this is also said in the report, the distributional effects of climate policy between different countries can be very large. There will be countries that would have to bear much higher costs because they cannot use or sell any more of their coal and oil resources or have only limited potential to switch to renewable energy.
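The growth-rate translation can be made explicit with a small back-of-the-envelope calculation; the consumption-loss figure used below is an assumed, illustrative value within the range discussed above, not a number taken from the report's tables.

```python
# Back-of-the-envelope version of the growth-rate translation. Assume, for
# illustration, that mitigation lowers consumption in 2100 by about 5%
# relative to the no-policy baseline, accumulated over roughly 85 years.
loss_2100 = 0.05   # fractional consumption loss in 2100 (assumed for the example)
years = 85         # roughly 2015 to 2100

# Equivalent annualized reduction of the consumption growth rate:
annual_pp = (1 - (1 - loss_2100) ** (1 / years)) * 100
print(f"{annual_pp:.3f} percentage points per year")   # ~0.060
print(f"2% growth becomes ~{2.0 - annual_pp:.2f}% per year")
```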

The technological challenge

Furthermore – and this is new and important compared to the last report of 2007 – the costs are shown not only for the case when all technologies are available, but also for how they increase if, for example, nuclear power were abandoned worldwide or if solar and wind energy remain more expensive than expected.

The results show that economically and technically it would still be possible to remain below the level of 2-degrees temperature increase, but it will require rapid and global action and some technologies would be key:

Many models could not achieve atmospheric concentration levels of about 450 ppm CO2eq by 2100, if additional mitigation is considerably delayed or under limited availability of key technologies, such as bioenergy, CCS, and their combination (BECCS).

Probably not everyone likes to hear that CCS is a very important technology for keeping to the 2-degree limit and the report itself cautions that CCS and BECCS are not yet available at a large scale and also involve some risks. But it is important to emphasize that the technological challenges are similar for less ambitious temperature limits.

The institutional challenge

Of course, climate change is not just a technological issue but is described in the report as a major institutional challenge:

Substantial reductions in emissions would require large changes in investment patterns

Over the next two decades, these investment patterns would have to change towards low-carbon technologies and higher energy efficiency improvements (see Figure 1). In addition, there is a need for dedicated policies to reduce emissions, such as the establishment of emissions trading systems, as already exist in Europe and in a handful of other countries.

Since AR4, there has been an increased focus on policies designed to integrate multiple objectives, increase co‐benefits and reduce adverse side‐effects.

The growing number of national and sub-national policies, such as at the level of cities, means that in 2012, 67% of global GHG emissions were subject to national legislation or strategies compared to  only 45% in 2007. Nevertheless, and that is clearly stated in the SPM, there is no trend reversal of emissions within sight – instead a global increase of emissions is observed.


Figure 1: Change in annual investment flows from the average baseline level over the next two decades (2010 to 2029) for mitigation scenarios that stabilize concentrations within the range of approximately 430–530 ppm CO2eq by 2100. Source: SPM, Figure SPM.9

 

Trends in emissions

A particularly interesting analysis, showing from which countries these emissions originate, was removed from the SPM due to the intervention of some governments, as it shows a regional breakdown of emissions that was not in the interest of every country (see media coverage here or here). These figures are still available in the underlying chapters and the Technical Summary (TS), as the government representatives may not intervene here and science can speak freely and unvarnished. One of these figures shows very clearly that in the last 10 years emissions in countries of upper middle income – including, for example, China and Brazil – have increased while emissions in high-income countries – including Germany – stagnate, see Figure 2. As income is the main driver of emissions in addition to the population growth, the regional emissions growth can only be understood by taking into account the development of the income of countries.

Historically, before 1970, emissions were mainly emitted by industrialized countries. But with the regional shift of economic growth, emissions have now shifted to countries with upper middle income, see Figure 2, while the industrialized countries have stabilized at a high level. The condensed message of Figure 2 does not look promising: all countries seem to follow the path of the industrialized countries, with no “leap-frogging” of fossil-based development directly to a world of renewables and energy efficiency being observed so far.


Figure 2: Trends in GHG emissions by country income groups. Left panel: Total annual anthropogenic GHG emissions from 1970 to 2010 (GtCO2eq/yr). Middle panel: Trends in annual per capita mean and median GHG emissions from 1970 to 2010 (tCO2eq/cap/yr). Right panel: Distribution of annual per capita GHG emissions in 2010 of countries within each income group (tCO2/cap/yr). Source: TS, Figure TS.4

 

But the fact that today’s emissions are rising especially in countries like China is only one side of the coin. Part of the growth in CO2 emissions in the low and middle income countries is due to the production of consumption goods that are intended for export to the high-income countries (see Figure 3). Put in plain language: part of the growth of Chinese emissions is due to the fact that the smartphones used in Europe or the US are produced in China.


Figure 3: Total annual CO2 emissions (GtCO2/yr) from fossil fuel combustion for country income groups attributed on the basis of territory (solid line) and final consumption (dotted line). The shaded areas are the net CO2 trade balance (difference) between each of the four country income groups and the rest of the world. Source: TS, Figure TS.5
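The accounting identity behind Figure 3 can be written out explicitly; the numbers below are invented purely to illustrate how a net CO2 trade balance arises.

```python
# The accounting identity behind Figure 3, with invented numbers purely
# for illustration (units: GtCO2/yr for one country group in one year).
territorial          = 8.0   # emitted inside the group's borders
embodied_in_exports  = 1.5   # emissions for goods consumed elsewhere
embodied_in_imports  = 0.4   # emissions elsewhere for goods consumed here

consumption_based = territorial - embodied_in_exports + embodied_in_imports
net_trade_balance = territorial - consumption_based   # the shaded area in Fig. 3
print(round(consumption_based, 1), round(net_trade_balance, 1))   # 6.9 and 1.1
```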

 

The philosophy of climate change

Besides all the technological details there has been a further innovation in this report, namely the chapter on “Social, economic and ethical concepts and methods”. This chapter could be called the philosophy of climate change. It emphasizes that

Issues of equity, justice, and fairness arise with respect to mitigation and adaptation. […] Many areas of climate policy‐making involve value judgements and ethical considerations.

This implies that many of these issues cannot be answered solely by science, such as the question of what temperature level avoids dangerous anthropogenic interference with the climate system, or which technologies are perceived as risky. It means that science can provide information about costs, risks and co-benefits of climate change, but in the end it remains a social learning process and debate to find the pathway society wants to take.

Conclusion

The report contains many more details about renewable energies, sectoral strategies such as in the electricity and transport sector, and co-benefits of avoided climate change, such as improvements of air quality. The aim of Working Group III of the IPCC was, and the Co-Chair emphasized this several times, that scientists are mapmakers that will help policymakers to navigate through this difficult terrain in this highly political issue of climate change. And this without being policy prescriptive about which pathway should be taken or which is the “correct” one. This requirement has been fulfilled and the map is now available. It remains to be seen where the policymakers are heading in the future.

 

The report :

Climate Change 2014: Mitigation of Climate Change – IPCC Working Group III Contribution to AR5

 

Brigitte Knopf is head of the research group Energy Strategies Europe and Germany at the Potsdam Institute for Climate Impact Research (PIK). She is one of the authors of the report of IPCC Working Group III and is on Twitter as @BrigitteKnopf.

This article was translated from the German original at RC’s sister blog KlimaLounge.

 

RealClimate coverage of the IPCC 5th Assessment Report:

Summary of Part 1, Physical Science Basis

Summary of Part 2, Impacts, Adaptation, Vulnerability

Summary of Part 3, Mitigation

Sea-level rise in the AR5

Attribution of climate change to human causes

Radiative forcing of climate change

Author: "stefan" Tags: "IPCC"
Date: Tuesday, 08 Apr 2014 12:25

Guest commentary from Drew Shindell

There has been a lot of discussion of my recent paper in Nature Climate Change (Shindell, 2014). That study addressed a puzzle, namely that recent studies using the observed changes in Earth’s surface temperature suggested climate sensitivity is likely towards the lower end of the estimated range. However, studies evaluating model performance on key observed processes and paleoclimate evidence suggest that the higher end of sensitivity is more likely, partially conflicting with the studies based on the recent transient observed warming. The new study shows that climate sensitivity to historical changes in the abundance of aerosol particles in the atmosphere is larger than the sensitivity to CO2, primarily because the aerosols are largely located near industrialized areas in the Northern Hemisphere middle and high latitudes where they trigger more rapid land responses and strong snow & ice feedbacks. Therefore studies based on observed warming have underestimated climate sensitivity as they did not account for the greater response to aerosol forcing, and multiple lines of evidence are now consistent in showing that climate sensitivity is in fact very unlikely to be at the low end of the range in recent estimates.

In particular, a criticism of the paper written by Nic Lewis has gotten some attention. Lewis makes a couple of potentially interesting points, chief of which concern the magnitude and uncertainty in the aerosol forcing I used and the time period over which the calculation is done, and I address these issues here. There are also a number of less substantive points in his piece that I will not bother with.

Lewis states that “The extensive adjustments made by Shindell to the data he uses are a source of concern. One of those adjustments is to add +0.3 W/m² to the figures used for model aerosol forcing to bring the estimated model aerosol forcing into line with the AR5 best estimate of -0.9 W/m².” Indeed the estimate of aerosol forcing used in the calculation of transient climate response (TCR) in the paper does not come directly from climate models, but instead incorporates an adjustment to those models so that the forcing better matches the assessed estimates from the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC). An adjustment is necessary because, as climate models are continually evaluated against observations, evidence has emerged that their aerosol-cloud interactions are too strong (i.e. the models’ ‘aerosol indirect effect’ is larger than inferred from observations). There have been numerous papers on this topic and this issue was thoroughly assessed in IPCC AR5 chapter 7. The assessed best estimate was that the historical negative aerosol forcing (radiation and cloud effects, but not black carbon on snow/ice) was too strong by about 0.3 Wm-2 in the models that included that effect, a conclusion very much in line with a prior publication on climate sensitivity by Otto et al. (2013). Given numerous scientific studies on this topic, there is ample support for the conclusion that models overestimate the magnitude of aerosol forcing, though the uncertainty in aerosol forcing (which is incorporated into the analysis in the paper) is large, especially in comparison with CO2 forcing which can be better constrained by observations.

The second substantive point Lewis raised relates to the time period over which the TCR is evaluated. The IPCC emphasizes forcing estimates relative to 1750 since most of the important anthropogenic impacts are thought to have been small at that time (biomass burning may be an exception, but appears to have a relatively small net forcing). Surface temperature observations become sparser going back further in time, however, and the most widely used datasets only go back to 1880 or 1850. Radiative forcing, especially that due to aerosols, is highly uncertain for the period 1750-1850 as there is little modeling and even less data to constrain those models. The AR5 gives a value for 1850 aerosol forcing (relative to 1750) (Annex II, Table AII.1.2) of -0.178 W/m² for direct+indirect (radiation+clouds). There is also a BC snow forcing of 0.014 W/m², for a total of -0.164 W/m². While these estimates are small, they are nonetheless very poorly constrained.

Hence there are two logical choices for an analysis of TCR. One could assume that there was minimal global mean surface temperature change between 1750 and 1850, as some datasets suggest, and compare the 1850-2000 temperature change with the full 1750-2000 forcing estimate, as in my paper and Otto et al. In this case, aerosol forcing over 1750-2000 is used.

Alternatively, one could assume we can estimate forcing during this early period realistically enough to remove it from the longer 1750-2000 estimates, and so compare forcing and response over 1850-2000. In this case, this must be done for all forcings, not just for the aerosols. The well-mixed greenhouse gas forcing in 1850 is 0.213 W/m²; including solar and stratospheric water vapour, that becomes 0.215 W/m². LU and ozone almost exactly cancel one another. So to adjust from 1750-2000 to 1850-2000 forcings, one must remove 0.215 W/m² and also remove the -0.164 W/m² aerosol forcing, multiplying the latter by its impact relative to that of well-mixed greenhouse gases (~1.5), which gives about -0.25 W/m².

If this is done consistently, the denominator of the climate sensitivity calculation containing the total forcing barely changes and hence the TCR results are essentially the same (a change of only 0.03°C). Lewis’ claim that my TCR results are mistaken because they did not account for 1750-1850 aerosol forcing is incorrect because he fails to use consistent time periods for all forcing agents. The results are in fact quite robust to either analysis option provided they are done consistently.
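For readers who want to follow the arithmetic of the two preceding paragraphs, here it is laid out explicitly; the ~1.5 factor is the enhanced aerosol response E discussed below, and the numbers are the AR5 1850-versus-1750 values quoted in the text.

```python
# Baseline-consistency check from the text: switching the analysis period
# from 1750-2000 to 1850-2000 must remove the 1750-1850 forcing from ALL
# agents, not just the aerosols.
f_aer_1850   = -0.178 + 0.014   # 1850 aerosol (direct+indirect) + BC-on-snow, W/m2
f_wmghg_1850 =  0.213 + 0.002   # well-mixed GHGs + solar/stratospheric water, W/m2
E = 1.5                         # enhanced response to aerosol forcing (approx.)

# Net change to the forcing denominator of the TCR calculation:
change = -(f_wmghg_1850 + E * f_aer_1850)
print(round(f_aer_1850, 3), round(change, 3))   # -0.164 and ~ +0.03 W/m2
# A few hundredths of a W/m2, hence the TCR estimate shifts by only ~0.03 C.
```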

Lewis also discusses the uncertainty in aerosol forcing and in the degree to which the response to aerosols are enhanced relative to the response to CO2. Much of this discussion follows a common pattern of looking through the peer-reviewed paper to find all the caveats and discussion points, and then repeating them back as if they undermine the paper’s conclusions rather than reflecting that they are uncertainties that were already taken into account. It is important to realize that the results presented in the paper include both the uncertainty in the aerosol forcing and the uncertainty in the enhancement of the response to aerosol forcing, as explicitly stated. Hence any statement that the uncertainty is underestimated in the results presented in the paper, due to the fact that (included) uncertainty in these two components is large, is groundless.

In fact, this is an important issue to keep in mind as Lewis also argues that the climate models do not provide good enough information to determine the value of the enhanced aerosol response (the parameter I call E in the paper, where E is the ratio of the global mean temperature response to aerosol forcing versus the response to the same global mean magnitude of CO2 forcing, so that E=1.5 would be a 50% stronger response to aerosols). While the models indeed are imperfect and have uncertainties, they provide the best available method we have to determine the value of E as this cannot be isolated from observations directly. Furthermore, basic physical understanding supports the modeled value of E being substantially greater than 1, as deep oceans clearly take longer to respond than the land surface, so the Northern Hemisphere, with most of the world’s land, will respond more rapidly than the Southern Hemisphere with more ocean. Quantifying the value of E accurately is difficult, and the variation across the models is substantial, primarily reflecting our incomplete knowledge of aerosol forcing. This leads to a range of E quoted in the paper of 1.18 to 2.43. I used this range, assuming a lognormal distribution, along with the mean value of 1.53, in the calculation for the TCR.

Lewis then argues that the large uncertainty ranges in E and in aerosol forcing make the TCR estimates “worthless”. While “worthless” is a little strong, it is important to fully assess uncertainties in trying to constrain any properties in the real world. It’s worthwhile to note that Lewis co-authored a recent report claiming that TCR could in fact be constrained to be low. That report relies on studies that include the large aerosol forcing uncertainty, so criticizing my paper for that would be inconsistent. However, Lewis’ study assumed that all forcings induce the same response in global mean temperature as CO2. This is equivalent to assuming that E is exactly 1.0 with NO uncertainty whatsoever. This is a reasonable first guess in the absence of evidence to the contrary, but as my paper recently showed, there is evidence to indicate that assumption is biased.

But while Lewis argues that the uncertainty in E is large and climate models do not give the value as accurately as we’d like, that does not justify ignoring that uncertainty entirely. Instead, we need to characterize that uncertainty as best we can and propagate that through the calculation (as can be seen in the figure below). The real question is not whether climate models provide us perfect information (they do not), but rather whether they provide better information than some naïve prior assumption. In this case, it is clear that they do.



Figure shows representative probability distribution functions for TCR using the numbers from Shindell (2014) in a Monte Carlo calculation (Gaussian for Fghg and dTobs, lognormal fits for the skewed distributions for Faerosol+ozone+LU and E). The green line is if you assume exactly no difference between the effects of aerosols and GHGs; Red is if you estimate that difference using climate models; Dashed red is the small difference made by using a different start date (1850 instead of 1750).
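
For readers who want to experiment with this, below is a minimal Monte Carlo sketch in the same spirit as the calculation behind the figure. It assumes the simple energy-budget form TCR ≈ F2xCO2 × ΔTobs / (Fghg + E × Faerosol+ozone+LU); the central values and spreads are illustrative placeholders rather than the exact inputs of Shindell (2014), so the output will only be in the right ballpark.

```python
# Illustrative propagation of uncertainty into TCR: Gaussian draws for F_ghg
# and dT_obs, lognormal (skewed) draws for the inhomogeneous forcing and for E,
# loosely following the description in the figure caption. Placeholder values.
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

F_2xCO2 = 3.7                                      # W/m2 per CO2 doubling
dT_obs = rng.normal(0.8, 0.1, N)                   # K, observed warming (placeholder)
F_ghg = rng.normal(2.9, 0.3, N)                    # W/m2, well-mixed GHGs (placeholder)
F_inhom = -rng.lognormal(np.log(0.9), 0.35, N)     # W/m2, aerosol+ozone+LU (placeholder)
E = rng.lognormal(np.log(1.53), 0.20, N)           # roughly the 1.2-2.4 range quoted above

TCR = F_2xCO2 * dT_obs / (F_ghg + E * F_inhom)
TCR = TCR[(TCR > 0) & (TCR < 10)]                  # drop unphysical draws from the tails

print(f"median {np.median(TCR):.2f} K, "
      f"5-95% {np.percentile(TCR, 5):.2f}-{np.percentile(TCR, 95):.2f} K")
```

In this sketch, setting E to exactly 1 with no spread corresponds to the green-curve case in the figure, while letting E have a model-informed central value and uncertainty corresponds to the red curves.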

This highlights the critical distinction in our reasoning: I fully support the basic methods used in prior work such as Otto et al and have simply quantified an additional physical factor in the existing methodology. I am however confused that Lewis, on one hand, appears to now object to the basic method used in prior work in which the authors first adjusted aerosol forcing, second included its uncertainty, and then finally quantified estimates of TCR. Yet on the other hand, he not only co-authored the Otto et al paper but released a report praising that study just three days before the publication of my paper.

For completeness, I should acknowledge that Lewis correctly identified a typo in the last row of the first column of Table S2, which has been corrected in the version posted where there is also access to the computer codes used in the calculations. The climate model output itself is already publicly available at the CMIP5 website (also linked at that page).

Finally, I note that the conclusions of the paper send a sobering message. It would be nice if sensitivity was indeed quite low and society could get away with smaller emission cuts to stabilize climate. Unfortunately, several lines of independent evidence now agree that this is not the case.

References

  1. D.T. Shindell, "Inhomogeneous forcing and transient climate sensitivity", Nature Climate Change, vol. 4, pp. 274-277, 2014. http://dx.doi.org/10.1038/nclimate2136
  2. A. Otto, F.E.L. Otto, O. Boucher, J. Church, G. Hegerl, P.M. Forster, N.P. Gillett, J. Gregory, G.C. Johnson, R. Knutti, N. Lewis, U. Lohmann, J. Marotzke, G. Myhre, D. Shindell, B. Stevens, and M.R. Allen, "Energy budget constraints on climate response", Nature Geosci, vol. 6, pp. 415-416, 2013. http://dx.doi.org/10.1038/ngeo1836
Author: "group" Tags: "Climate modelling, Climate Science, Inst..."
Date: Sunday, 06 Apr 2014 15:02

More open thread. Unusually, we are keeping the UV Mar 2014 thread open for more Diogenetic conversation, so that this thread can be reserved for more varied fare.

Author: "group" Tags: "Climate Science, Open thread"
Date: Friday, 04 Apr 2014 08:41

The second part of the new IPCC Report has been approved – as usual after lengthy debates – by government delegations in Yokohama (Japan) and is now public. Perhaps the biggest news is this: the situation is no less serious than it was at the time of the previous report in 2007. Nonetheless there is progress in many areas, such as a better understanding of observed impacts worldwide and of the specific situation of many developing countries. There is also a new assessment of “smart” options for adaptation to climate change. The report clearly shows that adaptation is an option only if efforts to mitigate greenhouse gas emissions are strengthened substantially. Without mitigation, the impacts of climate change will be devastating.


Guest post by Wolfgang Cramer

 

 

On all continents and across the oceans

Impacts of anthropogenic climatic change are observed worldwide and have been linked to observed climate using rigorous methods. Such impacts have occurred in many ecosystems on land and in the ocean, in glaciers and rivers, and they concern food production and the livelihoods of people in developing countries. Many changes occur in combination with other environmental problems (such as urbanization, air pollution, biodiversity loss), but the role of climate change in them emerges more clearly than before.


Fig. 1 Observed impacts of climate change during the period since publication of the IPCC Fourth Assessment Report 2007

 

During the presentation of this map for approval in Yokohama, many delegates asked why there are not many more impacts shown on it. This is because the authors only listed those cases where solid scientific analysis allowed attribution. An important implication of this is that the absence of icons from the map may well be due to lacking data (such as in parts of Africa) – and certainly does not imply an absence of impacts in reality. Compared to the earlier report in 2007, a new element of these documented findings is that impacts on crop yields are now clearly identified in many regions, including Europe. Improved irrigation and other technological advances have so far helped to avoid shrinking yields in many cases – but the increase normally expected from technological improvements is leveling off rapidly.

 

A future of increasing risks

More than previous IPCC reports, the new report deals with future risks. Among other things, it seeks to identify those situations where adaptation could become unfeasible and damages therefore become inevitable. A general finding is that “high” scenarios of climate change (those where global mean temperature reaches four degrees C or more above preindustrial conditions – a situation that is not at all unlikely according to part one of the report) will likely result in catastrophic impacts on most aspects of human life on the planet.


Fig. 2 Risks for various systems with high (blue) or low (red) efforts in climate change mitigation

 

These risks concern entire ecosystems, notably those of the Arctic and the warm-water corals around the world (the latter a crucial resource for fisheries in many developing countries), the global loss of biodiversity, and also the working conditions of many people in agriculture (the report offers many details from various regions). Limiting global warming to 1.5-2.0 degrees C through aggressive emission reductions would not avoid all of these damages, but the risks would be significantly lower. A similar chart has been shown in earlier reports, but the assessment of risks is now, based on the additional scientific knowledge available, more alarming than before – a point expressed most prominently by the deep red color in the first bar.

 

Food security increasingly at risk

In the short term, warming may improve agricultural yields in some cooler regions, but significant reductions are highly likely to dominate in later decades of the present century, particularly for wheat, rice and maize. The illustration is an example of the assessment of numerous studies in the scientific literature, showing that, from 2030 onwards, significant losses are to be expected. This should be seen in the context of already existing malnutrition in many regions, a growing problem also in the absence of climate change, due to growing populations, increasing economic disparities and the continuing shift of diet towards animal protein.


Fig. 3 Studies indicating increased crop yields (blue) or reduced crop yields (brown), accounting for various scenarios of climate change and technical adaptation

 

The situation for global fisheries is comparably bleak. While some regions, such as the North Atlantic, might allow larger catches, there is a loss of marine productivity to be expected in nearly all tropical waters, caused by warming and acidification. This affects poor countries in South-East Asia and the Pacific in particular. Many of these countries will also be affected disproportionately by the consequences of sea-level rise for coastal mega-cities.


Fig. 4 Change in maximum fish catch potential 2051-2060 compared to 2001-2010 for the climate change scenario SRES A1B

 

Urban areas in developing countries particularly affected

Nearly all developing countries experience significant growth in their mega-cities – but it is here that higher temperatures and limited potential for technical adaptation have the largest effect on people. Improved urban planning, focusing on the resilience of residential areas and transport systems of the poor, can deliver important contributions to adaptation. This would also have to include better preparation for the regionally rising risks from typhoons, heat waves and floods.

 

Conflicts in a warmer climate

It has been pointed out that no direct evidence is available to connect the occurrence of violent conflict to observed climate change. But recent research indicates that dry and hot periods have likely been contributing factors. Studies also show that the use of violence increases with high temperatures in some countries. The IPCC therefore concludes that enhanced global warming may significantly increase risks of future violent conflict.

 

Climate change and the economy

Studies estimate the impact of future climate change at around a few percent of global income, but these numbers are considered hugely uncertain. More importantly, any economic losses will be most tangible for countries, regions and social groups already disadvantaged compared to others. It is therefore to be expected that economic impacts of climate change will push large additional numbers of people into poverty and the risk of malnutrition, due to various factors including increases in food prices.

 

Options for adaptation to the impacts of climate change

The report underlines that there is no globally acceptable “one-size-fits-all” concept for adaptation. Instead, one must seek context-specific solutions. Smart solutions can provide opportunities to enhance the quality of life and local economic development in many regions – this would then also reduce vulnerabilities to climate change. It is important that such measures account for cultural diversity and the interests of indigenous people. It also becomes increasingly clear that policies that reduce emissions of greenhouse gases (e.g., by the application of more sustainable agriculture techniques or the avoidance of deforestation) need not be in conflict with adaptation to climate change. Both can significantly improve the livelihoods of people in developing countries, as well as their resilience to climate change.

It is beyond doubt that unabated climate change will exhaust the potential for adaptation in many regions – particularly for the coastal regions in developing countries where sea-level rise and ocean acidification cause major risks.

The summary of the report can be found here; the entire report with all underlying chapters is also online, and there is a nicely crafted background video.

Wolfgang Cramer is scientific director of the Institut Méditerranéen de Biodiversité et d’Ecologie marine et continentale (IMBE) in Aix-en-Provence and one of the authors of the IPCC Working Group 2 report.

This article was translated from the German original at RC’s sister blog KlimaLounge.

 

Weblink

Here is our summary of part 1 of the IPCC report.

Author: "stefan" Tags: "Climate impacts, Climate Science, IPCC"
Date: Monday, 31 Mar 2014 01:29

Instead of speculations based on partial drafts and attempts to spin the coverage ahead of time, you can now download the final report of the IPCC WG2: “Climate Change 2014: Impacts, Adaptation, and Vulnerability” directly. The Summary for Policy Makers is here, while the whole report is also downloadable by chapter. Notably there are FAQs for the whole report and for each chapter that give a relatively easy way in to the details. Note too that these are the un-copyedited final versions, and minor edits, corrections and coherent figures will be forthcoming in the final published versions. (For reference, the WG1 report was released in Sept 2013, but only in final published form in Jan 2014). Feel free to link to interesting takes on the report in the comments.

Author: "group" Tags: "Climate impacts, Climate Science, IPCC"
Date: Friday, 28 Mar 2014 23:34

This is the mid-month open thread for all discussions, except those related to Diogenes’ comments. People wanting to discuss with commenter Diogenes should stick to the previous UV thread. All such discussion on this thread will be moved over. Thanks.

Author: "gavin" Tags: "Climate Science, Open thread"
Date: Tuesday, 25 Mar 2014 16:40

Does global warming make extreme weather events worse? Here is the #1 flawed reasoning you will have seen about this question: it is the classic confusion between absence of evidence and evidence for absence of an effect of global warming on extreme weather events. Sounds complicated? It isn’t. I’ll first explain it in simple terms and then give some real-life examples.

The two most fundamental properties of extreme events are that they are rare (by definition) and highly random. These two aspects (together with limitations in the data we have) make it very hard to demonstrate any significant changes. And they make it very easy to find all sorts of statistics that do not show an effect of global warming – even if it exists and is quite large.

Would you have been fooled by this?

Imagine you’re in a sleazy, smoky pub and a stranger offers you a game of dice, for serious money. You’ve been warned and have reason to suspect they’re using a loaded dice here that rolls a six twice as often as normal. But the stranger says: “Look here, I’ll show you: this is a perfectly normal dice!” And he rolls it a dozen times. There are two sixes in those twelve trials – as you’d expect on average in a normal dice. Are you convinced all is normal?

You shouldn’t be, because this experiment is simply inconclusive. It shows no evidence for the dice being loaded, but neither does it provide real evidence against your prior suspicion that the dice is loaded. There is a good chance for this outcome even if the dice is massively loaded (i.e. with 1 in 3 chance to roll a six). On average you’d expect 4 sixes then, but 2 is not uncommon either. With normal dice, the chance to get exactly two sixes in this experiment is 30%, with the loaded dice it is 13%[i]. From twelve tries you simply don’t have enough data to tell.
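
If you want to check those numbers yourself, the binomial distribution does the job (a quick check in Python):

```python
# Probability of exactly 2 sixes in 12 rolls, for a fair dice (p = 1/6) and for
# the loaded one (p = 1/3).
from scipy.stats import binom

print(binom.pmf(2, 12, 1/6))   # ~0.30 for the normal dice
print(binom.pmf(2, 12, 1/3))   # ~0.13 for the loaded dice
```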

Hurricanes

In 2005, leading hurricane expert Kerry Emanuel (MIT) published an analysis showing that the power of Atlantic hurricanes has strongly increased over the past decades, in step with temperature. His paper in the journal Nature happened to come out on the 4th of August – just weeks before hurricane Katrina struck. Critics were quick to point out that the power of hurricanes that made landfall in the US had not increased. While at first sight that might appear to be the more relevant statistic, it actually is a case like rolling the dice only twelve times: as Emanuel’s calculations showed, the number of landfalling storms is simply far too small to get a meaningful result, as those data represent “less than a tenth of a percent of the data for global hurricanes over their whole lifetimes”. Emanuel wrote at the time (and later confirmed in a study): “While we can already detect trends in data for global hurricane activity considering the whole life of each storm, we estimate that it would take at least another 50 years to detect any long-term trend in U.S. landfalling hurricane statistics, so powerful is the role of chance in these numbers.” Like with the dice this is not because the effect is small, but because it is masked by a lot of ‘noise’ in the data, spoiling the signal-to-noise ratio.

Heat records

The number of record-breaking hot months (e.g. ‘hottest July in New York’) around the world is now five times as big as it would be in an unchanging climate. This has been shown by simply counting the heat records in 150,000 series of monthly temperature data from around the globe, starting in the year 1880. Five times. For each such record that occurs just by chance, four have been added thanks to global warming.

You may be surprised (like I was at first) that the change is so big after less than 1 °C global warming – but if you do the maths, you find it is exactly as expected. In 2011, in the Proceedings of the National Academy we described a statistical method for calculating the expected number of monthly heat records given the observed gradual changes in climate. It turns out to be five times the number expected in a stationary climate.

Given that this change is so large, that it is just what is expected and that it can be confirmed by simple counting, you’d expect this to be uncontroversial. Not so. Our paper was attacked with astounding vitriol by Roger Pielke Jr., with repeated false allegations about our method (more on this here).


European summer temperatures for 1500–2010. Vertical lines show the temperature deviations from average of individual summers, the five coldest and the five warmest are highlighted. The grey histogram shows the distribution for the 1500–2002 period with a Gaussian fit shown in black. That 2010, 2003, 2002, 2006 and 2007 are the warmest summers on record is clearly not just random but a systematic result of a warming climate. But some invariably will rush to the media to proclaim that the 2010 heat wave was a natural phenomenon not linked to global warming. (Graph from Barriopedro et al., Science 2011.)

 

Heat records can teach us another subtle point. Say in your part of the world the number of new heat records has been constant during the past fifty years. So, has global warming not acted to increase their number? Wrong! In a stationary climate, the number of new heat records declines over time. (After 50 years of data, the chance that this year is the hottest is 1/50. After 100 years, this is reduced to 1/100.)  So if the number has not changed, two opposing effects must have kept it constant: the natural decline, and some warming. In fact, the frequency of daily heat records has declined in most places during the past decades. But due to global warming, they have declined much less than the number of cold records, so that we now observe many more hot records than cold records. This shows how some aspects of extreme events can be increased by global warming at the same time as decreasing over time. A curve with no trend does not demonstrate that something is unaffected by global warming.
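
The 1/n behaviour is easy to see in a quick simulation (a sketch in Python; the trend and noise level here are illustrative, not fitted to any real station data):

```python
# Fraction of series in which year n sets a new record, for purely random
# (stationary) series and for the same series with a modest warming trend.
import numpy as np

rng = np.random.default_rng(0)
years, n_series = 100, 20_000
noise = rng.normal(0.0, 1.0, (n_series, years))   # interannual variability, sigma = 1
trend = 0.01 * np.arange(years)                   # illustrative warming: 0.01 sigma per year

def record_fraction(series):
    # A year is a record if it beats (or equals) everything before it.
    running_max = np.maximum.accumulate(series, axis=1)
    return (series >= running_max).mean(axis=0)

stationary = record_fraction(noise)
warming = record_fraction(noise + trend)

for yr in (10, 50, 100):
    print(f"year {yr:3d}: stationary ~{stationary[yr-1]:.3f} (1/{yr} = {1/yr:.3f}), "
          f"with trend ~{warming[yr-1]:.3f}")
```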

Drought

Drought is another area where it is very easy to over-interpret statistics with no significant change, as in this recent New York Times opinion piece on the serious drought in California. The argument here goes that man-made climate change has not played “any appreciable role in the current California drought”, because there is no trend in average precipitation. But that again is a rather weak argument, because drought is far more complex than just being driven by average precipitation. It has a lot to do with water stored in soils, which gets lost faster in a warmer climate due to higher evaporation rates. California has just had its warmest winter on record. And the Palmer Drought Severity Index, a standard measure for drought, does show a significant trend towards more serious drought conditions in California.

The cost of extreme weather events

If an increase in extreme weather events due to global warming is hard to prove by statistics amongst all the noise, how much harder is it to demonstrate an increase in damage cost due to global warming? Very much harder! A number of confounding socio-economic factors, which are very hard to quantify and disentangle, cloud this issue. Some factors act to increase the damage, like larger property values in harm’s way. Some act to decrease it, like more solid buildings (whether from better building codes or simply as a result of increased wealth) and better early warnings. Thus it is not surprising that the literature on this subject overall gives inconclusive results. Some studies find significant damage trends after adjusting for GDP, some don’t, tempting some pundits to play cite-what-I-like. The fact that the increase in damage cost is about as large as the increase in GDP (as recently argued at FiveThirtyEight) is certainly no strong evidence against an effect of global warming on damage cost. Like the stranger’s dozen rolls of dice in the pub, one simply cannot tell from these data.

The emphasis on questionable dollar-cost estimates distracts from the real issue of global warming’s impact on us. The European heat wave of 2003 may not have destroyed any buildings – but it is well documented that it caused about 70,000 fatalities. This is the type of event for which the probability has increased by a factor of five due to global warming – and is likely to rise to a factor twelve over the next thirty years. Poor countries, whose inhabitants hardly contribute to global greenhouse gas emissions, are struggling to recover from “natural” disasters, like Pakistan from the 2010 floods or the Philippines and Vietnam from tropical storm Haiyan last year. The families who lost their belongings and loved ones in such events hardly register in the global dollar-cost tally.

It’s physics, stupid!

While statistical studies on extremes are plagued by signal-to-noise issues and only give unequivocal results in a few cases with good data (like for temperature extremes), we have another, more useful source of information: physics. For example, basic physics means that rising temperatures will drive sea levels up, as is in fact observed. Higher sea level to start from will clearly make a storm surge (like that of the storms Sandy and Haiyan) run up higher. By adding 1+1 we therefore know that sea-level rise is increasing the damage from storm surges – probably decades before this can be statistically proven with observational data.

There are many more physical linkages like this – reviewed in our recent paper A decade of weather extremes. A warmer atmosphere can hold more moisture, for example, which raises the risk of extreme rainfall events and the flooding they cause. Warmer sea surface temperatures drive up evaporation rates and enhance the moisture supply to tropical storms. And the latent heat of water vapor is a prime source of energy for the atmosphere. Jerry Meehl from NCAR therefore compares the effect of adding greenhouse gases to putting the weather on steroids.

Yesterday the World Meteorological Organisation published its Annual Statement on the Climate, finding that “2013 once again demonstrated the dramatic impact of droughts, heat waves, floods and tropical cyclones on people and property in all parts of the planet” and that “many of the extreme events of 2013 were consistent with what we would expect as a result of human-induced climate change.”

With good physical reasons to expect the dice are loaded, we should not fool ourselves with reassuring-looking but uninformative statistics. Some statistics show significant changes – but many are simply too noisy to show anything. It would be foolish to just play on until the loading of the dice finally becomes evident even in highly noisy statistics. By then we will have paid a high price for our complacency.

 

Postscript (29 March):

The Huffington Post has the story of the letters that Roger Pielke sent to two leading climate scientists, perceived by them as threatening, after they criticised his article: FiveThirtyEight Apologizes On Behalf Of Controversial Climate Science Writer. According to the Huffington Post, Pielke wrote to Kevin Trenberth and his bosses:

Once again, I am formally asking you for a public correction and apology. If that is not forthcoming I will be pursuing this further. More generally, in the future how about we agree to disagree over scientific topics like gentlemen?

Pielke using the word “gentlemen” struck me as particularly ironic.

How gentlemanly is it that on his blog he falsely accused us of cherry-picking the last 100 years of data rather than using the full available 130 years in our PNAS paper Increase of extreme events in a warming world, even though we clearly say in the paper that our conclusion is based on the full data series?

How gentlemanly is it that he falsely claims “Rahmstorf confirms my critique (see the thread), namely, they used 1910-2009 trends as the basis for calculating 1880-2009 exceedence probabilities,” when I have done nothing of the sort?

How gentlemanly is it that to this day, in a second update to his original article, he claims on his website: “The RC11 methodology does not make any use of data prior to 1910 insofar as the results are concerned (despite suggestions to the contrary in the paper).” This is a very serious allegation for a scientist, namely that we mislead or deceive in our paper (some colleagues have interpreted this as an allegation of scientific fraud). This allegation is completely unsubstantiated by Pielke, and of course it is wrong.

We did not respond with a threatening letter – not our style. Rather, we published a simple statistics tutorial together with our data and computer code, hoping that in this way Pielke could understand and replicate our results. But until this day we have not received any apology for his false allegations.

Our paper showed that the climatic warming observed in Moscow particularly since 1980 greatly increased the chances of breaking the previous July temperature record (set in 1938) there. We concluded:

For July temperature in Moscow, we estimate that the local warming trend has increased the number of records expected in the past decade fivefold, which implies an approximate 80% probability that the 2010 July heat record would not have occurred without climate warming.

Pielke apparently did not understand why the temperatures before 1910 hardly affect this conclusion (in fact increasing the probability from 78% to 80%), and that the linear trend from 1880 or 1910 is not a useful predictor for this probability of breaking a record. This is why we decomposed the temperature data into a slow, non-linear trend line (shown here) and a stochastic component – a standard procedure that even makes it onto the cover picture of a data analysis textbook, as well as being described in a climate time series analysis textbook. (Pielke ridicules this method as “unconventional”.)
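
For the curious, here is a bare-bones sketch of that kind of calculation (in Python, on synthetic data). It uses a simple running mean as a stand-in for the slow non-linear trend line and Gaussian noise for the stochastic component, so it only illustrates the mechanics, not the specific smooth or the actual Moscow data used in our paper:

```python
# Decompose a series into a slow trend plus residuals, then estimate by Monte
# Carlo how often the last decade sets a new record, with and without the trend.
import numpy as np

rng = np.random.default_rng(1)

def smooth(x, window=15):
    # Running mean as a crude stand-in for the non-linear trend line.
    kernel = np.ones(window) / window
    pad = np.pad(x, (window // 2, window - 1 - window // 2), mode="edge")
    return np.convolve(pad, kernel, mode="valid")

# Synthetic "July temperature" series, 1880-2009: flat, then warming after 1980.
years = np.arange(1880, 2010)
truth = np.where(years < 1980, 0.0, 0.04 * (years - 1980))
obs = truth + rng.normal(0.0, 1.0, years.size)

trend = smooth(obs)
sigma = (obs - trend).std(ddof=1)

def prob_record_in_last_decade(trend_component, n_sim=20_000):
    sims = trend_component + rng.normal(0.0, sigma, (n_sim, years.size))
    return (sims[:, -10:].max(axis=1) > sims[:, :-10].max(axis=1)).mean()

p_with = prob_record_in_last_decade(trend)
p_without = prob_record_in_last_decade(np.zeros_like(trend))
print(f"P(record in last decade): with trend {p_with:.2f}, without trend {p_without:.2f}")
```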

He gentlemanly writes about our paper:

That some climate scientists are playing games in their research, perhaps to get media attention in the larger battle over climate politics, is no longer a surprise. But when they use such games to try to discredit serious research, then the climate science community has a much, much deeper problem.

His praise of “serious research” by the way refers to a paper that claimed “a primarily natural cause for the Russian heat wave” and “that it is very unlikely that warming attributable to increasing greenhouse gas concentrations contributed substantially to the magnitude of this heat wave.” (See also the graph above.)

 

Update (1 April):

Top hurricane expert Kerry Emanuel has now published a very good response to Pielke at FiveThirtyEight, making a number of the same points as I do above. He uses a better analogy than my dice example though, writing:

Suppose observations showed conclusively that the bear population in a particular forest had recently doubled. What would we think of someone who, knowing this, would nevertheless take no extra precautions in walking in the woods unless and until he saw a significant upward trend in the rate at which his neighbors were being mauled by bears?

The doubling of the bear population refers to the increase in hurricane power in the Atlantic which he showed in his Nature article of 2005 – an updated graph of his data is shown below, from our Nature Climate Change paper A decade of weather extremes.

[Figure: updated Atlantic hurricane power dissipation index (PDI), after Emanuel (2005)]

 

Related posts:

Extremely hot

On record-breaking extremes

The Moscow warming hole

 


[i] For the math-minded: if a dice has a probability of 1/n to roll a six (a normal dice has n = 6) and you roll it k times, the probability p of finding m sixes is p = k!/[(k-m)! m!] × (n-1)^(k-m) / n^k.

Author: "stefan" Tags: "Climate Science, Communicating Climate, ..."
Date: Saturday, 22 Mar 2014 17:06

XKCD, the brilliant and hilarious on-line comic, attempts to answer the question

How much CO2 is contained in the world’s stock of bottled fizzy drinks? How much soda would be needed to bring atmospheric CO2 back to preindustrial levels?

The answer is, enough to cover the Earth with 10 layers of soda cans. However, the comic misses a factor of about two, which would arise from the ocean. The oceans have been taking up carbon throughout the industrial era, as have some parts of the land surface biosphere. The ocean contains about half of the carbon we’ve ever released from fossil fuels. We’ve also cut down a lot of trees, which has been more-or-less compensated for by uptake into other parts of the land biosphere. So as a fraction of our total carbon footprint (fuels + trees) the oceans contain about a third.

At any rate, the ocean is acting as a CO2 buffer, meaning that it absorbs CO2 as it tries to limit the change in the atmospheric concentration. If we suddenly pulled atmospheric CO2 back down to 280 ppm (by putting it all in cans of soda perhaps), the oceans would work in the opposite direction, to buffer our present-day higher concentration by giving up CO2. The land biosphere is kind of a loose cannon in the carbon cycle; it is hard to predict what it will do.

Ten layers of soda cans covering the whole earth sounds like a lot. But most of a soda can is soda, rather than CO2. Here’s another statistic: If the CO2 in the atmosphere were to freeze out as dry ice depositing on the ground, the dry ice layer would only be about 7 millimeters thick. I guess cans of soda pop might not be the most efficient or economical means of CO2 sequestration. For a better option, look to saline aquifers, which are porous geological formations containing salty water that no one would want to drink or irrigate with anyway. CO2 at high pressure forms a liquid, then ultimately reacts with igneous rocks to form CaCO3.

Further Reading

Tans, Pieter. An accounting of the observed increase in oceanic and atmospheric CO2 and an outlook for the future. Oceanography 22(4): 26-35, 2009.

Carbon dioxide capture and storage IPCC Report, 2005

Author: "david" Tags: "Climate Science"
Date: Thursday, 13 Mar 2014 13:57

I’m writing this post to see if our audience can help out with a challenge: Can we collectively produce some coherent, properly referenced, open-source, scalable graphics of global temperature history that will be accessible and clear enough that we can effectively out-compete the myriad inaccurate and misleading pictures that continually do the rounds on social media?

Bad graphs

One of the most common fallacies in climate is the notion that, because the climate was hotter than now in the Eocene or Cretaceous or Devonian periods, we should have no concern for current global warming. Often this is combined with an implication that mainstream scientists are somehow unaware of these warmer periods (despite many of us having written multiple papers on previous warm climates). This is fallacious on multiple grounds, not least because everyone (including IPCC) has been discussing these periods for ages. Additionally, we know that sea levels during those peak warm periods were some 80 meters higher than today, and that impacts of the current global warming are going to be felt by societies and existing ecosystems that are adapted for Holocene climates – not climates 100 million years ago.

In making this point the most common graph that gets used is one originally put online by “Monte Hieb” on this website. Over the years, the graphic has changed slightly


Monte Hieb temperature/CO2 schematics

(versions courtesy of the wayback machine), but the essential points have remained the same. The ‘temperature’ record is a hand-drawn schematic derived from the work of Chris Scotese, and the CO2 graph is from a model that uses tectonic and chemical weathering histories to estimate CO2 levels (Berner 1994; Berner and Kothavala, 2001). In neither case is there an abundance of measured data.

The original Scotese renderings are also available (again, earlier versions via the wayback machine):


Scotese reconstructions

Scotese is an expert in reconstructions of continental positions through time and in creating his ‘temperature reconstruction’ he is basically following an old-fashioned idea (best exemplified by Frakes et al’s 1992 textbook) that the planet has two long-term stable equilibria (‘warm’ or ‘cool’) which it has oscillated between over geologic history. This kind of heuristic reconstruction comes from the qualitative geological record which gives indications of glaciations and hothouses, but is not really adequate for quantitative reconstructions of global mean temperatures. Over the last few decades, much better geochemical proxy compilations with better dating have appeared (for instance, Royer et al (2004)) and the idea that there are only two long-term climate states has long fallen by the wayside.

However, since this graphic has long been a favorite of the climate dismissives, many different versions do the rounds, mostly forwarded by people who have no idea of the provenance of the image or the lack of underlying data, or the updates that have occurred. Indeed, the 2004 version is the most common, having been given a boost by Monckton in 2008 and many others. Most recently, Patrick Moore declared that this was his favorite graph.

Better graphs

While more realistic graphs of temperature and CO2 histories will not prevent the basic fallacy we started discussing from being propagated, I think people should be encouraged to use actual data to make their points so that at least rebuttals of any logical fallacies wouldn’t have to waste time arguing about the underlying data. Plus it is so much better to have figures that don’t need a week to decipher (see some more principles at Betterfigures.org).

Some better examples of long term climate change graphics do exist. This one from Veizer et al (2000) for instance (as rendered by Robert Rohde):



Phanerozoic Climate Change

IPCC AR4 made a collation for the Cenozoic (65 Mya ago to present):



IPCC AR4 Fig 6.1

and some editors at Wikipedia have made an attempt to produce a complete record for the Phanerozoic:



Wikipedia multi-period collation

But these collations are imperfect in many ways. On the last figure the time axis is a rather confusing mix of linear segments and logarithmic scaling, there is no calibration during overlap periods, and the scaling and baselining of the individual, differently sourced data is a little ad hoc. Wikipedia has figures for other time periods that have not been updated in years and treatment of uncertainties is haphazard (many originally from GlobalWarmingArt).

I think this could all be done better. However, creating good graphics takes time and some skill, especially when the sources of data are so disparate. So this might be usefully done using some crowd-sourcing – where we collectively gather the data that we can find, process it so that we have clean data, discuss ways to fit it together, and try out different plotting styles. The goal would be to come up with a set of coherent up-to-date (and updatable) figures that could become a new standard for representing the temperature history of the planet. Thus…

The world temperature history challenge

The challenge comes in three parts:

  1. Finding suitable data
  2. Combining different data sets appropriately
  3. Graphically rendering the data

Each part requires work which could be spread widely across the participants. I have made a start on collating links to suitable data sets, and this can both be expanded upon and consolidated.

Period | Reference | Data download
0-600 Mya | Veizer et al (2000), Royer et al (2004) (updated Royer (2014)) | Veizer d18O, Royer04 Temp, Royer14 CO2
0-65 Mya | Zachos et al (2008), Hansen et al (2010) | Zachos/Hansen
0-5.3 Mya | Lisiecki and Raymo (2005) | LR04 Stack
0-800 kya | EPICA Dome C | Temperature Reconstruction
0-125 kya | NGRIP/Antarctic analog? | NGRIP 50yr
0-12 kya | Marcott et al (2013) | MEA12 stack (xls)
0-2 kya | Mann et al (2008), Ljungqvist (2010) | MEA08 EIV, Ljungqvist10
1880-2013 CE | GISTEMP | GISTEMP LOTI
1850-2013 CE | HadCRUT4 | HadCRUT4 global annual average, Cowtan&Way (infilled)
1850-2013 CE | Berkeley Earth | Land+Ocean annual mean

Combining this data is certainly a challenge, and there are multiple approaches that could be used that range from the very simple to the very complex. More subtly the uncertainties need to be properly combined also. Issues range from temporal and spatial coverage, time-dependent corrections in d18O for long term geologic processes or ice volume corrections, dating uncertainty etc.
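
As a concrete illustration of the simplest possible splicing step, here is a sketch (in Python) that offsets one series so that its mean matches a reference series over their period of overlap. Real work would also have to deal with dating uncertainty, differing resolution and the d18O corrections mentioned above; the toy data here are purely for illustration.

```python
# Align one temperature series to a reference by matching means over the
# overlapping age range (the crudest possible inter-calibration).
import numpy as np

def align_to_reference(ref_age, ref_T, other_age, other_T):
    """Shift other_T so its mean matches ref_T over the overlapping ages."""
    lo = max(ref_age.min(), other_age.min())
    hi = min(ref_age.max(), other_age.max())
    ref_overlap = ref_T[(ref_age >= lo) & (ref_age <= hi)]
    other_overlap = other_T[(other_age >= lo) & (other_age <= hi)]
    if ref_overlap.size == 0 or other_overlap.size == 0:
        raise ValueError("the two series do not overlap")
    return other_T + (ref_overlap.mean() - other_overlap.mean())

# Toy example (ages in kyr BP, temperatures as arbitrary anomalies):
age_a = np.linspace(0, 12, 120)
T_a = 0.3 * np.sin(age_a / 2.0)
age_b = np.linspace(8, 800, 500)
T_b = np.cos(age_b / 100.0) - 2.0
T_b_aligned = align_to_reference(age_a, T_a, age_b, T_b)
```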

Finally, rendering the graphics calls for additional skills – not least so that the different sources of data are clear, that the views over different timescales are coherent, and that the graphics are in the Wiki-standard SVG format (this site can be used for conversion from pdf or postscript).

Suggestions for other data sets to consider, issues of calibration and uncertainty and trial efforts are all welcome in the comments. If we make some collective progress, I’ll put up a new post describing the finished product(s). Who knows, you folks might even write a paper…

This post was inspired by a twitter conversation with Sou from Bundunga, and some of the initial data links came via Robert Rohde (of Global Warming Art and now Berkeley Earth) and Dana Royer.

References

  1. R.A. Berner, "GEOCARB II: A revised model of atmospheric CO2 over Phanerozoic time", American Journal of Science, vol. 294, pp. 56-91, 1994. http://dx.doi.org/10.2475/ajs.294.1.56
  2. R.A. Berner, and Z. Kothavala, "GEOCARB III: A revised model of atmospheric CO2 over Phanerozoic time", American Journal of Science, vol. 301, pp. 182-204, 2001. http://dx.doi.org/10.2475/ajs.301.2.182
  3. J. Veizer, Y. Godderis, and L.M. François, "Evidence for decoupling of atmospheric CO2 and global climate during the Phanerozoic eon", Nature, vol. 408, pp. 698-701, 2000. http://dx.doi.org/10.1038/35047044
  4. D. Royer, "Atmospheric CO2 and O2 During the Phanerozoic: Tools, Patterns, and Impacts", Treatise on Geochemistry, pp. 251-267, 2014. http://dx.doi.org/10.1016/B978-0-08-095975-7.01311-5
  5. L.E. Lisiecki, and M.E. Raymo, "A Pliocene-Pleistocene stack of 57 globally distributed benthic δ18O records", Paleoceanography, vol. 20, 2005. http://dx.doi.org/10.1029/2004PA001071
  6. S.A. Marcott, J.D. Shakun, P.U. Clark, and A.C. Mix, "A Reconstruction of Regional and Global Temperature for the Past 11,300 Years", Science, vol. 339, pp. 1198-1201, 2013. http://dx.doi.org/10.1126/science.1228026
  7. M.E. Mann, Z. Zhang, M.K. Hughes, R.S. Bradley, S.K. Miller, S. Rutherford, and F. Ni, "Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia", Proceedings of the National Academy of Sciences, vol. 105, pp. 13252-13257, 2008. http://dx.doi.org/10.1073/pnas.0805721105
  8. F.C. Ljungqvist, "A new reconstruction of temperature variability in the extra-tropical Northern Hemisphere during the last two millennia", Geografiska Annaler: Series A, Physical Geography, vol. 92, pp. 339-351, 2010. http://dx.doi.org/10.1111/j.1468-0459.2010.00399.x
Author: "gavin" Tags: "Climate Science, Communicating Climate, ..."
Date: Friday, 07 Mar 2014 15:51

I am always interested in non-traditional data sets that can shed some light on climate changes. Ones that I’ve discussed previously are the frequency of closing of the Thames Barrier and the number of vineyards in England. With the exceptional warmth in Alaska last month (which of course was coupled with colder temperatures elsewhere), I was reminded of another one, the Nenana Ice Classic.

For those that don’t know what the ‘Classic’ is, it is a lottery competition that has been running since 1917 to guess the date on which the Nenana river ice breaks up in the spring. The Nenana river is outside of Fairbanks, Alaska and can be relied on to freeze over every year. The locals put up a tripod on the ice, and when the ice breaks up in the spring, the tripod gets swept away. The closest estimate to the exact time this happens wins the lottery, which can have a quite substantial pot.

Due to the cold spring in Alaska last year, the ice break up date was the latest since 1917, consistent with the spring temperature anomaly state-wide being one of the coldest on record (unsurprisingly the Nenana break up date is quite closely correlated to spring Alaskan temperatures). This year is shaping up to be quite warm (though current temperatures in Nenana (as of March 7) are still quite brisk!).

Since there is now an almost century-long record of these break up dates, it makes sense to look at them as potential indicators of climate change (and interannual variability). The paper by Sagarin and Micheli (2001) was perhaps the first such study, and it has been alluded to many times since (for instance, in the Wall Street Journal and Physics Today in 2008).

The figure below shows the break up date in terms of days after a nominal March 21, or more precisely time from the vernal equinox (the small correction is so that the data don’t get confused by non-climatic calendar issues). The long term linear trend (which is negative and has a slope of roughly 6 days per century) indicates that on average the break up dates have been coming earlier in the season. This is clear despite a lot of year-to-year variability:



Figure: Break up dates at Nenana in Julian days (either from a nominal March 21 (JD-80), or specifically tied to the Vernal Equinox). Linear trend in the VE-corrected data is ~6 days/century (1917-2013, ±4 days/century, 95% range).

In the 2008 WSJ article Martin Jeffries, a local geophysicist, said:

The Nenana Ice Classic is a pretty good proxy for climate change in the 20th century.

And indeed it is. The break-up dates are strongly correlated to regional spring temperatures, which have warmed over the century, tracking the Nenana trend. But as with the cool weather January 2014 in parts of the lower 48, or warm weather in Europe or Alaska, the expected very large variability in winter weather can be relied on to produce outliers on a regular basis.

Given that year-to-year variability, it is predictable that whenever the annual result is above trend, it often gets cherry-picked to suggest that climate change is not happening (despite the persistent long-term trend). There are therefore no prizes for guessing which years’ results got a lot of attention from the ‘climate dismissives’*. This is the same phenomenon that happens every winter whenever there is some cold weather or snow somewhere. Indeed, it is so predictable** that it even gets its own xkcd cartoon:



(Climate data sourced from Climate Central.)

The fact remains that winters have been getting warmer on average. While scientists are very interested in potential influences on the variability (whether from volcanoes, solar effects, greenhouse gases, or Arctic sea ice loss), it remains the case that this is much harder and more uncertain than attributing trends in the mean, which, as should be clear, are much more robust. As yet there is no truly convincing evidence that any change in variance has been detected, though there are a lot of ideas out there, and some very interesting discussions (see for instance, Francis and Vavrus (2012) and Barnes (2013)).

For fun, I calculated some of the odds (Monte-Carlo simulations using observed mean, a distribution of trends based on the linear fit and the standard deviation of the residuals). This suggests that a date as late as May 20 (as in 2013) is very unexpected even without any climate trends (<0.7%) and even more so with (<0.2%), but that the odds of a date before April 29 have more than doubled (from 10% to 22%) with the trend. The most favored date is May 3rd (with no trend it would have been May 6th), but the odds of the break-up happening in that single 24 hour period are only around 1 in 14.
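
For those who want to play with the numbers, here is a rough sketch of that kind of Monte Carlo calculation in Python. The mean date, residual scatter and the exact setup are plausible guesses chosen to be roughly consistent with the figure above, not the actual fitted values, so the printed percentages will only approximately match those quoted.

```python
# Rough Monte Carlo of Nenana break-up dates with and without the linear trend.
import numpy as np

rng = np.random.default_rng(7)
n = 500_000

mean_doy = 126.0                           # ~May 6 as day-of-year (assumed)
resid_sd = 6.0                             # scatter of the residuals, days (assumed)
trend = rng.normal(-0.06, 0.02, n)         # days/yr, i.e. ~ -6 +/- 4 days/century
yrs_from_mid = 2014 - 0.5 * (1917 + 2013)  # years from the middle of the record

no_trend = mean_doy + rng.normal(0.0, resid_sd, n)
with_trend = mean_doy + trend * yrs_from_mid + rng.normal(0.0, resid_sd, n)

may20, apr29 = 140, 119                    # May 20 and April 29 as day-of-year
print("P(May 20 or later):", (no_trend >= may20).mean(), (with_trend >= may20).mean())
print("P(before April 29):", (no_trend < apr29).mean(), (with_trend < apr29).mean())
```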

So, the Nenana Ice Classic – unlike the other two examples I mentioned in the opening paragraph – does appear to be a useful climate metric. That isn’t to say that every year is going to follow the long-term trend (clearly it doesn’t), but you’d probably want to factor that in to (ever so slightly) improve your odds of winning.

* Yup. 2001, after 2008, 2011, and 2013.
** It is so predictable, I am thinking about opening a derivative market on whether this year’s Nenana result will get mentioned.

References

  1. R. Sagarin, and F. Micheli, "Climate Change in Nontraditional Data Sets", Science, vol. 294, p. 811, 2001. http://dx.doi.org/10.1126/science.1064218
  2. J.A. Francis, and S.J. Vavrus, "Evidence linking Arctic amplification to extreme weather in mid-latitudes", Geophysical Research Letters, vol. 39, L06801, 2012. http://dx.doi.org/10.1029/2012GL051000
  3. E.A. Barnes, "Revisiting the evidence linking Arctic Amplification to extreme weather in midlatitudes", Geophysical Research Letters, vol. 40, 2013. http://dx.doi.org/10.1002/grl.50880
Author: "gavin" Tags: "Climate impacts, Climate Science, Instru..."
Date: Wednesday, 05 Mar 2014 13:14

Guest commentary from Zeke Hausfather and Robert Rohde

Daily temperature data is an important tool to help measure changes in extremes like heat waves and cold spells. To date, only raw quality-controlled (but not homogenized) daily temperature data has been available through GHCN-Daily and similar sources. Using this data is problematic when looking at long-term trends, as localized artifacts like station moves, time of observation changes, and instrument changes can introduce significant biases.

For example, if you were studying the history of extreme heat in Chicago, you would find a slew of days in the late 1930s and early 1940s where the station currently at the Chicago O’Hare airport reported daily max temperatures above 45 degrees C (113 F). It turns out that, prior to the airport’s construction, the station now associated with the airport was on the top of a black roofed building closer to the city. This is a common occurrence for stations in the U.S., where many stations were moved from city cores to newly constructed airports or wastewater treatment plants in the 1940s. Using the raw data without correcting for these sorts of bias would not be particularly helpful in understanding changes in extremes.




Berkeley Earth has newly released a homogenized daily temperature field, built as a refinement upon our monthly temperature field using similar techniques. In constructing the monthly temperature field, we identified inhomogeneities in station time series caused by biasing events such as station moves and instrument changes, and measured their impact. The daily analysis begins by applying the same set of inhomogeneity breakpoints to the corresponding daily station time series.

Each daily time series is transformed into a series of temperature anomalies by subtracting from each daily observation the corresponding monthly average at the same station. These daily anomaly series are then combined using similar mathematics to our monthly process (e.g. Kriging), but with an empirically determined correlation vs. distance function that falls off more rapidly and accounts for the more localized nature of daily weather fluctuations. For each day, the resulting daily temperature anomaly field is then added to the corresponding monthly temperature field to create an overall daily temperature field. The resulting daily temperature field captures the large day-to-day fluctuations in weather but also preserves the long-term characteristics already present in the monthly field.
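
Schematically, the bookkeeping looks something like the following (a toy sketch in Python: the Gaussian distance-weighting is only a crude stand-in for the Kriging step with its empirically fitted correlation function, and the array layouts are hypothetical):

```python
# Daily anomalies relative to each station's monthly mean, then a simple
# distance-weighted interpolation of those anomalies onto grid points.
import numpy as np

def daily_anomalies(daily_T, year_month):
    """Daily value minus the station's average for that same month (e.g. July 1995).
    daily_T: (n_days,) series for one station; year_month: (n_days,) labels like 199507."""
    anoms = np.empty_like(daily_T, dtype=float)
    for ym in np.unique(year_month):
        sel = year_month == ym
        anoms[sel] = daily_T[sel] - daily_T[sel].mean()
    return anoms

def interpolate_anomaly(station_xy, station_anom, grid_xy, length_scale=500.0):
    """Gaussian distance-weighted average onto the grid (stand-in for Kriging)."""
    d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=-1)
    w = np.exp(-(d / length_scale) ** 2)
    return (w * station_anom).sum(axis=1) / w.sum(axis=1)

# For a given day: daily_field = monthly_field + interpolate_anomaly(...)
```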

As there are substantially more monthly records than daily records in the early periods (e.g. 1880-1950), treating the daily data as a refinement to the monthly data allows us to get maximum utility from the monthly data when determining long-term trends.  Additionally, performing the Kriging step on daily anomaly fields simplifies the computation in a way that makes it much more computationally tractable.

You can find the homogenized gridded 1-degree latitude by 1-degree longitude daily temperature data here in NetCDF format (though note that a single daily series like TMin is ~4.2 GB). We will be releasing an individual homogenized station series in the near future. Additional videos of daily absolute temperatures are also available.
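
If you want to work with the gridded file programmatically, something along these lines should do; the file and variable names here are hypothetical placeholders, so check the actual download for the real ones.

```python
# Open the 1x1 degree daily field with xarray and inspect its contents.
import xarray as xr

ds = xr.open_dataset("Berkeley_TMIN_Daily_LatLong1.nc")   # hypothetical filename
print(ds)                                                 # list dimensions and variables
# Once you know the variable and coordinate names, e.g.:
# field = ds["temperature"].sel(time="2010-07-15")
```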

Climate Data Visualization with Google Maps Engine

Through a partnership with Google we’ve created a number of different interactive climate maps using their new Maps Engine. These include maps of temperature trends from different periods to present (1810, 1860, 1910, 1960, 1990), maps of record high and low temperatures, and other interesting climate aspects like average temperature, daily temperature range, seasonal temperature range, and temperature variability.

These maps utilize a number of neat features of Google’s Maps Engine. They dynamically update as you zoom in, changing both the contour lines shown and the points of interest. For example, in the 1960-present trend map shown above, you can click on a point to show the regional climate summary for each location. As you zoom in, you can see more clickable points appear. You can also enter a specific address in the search bar and see aspects of the climate at that location.

We also have a map of all ~40,000 stations in our database, with markers for each that show the homogenized record for that station when clicked. These highlight all the detected breakpoints, and show how the station record compares to the regional expectation once breakpoints have been corrected. You can also click on the image to get to a web page for each station that shows the raw data as well as various statistics about the station.

New Global Temperature Series



Berkeley Earth has a new global temperature series available. This was created by combining our land record with a Kriged ocean time series using data from HadSST3. The results fit quite well with other published series, and are closest to the new estimates by Cowtan and Way. This is somewhat unsurprising, as both of us use HadSST3 as our ocean series and use Kriging for spatial interpolation.

If we zoom into the period since 1979, both Berkeley and Cowtan and Way have somewhat higher trends than other series over the last decade. This is due mainly to the coverage over the Arctic (note that we use air temperature over ice instead of sea temperature under ice for areas of the world with seasonal sea ice coverage).

Author: "group" Tags: "Climate Science, Instrumental Record"