

Date: Monday, 06 Oct 2014 10:51

Guest commentary from Richard Millar (U. Oxford)

The recent Lewis and Curry study of climate sensitivity estimated from the transient surface temperature record is being lauded as something of a game-changer – but how much of a game-changer is it really?

The method at the heart of the new study is essentially identical to that used in the much discussed Otto et al. (2013) study. This method uses a simple equation for the energy balance of the climate, together with observations of global temperature change, estimated ocean heat uptake anomalies, and a time series of historical radiative forcing (code), in order to make inferences about the equilibrium climate sensitivity (ECS – the ultimate equilibrium warming resulting from doubling carbon dioxide concentrations) and its shorter-term counterpart, the transient climate response (TCR – the warming at the point of doubling when carbon dioxide concentrations are increased at 1% per year). [Ed. An overview of different methods to calculate sensitivity is available here. The L&C results are also discussed here].
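For readers who want to see the structure of the method, here is a minimal sketch of the energy-budget relations behind Otto et al. and Lewis & Curry. The ΔT, ΔF and ΔQ values below are illustrative assumptions, not the exact inputs used in either paper.

```python
# Minimal sketch of the energy-budget estimates used in Otto et al. (2013)
# and Lewis & Curry (2014). The numbers below are illustrative assumptions
# only, not the exact inputs of either paper.

F2X = 3.71      # W/m^2, forcing from doubling CO2 (assumed canonical value)

dT = 0.75       # K, change in global mean surface temperature between periods (assumed)
dF = 1.95       # W/m^2, change in total radiative forcing between periods (assumed)
dQ = 0.65       # W/m^2, change in Earth's energy imbalance / ocean heat uptake (assumed)

# Transient climate response: warming per unit forcing, scaled to 2xCO2 forcing
tcr = F2X * dT / dF

# Equilibrium climate sensitivity: the heat still going into the ocean (dQ)
# represents warming "in the pipeline", so it is subtracted from the forcing
ecs = F2X * dT / (dF - dQ)

print(f"TCR ~ {tcr:.2f} K, ECS ~ {ecs:.2f} K")
```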

Lewis and Curry use an updated radiative forcing estimate over that used in Otto et al, along with slightly different assumptions about the periods used to define the observational anomalies. They use the latest IPCC numbers for radiative forcing and global temperature changes, but not the latest IPCC ocean heat content data. Their result is a 5-95% confidence interval on ECS of 1.1-4.1K and on TCR of 0.9-2.5K. These confidence intervals are very consistent with other constraints, from paleo or emergent observations, and with the range of GCM estimates. For the TCR, arguably the more important measure of the climate response for policy makers as it is a better predictor of cumulative carbon budgets, the 5-95% confidence interval is in fact almost identical to the AR5 likely range and similar to the CMIP5 general circulation model (GCM) estimated 5-95% range (shown below).


Figure 1: The 5-95% confidence ranges for transient climate response (TCR) taken from various studies, as in Fig. TS.TFE6.2 of IPCC AR5 WG1. The green-bordered bar at the top of the figure is the estimated 5-95% range from the CMIP5 GCMs; the blue-bordered bar is the 5-95% range from the Lewis and Curry (2014) study. The grey shading represents the AR5 consensus likely range for TCR.

There is a difference between the Lewis and Curry 17-83% confidence intervals and the IPCC likely ranges for TCR and ECS. However, for all quantities that are not directly observable, the IPCC typically interprets the 5-95% confidence intervals as likely ranges to account for the possibility that the model used to derive the confidence intervals could be missing something important (i.e. non-linearity that would not be captured by the simple models used in Otto et al and Lewis and Curry, which can particularly be a problem for ECS estimates using this method as the climate feedback parameter is assumed to be constant in time) [IPCC AR5 WG1 Ch10.8.2]. In this case, accounting for more complete surface temperature changes (Cowtan and Way, 2013), or the hemispheric imbalance associated with aerosol forcing (Shindell, 2014), or updates in the OHC changes, may all shift the Lewis and Curry distribution. [Ed. This expert judgement related to structural uncertainty was also applied to the attribution statements discussed here before].

The median estimate of the TCR from Lewis and Curry (1.3K) is towards the lower end of the IPCC likely range and lower than the CMIP5 median value of around 1.8K. A simple way to understand the importance of the exact TCR value for mitigation policy is via its impact on the cumulative carbon budget for avoiding a 2K threshold of global surface temperature warming. Using the Allen and Stocker relationship between TCR and TCRE (the transient climate response to cumulative emissions), we can scale the remaining carbon budget to reflect different values of the TCR. Taking the IPCC CO2-only carbon budget of 1000 GtC (based on the CMIP5 median TCR of 1.8K) to have a better than 2 in 3 chance of restricting CO2-induced warming to beneath 2K means that emissions would have to fall on average at 2.4%/year from today onwards. If instead we take the Lewis and Curry median estimate (1.3K), emissions would have to fall at 1.2%/year. If TCR is at the 5th or 95th percentile of the Lewis and Curry range, emissions would need to fall at 0.6%/year or 7.1%/year respectively.

Non-CO2 emissions also contribute to peak warming. The RCP scenarios have a non-CO2 contribution to the 2K peak warming threshold of around 0.5K [IPCC AR5 WG1 – Summary for Policymakers]. Therefore, to limit total warming to 2K, the CO2-induced contribution to peak warming is restricted to around 1.5K. This restricts the remaining carbon budget further, meaning that emissions would have to fall at 4.5%/year assuming a TCR of 1.8K or 1.9%/year taking TCR to be equal to the Lewis & Curry median estimate of 1.3K (assuming no mitigation of non-CO2 emissions).
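To make the arithmetic in the two preceding paragraphs explicit, here is a minimal sketch of the budget scaling. The TCRE relationship makes the CO2-only budget roughly inversely proportional to TCR, and if emissions decline exponentially at a rate r from a current rate E0, the remaining cumulative emissions are E0/r. The past-emissions and current-emissions values below are round-number assumptions, so the resulting rates only approximate the percentages quoted above.

```python
# Sketch of the cumulative-carbon-budget scaling discussed above.
# Illustrative assumptions: the current emissions rate and cumulative past
# emissions are rough round numbers, so the resulting reduction rates only
# approximate the figures quoted in the text.

def reduction_rate(tcr, budget_ref_gtc=1000.0, tcr_ref=1.8,
                   past_emissions_gtc=515.0, current_rate_gtc_per_yr=10.0):
    """Required exponential emissions-decline rate (fraction/yr) to stay
    within a CO2-only budget scaled inversely with TCR (via TCRE)."""
    total_budget = budget_ref_gtc * (tcr_ref / tcr)   # budget ~ 1/TCR
    remaining = total_budget - past_emissions_gtc
    if remaining <= 0:
        return float("inf")   # budget already exhausted
    # For E(t) = E0 * exp(-r*t), cumulative future emissions = E0 / r
    return current_rate_gtc_per_yr / remaining

for tcr in (1.8, 1.3, 0.9, 2.5):
    print(f"TCR = {tcr:.1f} K -> emissions must fall ~{100*reduction_rate(tcr):.1f}%/yr")
```

Reserving ~0.5K of the 2K threshold for non-CO2 warming, as in the paragraph above, simply shrinks the reference budget before the same scaling is applied.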

While of some scientific interest, the impact for real-world mitigation policy of the range of conceivable values for the TCR is small (see also this discussion in Sci. Am.). For targets like the 2 K guide-rail, a TCR at the lower end of the Lewis and Curry and IPCC ranges might just be the difference between an achievable rate of emissions reduction and an impossible one…

References

  1. N. Lewis, and J.A. Curry, "The implications for climate sensitivity of AR5 forcing and heat uptake estimates", Clim Dyn, 2014. http://dx.doi.org/10.1007/s00382-014-2342-y
  2. A. Otto, F.E.L. Otto, O. Boucher, J. Church, G. Hegerl, P.M. Forster, N.P. Gillett, J. Gregory, G.C. Johnson, R. Knutti, N. Lewis, U. Lohmann, J. Marotzke, G. Myhre, D. Shindell, B. Stevens, and M.R. Allen, "Energy budget constraints on climate response", Nature Geosci, vol. 6, pp. 415-416, 2013. http://dx.doi.org/10.1038/ngeo1836
  3. K. Cowtan, and R.G. Way, "Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends", Q.J.R. Meteorol. Soc., vol. 140, pp. 1935-1944, 2014. http://dx.doi.org/10.1002/qj.2297
  4. D.T. Shindell, "Inhomogeneous forcing and transient climate sensitivity", Nature Climate Change, vol. 4, pp. 274-277, 2014. http://dx.doi.org/10.1038/nclimate2136
  5. P.J. Durack, P.J. Gleckler, F.W. Landerer, and K.E. Taylor, "Quantifying underestimates of long-term upper-ocean warming", Nature Climate Change, 2014. http://dx.doi.org/10.1038/nclimate2389
  6. M.R. Allen, and T.F. Stocker, "Impact of delay in reducing carbon dioxide emissions", Nature Climate Change, vol. 4, pp. 23-26, 2013. http://dx.doi.org/10.1038/nclimate2077
Author: "group" Tags: "Climate modelling, Climate Science, Gree..."
Date: Saturday, 04 Oct 2014 17:06

This month’s open thread.

Author: "group" Tags: "Climate Science, Open thread"
Date: Wednesday, 01 Oct 2014 17:27

In a comment in Nature titled Ditch the 2 °C warming goal, political scientist David Victor and retired astrophysicist Charles Kennel advocate just that. But their arguments don’t hold water.

It is clear that the opinion article by Victor & Kennel is meant to be provocative. But even when making allowances for that, the arguments which they present are ill-informed and simply not supported by the facts. The case for limiting global warming to at most 2°C above preindustrial temperatures remains very strong.

Let’s start with an argument that they apparently consider especially important, given that they devote a whole section and a graph to it. They claim:

The scientific basis for the 2 °C goal is tenuous. The planet’s average temperature has barely risen in the past 16 years.

They fail to explain why short-term global temperature variability would have any bearing on the 2 °C limit – and indeed this is not logical. The short-term variations in global temperature, despite causing large variations in short-term rates of warming, are very small – their standard deviation is less than 0.1 °C for annual values and much less for decadal averages (see graph – this can just as well be seen in the graph of Victor & Kennel). If this means that due to internal variability we’re not sure whether we’ve reached 2 °C warming or just 1.9 °C or 2.1 °C – so what? This is a very minor uncertainty. (And as our readers know well, picking 1998 as the start year in this argument is rather disingenuous – it is the one year that sticks out most above the long-term trend of all years since 1880, due to the strongest El Niño event ever recorded.)

Figure: Global-mean surface temperature 1880-2013 (NASA GISS data). The grey line shows annual values, the blue line a LOESS smooth to highlight the long-term evolution. The latter is well reproduced by climate models when driven by all the known forcings (see Fig. TS-9 of the IPCC AR5). Note that the annual values typically stray by only ~0.1 °C from this smooth evolution due to natural variability such as the El Niño – Southern Oscillation. The year 1998 sticks out more than any other year above the blue line – even so, 2010 and 2005 are the warmest years on record. Values are given relative to a preindustrial baseline, the exact definition of which may be debated but only adds a minor uncertainty – here it was chosen as the mean temperature of 1880-1900.
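For readers who want to check the ~0.1 °C figure themselves, here is a minimal sketch of that calculation: fit a LOESS smooth to the annual global-mean anomalies and look at the scatter of the residuals. The synthetic series below merely stands in for the GISTEMP annual data, which are not downloaded here.

```python
# Sketch of the ~0.1 degC interannual-variability estimate in the caption:
# smooth the annual global-mean series and look at the scatter around it.
# The synthetic arrays stand in for the GISTEMP annual anomalies (1880-2013);
# no data are downloaded here.

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def residual_std(years, temps, frac=0.3):
    """Return the LOESS smooth and the standard deviation of the residuals."""
    smooth = lowess(temps, years, frac=frac, return_sorted=False)
    residuals = np.asarray(temps) - smooth
    return smooth, residuals.std(ddof=1)

# Example with synthetic data standing in for the observations:
rng = np.random.default_rng(0)
years = np.arange(1880, 2014)
temps = 0.008 * (years - 1880) + rng.normal(0, 0.1, years.size)  # fake series
_, sigma = residual_std(years, temps)
print(f"std of annual departures from the smooth: ~{sigma:.2f} degC")
```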

The logic of 2 °C

Climate policy needs a “long-term global goal” (as the Cancun Agreements call it) against which efforts can be measured to evaluate their adequacy. This goal must be consistent with the concept of “preventing dangerous climate change” but must be quantitative. Obviously it must relate to the dangers of climate change and thus result from a risk assessment. There are many risks of climate change (see schematic below), but to be practical, there cannot be many “long-term global goals” – one needs to agree on a single indicator that covers the multitude of risks. Global temperature is the obvious choice because it is a single metric that is (a) closely linked to radiative forcing (i.e. the main anthropogenic interference in the climate system) and (b) the quantity on which most impacts and risks depend. In practical terms this also applies to impacts that depend on local temperature (e.g. Greenland melt), because local temperatures to a good approximation scale with global temperature (that applies in the longer term, e.g. for 30-year averages, but of course not to short-term internal variability). One notable exception is ocean acidification, which is not a climate impact but a direct impact of rising CO2 levels in the atmosphere – it is to my knowledge currently not covered by the UNFCCC.


From emissions to impacts.

Once an overall long-term goal has been defined, it is a matter of science to determine what emissions trajectories are compatible with this, and these can and will be adjusted as time goes by and knowledge increases.

Why not use limiting greenhouse gas concentrations to a certain level, e.g. 450 ppm CO2-equivalent, as the long-term global goal? This option has its advocates and has been much discussed, but it is one step further removed along the causal chain shown above from the actual impacts and risks we want to avoid, so an extra layer of uncertainty is added. That extra uncertainty is the uncertainty in climate sensitivity, whose overall range spans a factor of three (1.5-4.5 °C) according to the IPCC. This would mean that as scientific understanding of climate sensitivity evolves in coming decades, one might have to re-open negotiations about the “long-term global goal”. With the 2 °C limit that is not the case – the strategic goal would remain the same, only the specific emissions trajectories would need to be adjusted in order to stick to this goal. That is an important advantage.
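A back-of-the-envelope illustration of that extra layer of uncertainty: translating a concentration goal into equilibrium warming requires a value for the climate sensitivity, so a single CO2-equivalent level maps onto a wide temperature range. The sketch below uses the standard logarithmic forcing approximation; the 450 ppm target and 280 ppm preindustrial values are illustrative assumptions.

```python
# Rough illustration of why a concentration goal inherits the climate-
# sensitivity uncertainty: the same CO2-equivalent level maps onto a wide
# range of equilibrium warming across the IPCC ECS range (1.5-4.5 K).
import math

F2X = 3.71            # W/m^2 forcing per CO2 doubling (assumed)
C0 = 280.0            # ppm, preindustrial CO2-equivalent (assumed)
target = 450.0        # ppm CO2-equivalent goal discussed in the text

forcing = F2X * math.log(target / C0) / math.log(2.0)
for ecs in (1.5, 3.0, 4.5):
    warming = ecs * forcing / F2X    # equilibrium warming for this sensitivity
    print(f"ECS = {ecs:.1f} K -> ~{warming:.1f} K equilibrium warming at {target:.0f} ppm CO2-eq")
```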

2 °C is feasible

Victor & Kennel claim that the 2 °C limit is “effectively unachievable”. In support they only offer a self-citation to a David Victor article, but in fact they disagree with the vast majority of the scientific literature on this point. The IPCC has only this year summarized this literature, finding that the best estimate of the cost of limiting warming to 2 °C is a reduction of annual consumption growth by 0.06 percentage points (1). This implies just a minor delay in economic growth. If you normally would have a consumption growth of 2% per year (say), the cost of the transformation would reduce this to 1.94% per year. This can hardly be called prohibitively expensive. When Victor & Kennel claim holding the 2 °C line is unachievable, they are merely expressing a personal, pessimistic political opinion. This political pessimism may well be justified, but it should be expressed as such and not be confused with a geophysical, technological or economic infeasibility of limiting warming to below 2 °C.

Because Victor & Kennel complain about policy makers “chasing an unattainable goal”, they apparently assume that their alternative proposal of focusing on specific climate impacts would lead to a weaker, more easily attainable limit on global warming. But they provide no evidence for this, and most likely the opposite is true. One needs to keep in mind that 2 °C was already devised based on the risks of certain impacts, as epitomized in the famous “reasons for concern” and “burning embers” diagrams of the last IPCC reports (see the IPCC WG2 SPM page 13), which lay out major risks as a function of global temperature. Several major risks are considered “high” already for 2 °C warming, and if anything many of these assessed risks have increased from the 3rd to the 4th to the 5th IPCC reports, i.e. they may arise already at lower temperature values than previously thought.

One of the rationales behind 2 °C was the AR4 assessment that above 1.9 °C global warming we start running the risk of triggering the irreversible loss of the Greenland Ice Sheet, eventually leading to a global sea-level rise of 7 meters. In the AR5, this risk is reassessed to start already at 1 °C global warming. And sea-level projections of the AR5 are much higher than those of the AR4.

Even since the AR5, new science is pointing to higher risks. We have since learned that parts of Western Antarctica have probably already crossed the threshold of a marine ice sheet instability (it is well worth reading the commentaries by Antarctica experts Eric Rignot or Anders Levermann on this development), and that significant amounts of potentially unstable ice exist even in East Antarctica, held back only by small “ice plugs”. Regarding extreme events, we have learnt that record-breaking monthly heat waves have already increased five-fold above the number expected in a stationary climate. (These are heat waves like the one in Europe in 2003, which caused ~70,000 fatalities.)

And we should not forget that after 2 °C warming we will be well outside the range of temperature variation of the entire Holocene; the planet will be hotter than anything experienced during human civilisation.

If anything, there are good arguments to revise the 2 °C limit downward. Such a possible revision is actually foreseen in the Cancun Agreements, because the small island nations and least developed countries have long pushed for 1.5 °C, for good reasons.

Uncritically adopted?

Victor & Kennel claim the 2 °C guardrail was “uncritically adopted”. They appear to be unaware of the fact that it took almost twenty years of intense discussions, both in the scientific and the policy communities, until this limit was agreed upon. As soon as the world’s nations agreed at the 1992 Rio summit to “prevent dangerous anthropogenic interference with the climate system”, the debate started on how to specify the danger level and operationalize this goal. A “tolerable temperature window” up to 2 °C above preindustrial was first proposed as a practical solution in 1995 in a report by the German government’s Advisory Council on Global Change (WBGU). It subsequently became the climate policy guidance of first the German government and then the European Union. It was formally adopted by the EU in 2005.

Also in 2005, a major scientific conference hosted by the UK government took place in Exeter (covered at RealClimate) to discuss and describe scientifically what “avoiding dangerous climate change” means. The results were published in a 400-page book by Cambridge University Press. Not least there are the IPCC reports as mentioned above, and the Copenhagen Climate Science Congress in March 2009 (synthesis report available in 8 languages), where the 2 °C limit was an important issue discussed also in the final plenary with then Danish Prime Minister Anders Fogh Rasmussen (‘Don’t give us too many moving targets – it is already complex’).

After further debate, 2 °C was finally adopted at the UNFCCC climate summit in Cancun in December 2010. Nota bene as an upper limit. The official text (Decision 1/CP.16 Para I(4)) pledges

to hold the increase in global average temperature below 2 °C above pre-industrial levels.

So talking about a 2 °C “goal” or “target” is misleading – nobody in their right mind would aim to warm the climate by 2 °C. The goal is to avoid just that, namely keeping warming below 2 °C. As an upper limit it was also originally proposed by the WBGU.

What are the alternatives?

Victor & Kennel propose to track a bunch of “vital signs” rather than global-mean surface temperature. They write:

What is ultimately needed is a volatility index that measures the evolving risk from extreme events.

As anyone who has ever thought about extreme events – which by definition are rare – knows, the uncertainties relating to extreme events and their link to anthropogenic forcing are many times larger than those relating to global temperature. It is rather illogical to complain about ~0.1 °C variability in global temperature, but then propose a much more volatile index instead.

Or take this proposal:

Because energy stored in the deep oceans will be released over decades or centuries, ocean heat content is a good proxy for the long-term risk to future generations and planetary-scale ecology.

It seems that the authors are not getting the physics of the climate system here. The deep oceans will almost certainly not release any heat for at least a thousand years to come; instead they will continue to absorb heat while slowly catching up with the much greater surface warming. It is also unclear what the amount of heat stored in the deep ocean has to do with risks and impacts at our planet’s surface – if deep ocean heat uptake increases (e.g. due to a reduction in deep water renewal rates, as predicted by IPCC), how would this affect people and ecosystems on land?

Vital Signs

The idea to monitor other vital signs of our home planet and to keep them within acceptable bounds is of course neither bad nor new. In fact, in addition to the 2 °C warming limit the WBGU has also proposed to limit ocean acidification to at most 0.2 (in terms of the reduction of the mean pH of the global surface ocean) and to limit global sea-level rise to at most 1 meter. And there is a high-profile scientific debate about further planetary boundaries which Victor & Kennel don’t bother mentioning, although the 2009 Nature paper A safe operating space for humanity by Rockström et al. has already clocked up 932 citations in Web of Science. The key difference from Victor & Kennel, apart from the better scientific foundation of these earlier proposals, is that these bounds are intended as additional and complementary to the 2 °C limit, not as a replacement for it.

If one wanted to sabotage the chances for a meaningful agreement in Paris next year, towards which the negotiations have been ongoing for several years, there’d hardly be a better way than reopening the debate about the foundation that was finally agreed, namely the global long-term goal of limiting warming to at most 2 °C. This would be a sure recipe to delay the process by years. That is time which we do not have if we want to prevent dangerous climate change.

 

Footnote

(1) According to IPCC, mitigation consistent with the 2°C limit involves annualized reduction of consumption growth by 0.04 to 0.14 (median: 0.06) percentage points over the century relative to annualized consumption growth in the baseline that is between 1.6% and 3% per year. Estimates do not include the benefits of reduced climate change as well as co-benefits and adverse side-effects of mitigation. Estimates at the high end of these cost ranges are from models that are relatively inflexible to achieve the deep emissions reductions required in the long run to meet these goals and/or include assumptions about market imperfections that would raise costs.
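For a sense of scale, here is the compounding arithmetic behind that estimate, with the baseline growth rate simply assumed to be 2% per year (the footnote's baseline range is 1.6-3%).

```python
# Compounding illustration of the mitigation-cost estimate in the footnote:
# reducing annual consumption growth by 0.06 percentage points over a century.
baseline_growth = 0.02       # 2%/yr baseline consumption growth (assumed)
mitigated_growth = baseline_growth - 0.0006
years = 100

baseline = (1 + baseline_growth) ** years
mitigated = (1 + mitigated_growth) ** years
print(f"Consumption after {years} yr: x{baseline:.2f} baseline vs x{mitigated:.2f} with mitigation")
print(f"Relative consumption loss at the end of the century: {100 * (1 - mitigated / baseline):.1f}%")
```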

 

Links

The Guardian: Could the 2°C climate target be completely wrong?

We have covered just the main points – a more detailed analysis [pdf] of the further questionable claims by Victor and Kennel has been prepared by the scientists from Climate Analytics.

Climate Progress: 2°C Or Not 2°C: Why We Must Not Ditch Scientific Reality In Climate Policy

Carbon Brief: Scientists weigh in on two degrees target for curbing global warming. Lots of leading climate scientists comment on Victor & Kennel (none agree with them).

Jonathan Koomey: The Case for the 2 C Warming Limit

Update 6 October: David Victor has posted a lengthy response at Dot Earth. I was most surprised by the fact that he says that I use “the same tactics that are often decried of the far right—of slamming people personally with codewords like “political scientist” and “retired astrophysicist” to dismiss us as irrelevant”. I did not know either author, so I looked up Thomson Reuters Web of Science (as I routinely do) to see what their field of scientific work is. I have no idea why calling someone a political scientist or astrophysicist are “codewords” (for what?) or could be taken as “ad hominem slam” (I’m proud to have started my career in astrophysics), or why a political scientist should not be exactly the kind of person qualified to comment on the climate policy negotiation process. I thought that this is just the kind of thing political scientists analyse. In any case I want to make very clear that characterising these scientists by naming their fields of scientific work was not intended to call into question their expertise, nor did my final paragraph intend to imply they are trying to sabotage an agreement in Paris – I just fear that this would be the (surely unintended) effect if their proposal were to be adopted.

Author: "stefan" Tags: "Climate impacts, Climate Science, Instru..."
Date: Friday, 26 Sep 2014 20:10

Global Warming: The Science and Modeling of Climate Change is a free online adaptation of a college-level class for non-science majors at the University of Chicago (textbook, video lectures). The class includes 33 short exercises for playing with on-line models, 5 “number-cruncher” problems where you create simple models from scratch in a spreadsheet or programming language, and 8 “explainer” assignments where you explain some concept as you would to a smart 11-year-old child (short, simple, clear) and exchange these with other students in the class for feedback. The discussion forums are very lively, as thousands of people from around the world make their way through the video lectures and exercises – lots to chat about. This is our third run of the class, so we’re getting the kinks out. We hope you find it useful. September 29 – December 31, 2014.


Author: "david" Tags: "Climate Science"
Date: Tuesday, 23 Sep 2014 13:54
Author: "david" Tags: "Climate Science"
Date: Tuesday, 02 Sep 2014 12:08

Climate blogs and comment threads are full of ‘arguments by analogy’. Depending on what ‘side’ one is on, climate science is either like evolution/heliocentrism/quantum physics/relativity or eugenics/phrenology/Ptolemaic cosmology/phlogiston. Climate contrarians are either like flat-earthers/birthers/moon-landing hoaxers/vaccine-autism linkers or Galileo/stomach ulcer-Helicobacter proponents/Wegener/Copernicus. Episodes of clear misconduct or dysfunction in other spheres of life are closely parsed only to find clubs with which to beat an opponent. Etc. Etc.

While the users of these ‘arguments’ often assume that they are persuasive or illuminating, the only thing that is revealed is how the proposer feels about climate science. If they think it is generally on the right track, the appropriate analogy is some consensus that has been validated many times and the critics are foolish stuck-in-the-muds or corporate disinformers, and if they don’t, the analogy is to a consensus that was overturned and where the critics are the noble paradigm-shifting ‘heretics’. This is far closer to wishful thinking than actual thinking, but it does occasionally signal clearly who is not worth talking to. For instance, an article pretending to serious discussion on climate that starts with a treatise about Lysenkoism in the Soviet Union is not to be taken seriously.

Since the truth or falsity of any scientific claim can only be evaluated on its own terms – and not via its association with other ideas or the character of its proponents – this kind of argument is only rhetorical. It gets no-one closer to the truth of any particular matter. The fact is that many, many times, mainstream science has survived multiple challenges by ‘sceptics’, and that sometimes (though not at all often) a broad consensus has been overturned. But knowing which case is which in any particular issue simply by looking for points of analogy with previous issues, without actually examining the data and theory directly, is impossible. The point being that arguments by analogy are not persuasive to anyone who doesn’t already agree with you on the substance.

Given the rarity of a consensus-overturning event, the only sensible prior is to assume that a consensus is probably valid absent very strong evidence to the contrary, which is incidentally the position adopted by the arch-sceptic Bertrand Russell. The contrary assumption implies there are no a priori reasons to think any scientific body of work is credible which, while consistent, is not one that I have ever found anyone professing in practice. Far more common is a selective rejection of science dependent on other reasons and that is not a coherent philosophical position at all.

Analogies do have their place of course – usually to demonstrate that a supposedly logical point falls down completely when applied to a different (but analogous) case. For instance, an implicit claim that all correct scientific theories are supported by a unanimity of Nobel Prize winners/members of the National Academies, is easily dismissed by reference to Kary Mullis or Peter Duesberg. A claim that CO2 can’t possibly have a significant effect solely because of its small atmospheric mixing ratio, can be refuted as a general claim by reference to other substances (such as arsenic, plutonium or Vitamin C) whose large effects due to small concentrations are well known. Or if a claim is made that all sciences except climate science are devoid of uncertainty, this is refuted by reference to, well, any other scientific field.

To be sure, I am not criticising the use of metaphor in a more general sense. Metaphors that use blankets to explain how the greenhouse effect works, income and spending in your bank account to stand in for the carbon cycle, what the wobbles in the Earth’s orbit would look like if the planet were your head, or conceptualizing the geologic timescale by compressing it to a day, for instance, all serve useful pedagogic roles. The crucial difference is that these mappings don’t come dripping with over-extended value judgements.

Another justification for the kind of analogy I’m objecting to is that it is simply for amusement: “Of course, I’m not really comparing my opponents to child molesters/food adulterers/mass-murderers – why can’t you take a joke?”. However, if you need to point out to someone that a joke (for adults at least) needs to have more substance than just calling someone a poopyhead, it is probably not worth the bother.

It would be nice to have a moratorium on all such analogical arguments, though obviously that is unlikely to happen. The comment thread here can assess this issue directly, but most such arguments on other threads are ruthlessly condemned to the bore-hole (where indeed many of them already co-exist). But perhaps we can put some pressure on users of these fallacies by pointing to this post and then refusing to engage further until someone actually has something substantive to offer. It may be pointless, but we can at least try.

Author: "gavin" Tags: "Climate Science, Communicating Climate"
Date: Tuesday, 02 Sep 2014 11:58

This month’s open thread. People could waste time rebunking predictable cherry-picked claims about the upcoming Arctic sea ice minimum, or perhaps discuss a selection of 10 climate change controversies from ICSU… Anything! (except mitigation).

Author: "group" Tags: "Climate Science, Open thread"
Date: Wednesday, 27 Aug 2014 13:45

I have written a number of times about the procedure used to attribute recent climate change (here in 2010, in 2012 (about the AR4 statement), and again in 2013 after AR5 was released). For people who want a summary of what the attribution problem is, how we think about the human contributions and why the IPCC reaches the conclusions it does, read those posts instead of this one.

The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is anthropogenic.



The probability density function for the fraction of warming attributable to human activity (derived from Fig. 10.5 in IPCC AR5). The bulk of the probability is far to the right of the “50%” line, and the peak is around 110%.

If you are still here, I should be clear that this post is focused on a specific claim Judith Curry has recently blogged about in support of a “50-50” attribution (i.e. that trends since the middle of the 20th Century are 50% human-caused and 50% natural, a position that would center her pdf at 0.5 in the figure above). She also commented on her puzzlement as to why other scientists don’t agree with her. Reading over her arguments in detail, I find very little to recommend them, and perhaps the reasoning for this will be interesting for readers. So, here follows a line-by-line commentary on her recent post. Please excuse the length.

Starting from the top… (note, quotes from Judith Curry’s blog are blockquoted).

Pick one:

a) Warming since 1950 is predominantly (more than 50%) caused by humans.

b) Warming since 1950 is predominantly caused by natural processes.

When faced with a choice between a) and b), I respond: ‘I can’t choose, since I think the most likely split between natural and anthropogenic causes to recent global warming is about 50-50’. Gavin thinks I’m ‘making things up’, so I promised yet another post on this topic.

This is not a good start. The statements that ended up in the IPCC SPMs are descriptions of what was found in the main chapters and in the papers they were assessing, not questions that were independently thought about and then answered. Thus while this dichotomy might represent Judith’s problem right now, it has nothing to do with what IPCC concluded. In addition, in framing this as a binary choice, it gives implicit (but invalid) support to the idea that each choice is equally likely. That this is invalid reasoning should be obvious by simply replacing 50% with any other value and noting that the half/half argument could be made independent of any data.

For background and context, see my previous 4 part series Overconfidence in the IPCC’s detection and attribution.

Framing

The IPCC’s AR5 attribution statement:

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. The best estimate of the human induced contribution to warming is similar to the observed warming over this period.

I’ve remarked on the ‘most’ (previous incarnation of ‘more than half’, equivalent in meaning) in my Uncertainty Monster paper:

Further, the attribution statement itself is at best imprecise and at worst ambiguous: what does “most” mean – 51% or 99%?

Whether it is 51% or 99% would seem to make a rather big difference regarding the policy response. It’s time for climate scientists to refine this range.

I am arguing here that the ‘choice’ regarding attribution shouldn’t be binary, and there should not be a break at 50%; rather we should consider the following terciles for the net anthropogenic contribution to warming since 1950:

  • >66%
  • 33-66%
  • <33%

JC note: I removed the bounds at 100% and 0% as per a comment from Bart Verheggen.

Hence 50-50 refers to the tercile 33-66% (as the midpoint)

Here Judith makes the same mistake that I commented on in my 2012 post – assuming that a statement about where the bulk of the pdf lies is a statement about where its mean is, and that it must be cut off at some value (whether it is 99% or 100%). Neither of those things follow. I will gloss over the completely unnecessary confusion of the meaning of the word ‘most’ (again thoroughly discussed in 2012). I will also not get into policy implications since the question itself is purely a scientific one.

The division into terciles for the analysis is not a problem though, and the weight of the pdf in each tercile can easily be calculated. Translating the top figure, the likelihood of the attribution of the 1950+ trend to anthropogenic forcings falling in each tercile is 2×10⁻⁴%, 0.4% and 99.5% respectively.

Note: I am referring only to a period of overall warming, so by definition the cooling argument is eliminated. Further, I am referring to the NET anthropogenic effect (greenhouse gases + aerosols + etc). I am looking to compare the relative magnitudes of net anthropogenic contribution with net natural contributions.

The two IPCC statements discussed attribution to greenhouse gases (in AR4) and to all anthropogenic forcings (in AR5) (the subtleties involved there are discussed in the 2013 post). I don’t know what she refers to as the ‘cooling argument’, since it is clear that the temperatures have indeed warmed since 1950 (the period referred to in the IPCC statements). It is worth pointing out that there can be no assumption that natural contributions must be positive – indeed for any random time period of any length, one would expect natural contributions to be cooling half the time.

Further, by global warming I refer explicitly to the historical record of global average surface temperatures. Other data sets such as ocean heat content, sea ice extent, whatever, are not sufficiently mature or long-range (see Climate data records: maturity matrix). Further, the surface temperature is most relevant to climate change impacts, since humans and land ecosystems live on the surface. I acknowledge that temperature variations can vary over the earth’s surface, and that heat can be stored/released by vertical processes in the atmosphere and ocean. But the key issue of societal relevance (not to mention the focus of IPCC detection and attribution arguments) is the realization of this heat on the Earth’s surface.

Fine with this.

IPCC

Before getting into my 50-50 argument, a brief review of the IPCC perspective on detection and attribution. For detection, see my post Overconfidence in IPCC’s detection and attribution. Part I.

Let me clarify the distinction between detection and attribution, as used by the IPCC. Detection refers to change above and beyond natural internal variability. Once a change is detected, attribution attempts to identify external drivers of the change.

The reasoning process used by the IPCC in assessing confidence in its attribution statement is described by this statement from the AR4:

“The approaches used in detection and attribution research described above cannot fully account for all uncertainties, and thus ultimately expert judgement is required to give a calibrated assessment of whether a specific cause is responsible for a given climate change. The assessment approach used in this chapter is to consider results from multiple studies using a variety of observational data sets, models, forcings and analysis techniques. The assessment based on these results typically takes into account the number of studies, the extent to which there is consensus among studies on the significance of detection results, the extent to which there is consensus on the consistency between the observed change and the change expected from forcing, the degree of consistency with other types of evidence, the extent to which known uncertainties are accounted for in and between studies, and whether there might be other physically plausible explanations for the given climate change. Having determined a particular likelihood assessment, this was then further downweighted to take into account any remaining uncertainties, such as, for example, structural uncertainties or a limited exploration of possible forcing histories of uncertain forcings. The overall assessment also considers whether several independent lines of evidence strengthen a result.” (IPCC AR4)

I won’t make a judgment here as to how ‘expert judgment’ and subjective ‘down weighting’ is different from ‘making things up’

Is expert judgement about the structural uncertainties in a statistical procedure associated with various assumptions that need to be made different from ‘making things up’? Actually, yes – it is.

AR5 Chapter 10 has a more extensive discussion on the philosophy and methodology of detection and attribution, but the general idea has not really changed from AR4.

In my previous post (related to the AR4), I asked the question: what was the original likelihood assessment from which this apparently minimal downweighting occurred? The AR5 provides an answer:

The best estimate of the human induced contribution to warming is similar to the observed warming over this period.

So, I interpret this as saying that the IPCC’s best estimate is that 100% of the warming since 1950 is attributable to humans, and they then down weight this to ‘more than half’ to account for various uncertainties. And then assign an ‘extremely likely’ confidence level to all this.

Making things up, anyone?

This is very confused. The basis of the AR5 calculation is summarised in figure 10.5:


Figure 10.5 IPCC AR5

The best estimate of the warming due to anthropogenic forcings (ANT) is the orange bar (noting the 1𝛔 uncertainties). Reading off the graph, it is 0.7±0.2ºC (5-95%) with the observed warming 0.65±0.06 (5-95%). The attribution then follows as having a mean of ~110%, with a 5-95% range of 80–130%. This easily justifies the IPCC claims of having a mean near 100%, and a very low likelihood of the attribution being less than 50% (p < 0.0001!). Note there is no ‘downweighting’ of any argument here – both statements are true given the numerical distribution. However, there must be some expert judgement to assess what potential structural errors might exist in the procedure. For instance, the assumption that fingerprint patterns are linearly additive, or uncertainties in the pattern because of deficiencies in the forcings or models etc. In the absence of any reason to think that the attribution procedure is biased (and Judith offers none), structural uncertainties will only serve to expand the spread. Note that one would need to expand the uncertainties by a factor of 3 in both directions to contradict the first part of the IPCC statement. That seems unlikely in the absence of any demonstration of some huge missing factors.
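As a check on those numbers, the tercile and below-50% probabilities quoted earlier can be approximated with a Gaussian fit to the figures just given (mean ~110%, 5-95% range 80-130%). This is only an approximation to the Fig. 10.5-derived pdf, so the tail values differ slightly from the ones stated above.

```python
# Gaussian approximation to the attribution pdf described above
# (mean ~110% of observed warming, 5-95% range ~80-130%). This is only a
# sketch of the calculation: the true pdf from Fig. 10.5 is not exactly
# normal, so the tail numbers differ somewhat from those quoted in the post.
from scipy.stats import norm

mean = 110.0
sigma = (130.0 - 80.0) / (2 * 1.645)   # half-width of the 5-95% range / 1.645

p_low = norm.cdf(33.0, mean, sigma)               # anthropogenic fraction < 33%
p_mid = norm.cdf(66.0, mean, sigma) - p_low       # 33-66% tercile
p_high = 1.0 - p_low - p_mid                      # > 66% tercile
p_below_half = norm.cdf(50.0, mean, sigma)        # the IPCC 'more than half' test

print(f"P(<33%) = {p_low:.2e}, P(33-66%) = {p_mid:.2e}, P(>66%) = {p_high:.4f}")
print(f"P(attribution < 50%) = {p_below_half:.1e}")
```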

I’ve just reread Overconfidence in IPCC’s detection and attribution. Part IV, I recommend that anyone who seriously wants to understand this should read this previous post. It explains why I think the AR5 detection and attribution reasoning is flawed.

Of particular relevance to the 50-50 argument, the IPCC has failed to convincingly demonstrate ‘detection.’ Because historical records aren’t long enough and paleo reconstructions are not reliable, the climate models ‘detect’ AGW by comparing natural forcing simulations with anthropogenically forced simulations. When the spectra of the variability of the unforced simulations is compared with the observed spectra of variability, the AR4 simulations show insufficient variability at 40-100 yrs, whereas AR5 simulations show reasonable variability. The IPCC then regards the divergence between unforced and anthropogenically forced simulations after ~1980 as the heart of their detection and attribution argument. See Figure 10.1 from AR5 WGI: (a) is with natural and anthropogenic forcing; (b) is without anthropogenic forcing:

Figure 10.1 IPCC AR5 WG1

This is also confused. “Detection” is (like attribution) a model-based exercise, starting from the idea that one can estimate the result of a counterfactual: what would the temperature have done in the absence of the drivers compared to what it would do if they were included? GCM results show clearly that the expected anthropogenic signal would start to be detectable (“come out of the noise”) sometime after 1980 (for reference, Hansen’s public statement to that effect was in 1988). There is no obvious discrepancy in spectra between the CMIP5 models and the observations, and so I am unclear why Judith finds the detection step lacking. It is interesting to note that given the variability in the models, the anthropogenic signal is now more than 5𝛔 over what would have been expected naturally (and if it’s good enough for the Higgs Boson….).

Note in particular that the models fail to simulate the observed warming between 1910 and 1940.

Here Judith is (I think) referring to the mismatch between the ensemble mean (red) and the observations (black) in that period. But the red line is simply an estimate of the forced trends, so the correct reading of the graph would be that the models do not support an argument suggesting that all of the 1910-1940 excursion is forced (contingent on the forcing datasets that were used), which is what was stated in AR5. However, the observations are well within the spread of the models and so could easily be within the range of the forced trend + simulated internal variability. A quick analysis (a proper attribution study is more involved than this) gives an observed trend over 1910-1940 of 0.13 to 0.15ºC/decade (depending on the dataset, with ±0.03ºC (5-95%) uncertainty in the OLS), while the spread in my collation of the historical CMIP5 models is 0.07±0.07ºC/decade (5-95%). Specifically, 8 model runs out of 131 have trends over that period greater than 0.13ºC/decade – suggesting that one might see this magnitude of excursion 5-10% of the time. For reference, the GHG-related trend in the GISS models over that period is about 0.06ºC/decade. However, the uncertainties in the forcings for that period are larger than in recent decades (in particular for the solar and aerosol-related emissions) and so the forced trend (0.07ºC/decade) could have been different in reality. And since we don’t have good ocean heat content data, nor any satellite observations, nor any measurements of stratospheric temperatures to help distinguish potential errors in the forcing from internal variability, it is inevitable that there will be more uncertainty in the attribution for that period than for more recent decades.
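The outline of that quick check is easy to reproduce. The sketch below assumes the observed annual anomalies and the collection of CMIP5 1910-1940 trends are already in hand (they are not included here), and it ignores autocorrelation in the residuals, so the interval is only approximate.

```python
# Outline of the quick 1910-1940 trend comparison described above.
# `obs_years`, `obs_temps` are assumed to hold annual observed anomalies, and
# `model_trends` an array of 1910-1940 trends (degC/decade) from CMIP5
# historical runs; neither dataset is included here.
import numpy as np
from scipy import stats

def decadal_trend_with_ci(years, temps):
    """OLS trend in degC/decade with an approximate 5-95% interval."""
    res = stats.linregress(years, temps)
    trend = 10 * res.slope
    ci_half_width = 10 * 1.645 * res.stderr      # assumes roughly normal, uncorrelated errors
    return trend, ci_half_width

# Example usage (with the assumed arrays):
# trend, ci = decadal_trend_with_ci(obs_years, obs_temps)
# frac_exceeding = np.mean(model_trends >= trend)
# print(f"obs trend {trend:.2f} +/- {ci:.2f} degC/decade; "
#       f"{100*frac_exceeding:.0f}% of model runs match or exceed it")
```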

The glaring flaw in their logic is this. If you are trying to attribute warming over a short period, e.g. since 1980, detection requires that you explicitly consider the phasing of multidecadal natural internal variability during that period (e.g. AMO, PDO), not just the spectra over a long time period. Attribution arguments of late 20th century warming have failed to pass the detection threshold which requires accounting for the phasing of the AMO and PDO. It is typically argued that these oscillations go up and down, in net they are a wash. Maybe, but they are NOT a wash when you are considering a period of the order, or shorter than, the multidecadal time scales associated with these oscillations.

Watch the pea under the thimble here. The IPCC statements were about a relatively long period (i.e. 1950 to 2005/2010). Judith jumps to assessing shorter trends (i.e. from 1980), and shorter periods obviously have the potential to have a higher component of internal variability. The whole point about looking at longer periods is that internal oscillations have a smaller contribution. Since she is arguing that the AMO/PDO have potentially multi-decadal periods, she should be supportive of using multi-decadal periods (i.e. 50, 60 years or more) for the attribution.

Further, in the presence of multidecadal oscillations with a nominal 60-80 yr time scale, convincing attribution requires that you can attribute the variability for more than one 60-80 yr period, preferably back to the mid 19th century. Not being able to address the attribution of change in the early 20th century to my mind precludes any highly confident attribution of change in the late 20th century.

This isn’t quite right. Our expectation (from basic theory and models) is that the second half of the 20th C is when anthropogenic effects really took off. Restricting attribution to 120-160 yr trends seems too constraining – though there is no problem in looking at that too. However, Judith is actually assuming what remains to be determined. What is the evidence that all 60-80 yr variability is natural? Variations in forcings (in particular aerosols, and maybe solar) can easily project onto this timescale and so any separation of forced vs. internal variability is really difficult based on statistical arguments alone (see also Mann et al, 2014). Indeed, it is the attribution exercise that helps you conclude what the magnitude of any internal oscillations might be. Note that if we were only looking at the global mean temperature, there would be quite a lot of wiggle room for different contributions. Looking deeper into different variables and spatial patterns is what allows for a more precise result.

The 50-50 argument

There are multiple lines of evidence supporting the 50-50 (middle tercile) attribution argument. Here are the major ones, to my mind.

Sensitivity

The 100% anthropogenic attribution from climate models is derived from climate models that have an average equilibrium climate sensitivity (ECS) around 3C. One of the major findings from AR5 WG1 was the divergence in ECS determined via climate models versus observations. This divergence led the AR5 to lower the likely bound on ECS to 1.5C (with ECS very unlikely to be below 1C).

Judith’s argument misstates how forcing fingerprints from GCMs are used in attribution studies. Notably, they are scaled to get the best fit to the observations (along with the other terms). If the models all had sensitivities of either 1ºC or 6ºC, the attribution to anthropogenic changes would be the same as long as the pattern of change was robust. What would change would be the scaling – less than one would imply a better fit with a lower sensitivity (or smaller forcing), and vice versa (see figure 10.4).
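For readers unfamiliar with that scaling step, here is a drastically simplified sketch of the idea. Real detection-and-attribution studies regress spatio-temporal patterns with a noise covariance estimated from control runs (often via total least squares); the plain OLS version below, with placeholder arrays, is only meant to show why the model's sensitivity drops out.

```python
# Drastically simplified sketch of the fingerprint-scaling idea described
# above: regress the observed series onto the anthropogenic (ANT) and natural
# (NAT) model fingerprints and read off the scaling factors. Real studies use
# spatio-temporal patterns and a noise covariance estimated from control runs;
# the arrays here are placeholders.
import numpy as np

def scaling_factors(obs, ant_fingerprint, nat_fingerprint):
    """OLS scaling factors beta such that obs ~ beta_ant*ANT + beta_nat*NAT."""
    X = np.column_stack([ant_fingerprint, nat_fingerprint])
    beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
    return beta   # [beta_ant, beta_nat]

# If a model's sensitivity is too high, its ANT fingerprint is too large and
# beta_ant < 1 compensates; the attributed warming beta_ant*ANT is unchanged,
# which is why the attribution does not hinge on the model's ECS.
```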

She also misstates how ECS is constrained – all constraints come from observations (whether from long-term paleo-climate observations, transient observations over the 20th Century or observations of emergent properties that correlate to sensitivity) combined with some sort of model. The divergence in AR5 was between constraints based on the transient observations using simplified energy balance models (EBM), and everything else. Subsequent work (for instance by Drew Shindell) has shown that the simplified EBMs are missing important transient effects associated with aerosols, and so the divergence is very likely less than AR5 assessed.

Nic Lewis at Climate Dialogue summarizes the observational evidence for ECS between 1.5 and 2C, with transient climate response (TCR) around 1.3C.

Nic Lewis has a comment at BishopHill on this:

The press release for the new study states: “Rapid warming in the last two and a half decades of the 20th century, they proposed in an earlier study, was roughly half due to global warming and half to the natural Atlantic Ocean cycle that kept more heat near the surface.” If only half the warming over 1976-2000 (linear trend 0.18°C/decade) was indeed anthropogenic, and the IPCC AR5 best estimate of the change in anthropogenic forcing over that period (linear trend 0.33Wm-2/decade) is accurate, then the transient climate response (TCR) would be little over 1°C. That is probably going too far, but the 1.3-1.4°C estimate in my and Marcel Crok’s report A Sensitive Matter is certainly supported by Chen and Tung’s findings.

Since the CMIP5 models used by the IPCC on average adequately reproduce observed global warming in the last two and a half decades of the 20th century without any contribution from multidecadal ocean variability, it follows that those models (whose mean TCR is slightly over 1.8°C) must be substantially too sensitive.

BTW, the longer term anthropogenic warming trends (50, 75 and 100 year) to 2011, after removing the solar, ENSO, volcanic and AMO signals given in Fig. 5 B of Tung’s earlier study (freely accessible via the link), of respectively 0.083, 0.078 and 0.068°C/decade also support low TCR values (varying from 0.91°C to 1.37°C), upon dividing by the linear trends exhibited by the IPCC AR5 best estimate time series for anthropogenic forcing. My own work gives TCR estimates towards the upper end of that range, still far below the average for CMIP5 models.

If true climate sensitivity is only 50-65% of the magnitude that is being simulated by climate models, then it is not unreasonable to infer that attribution of late 20th century warming is not 100% caused by anthropogenic factors, and attribution to anthropogenic forcing is in the middle tercile (50-50).

The IPCC’s attribution statement does not seem logically consistent with the uncertainty in climate sensitivity.

This is related to a paper by Tung and Zhou (2013). Note that the attribution statement has again shifted, this time to the last 25 years of the 20th Century (1976-2000). There are a couple of major problems with this argument though. First of all, Tung and Zhou assumed that all multi-decadal variability was associated with the Atlantic Multi-decadal Oscillation (AMO) and did not assess whether anthropogenic forcings could project onto this variability. It is circular reasoning to then use this paper to conclude that all multi-decadal variability is associated with the AMO.

The second problem is more serious. Lewis’ argument up until now has been that the best fit to the transient evolution over the 20th Century is with a relatively small sensitivity and small aerosol forcing (as opposed to a larger sensitivity and larger opposing aerosol forcing). However, in both these cases the attribution of the long-term trend to the combined anthropogenic effects is actually the same (near 100%). Indeed, one valid criticism of the recent papers on transient constraints is precisely that the simple models used do not have sufficient decadal variability!

Climate variability since 1900

From HadCRUT4:


The IPCC does not have a convincing explanation for:

  • warming from 1910-1940
  • cooling from 1940-1975
  • hiatus from 1998 to present

The IPCC purports to have a highly confident explanation for the warming since 1950, but it was only during the period 1976-2000 when the global surface temperatures actually increased.

The absence of convincing attribution of periods other than 1976-present to anthropogenic forcing leaves natural climate variability as the cause – some combination of solar (including solar indirect effects), uncertain volcanic forcing, natural internal (intrinsic variability) and possible unknown unknowns.

This point is not an argument for any particular attribution level. As is well known, using an argument of total ignorance to assume that the choice between two arbitrary alternatives must be 50/50 is a fallacy.

Attribution for any particular period follows exactly the same methodology as any other. What IPCC chooses to highlight is of course up to the authors, but there is nothing preventing an assessment of any of these periods. In general, the shorter the time period, the greater potential for internal variability, or (equivalently) the larger the forced signal needs to be in order to be detected. For instance, Pinatubo was a big rapid signal so that was detectable even in just a few years of data.

I gave a basic attribution for the 1910-1940 period above. The 1940-1975 average trend in the CMIP5 ensemble is -0.01ºC/decade (range -0.2 to 0.1ºC/decade), compared to -0.003 to -0.03ºC/decade in the observations, and so the models are a reasonable fit. The GHG-driven trends for this period are ~0.1ºC/decade, implying that there is a roughly opposite forcing coming from aerosols and volcanoes in the ensemble. The situation post-1998 is a little different because of the CMIP5 design, and ongoing reevaluations of recent forcings (Schmidt et al, 2014; Huber and Knutti, 2014). Better information about ocean heat content is also available to help there, but this is still a work in progress and is a great example of why it is harder to attribute changes over short time periods.

In the GCMs, the importance of internal variability to the trend decreases as a function of time. For 30 year trends, internal variations can have a ±0.12ºC/decade or so impact on trends; for 60 year trends, it is closer to ±0.08ºC/decade. For an expected anthropogenic trend of around 0.2ºC/decade, the signal will be clearer over the longer term. Thus cutting the record down to ever-shorter periods increases the challenges, and one can end up simply cherry picking the noise instead of seeing the signal.
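Those spread estimates can be derived from an unforced control run by looking at trends over overlapping segments of different lengths. A sketch is below, assuming control_temps holds an annual global-mean series from a long pre-industrial control simulation (not provided here).

```python
# Sketch of how the internal-variability contribution to trends shrinks with
# trend length: compute trends over overlapping segments of an unforced
# control run. `control_temps` is assumed to hold annual global-mean values
# from a long pre-industrial control simulation (not included here).
import numpy as np

def trend_spread(control_temps, segment_years):
    """5-95% half-range of decadal trends over overlapping segments."""
    temps = np.asarray(control_temps, dtype=float)
    years = np.arange(segment_years)
    trends = []
    for start in range(temps.size - segment_years + 1):
        slope = np.polyfit(years, temps[start:start + segment_years], 1)[0]
        trends.append(10 * slope)                     # degC/decade
    lo, hi = np.percentile(trends, [5, 95])
    return (hi - lo) / 2

# Example usage (with the assumed series):
# for n in (30, 60):
#     print(f"{n}-yr trends: +/- {trend_spread(control_temps, n):.2f} degC/decade")
```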

A key issue in attribution studies is to provide an answer to the question: When did anthropogenic global warming begin? As per the IPCC’s own analyses, significant warming didn’t begin until 1950. Just the Facts has a good post on this When did anthropogenic global warming begin?

I disagree as to whether this is a “key” issue for attribution studies, but as to when anthropogenic warming began, the answer is actually quite simple – when we started altering the atmosphere and land surface at climatically relevant scales. For the CO2 increase from deforestation this goes back millennia, for fossil fuel CO2, since the invention of the steam engine at least. In both cases there was a big uptick in the 18th Century. Perhaps that isn’t what Judith is getting at though. If she means when was it easily detectable, I discussed that above and the answer is sometime in the early 1980s.

The temperature record since 1900 is often characterized as a staircase, with periods of warming sequentially followed by periods of stasis/cooling. The stadium wave and Chen and Tung papers, among others, are consistent with the idea that the multidecadal oscillations, when superimposed on an overall warming trend, can account for the overall staircase pattern.

Nobody has any problems with the idea that multi-decadal internal variability might be important. The problem with many studies on this topic is the assumption that all multi-decadal variability is internal. This is very much an open question.

Let’s consider the 21st century hiatus. The continued forcing from CO2 over this period is substantial, not to mention ‘warming in the pipeline’ from late 20th century increase in CO2. To counter the expected warming from current forcing and the pipeline requires natural variability to effectively be of the same magnitude as the anthropogenic forcing. This is the rationale that Tung used to justify his 50-50 attribution (see also Tung and Zhou). The natural variability contribution may not be solely due to internal/intrinsic variability, and there is much speculation related to solar activity. There are also arguments related to aerosol forcing, which I personally find unconvincing (the topic of a future post).

Shorter time periods are noisier. There are more possible influences of an appropriate magnitude and, for the recent period, continued (and very frustrating) uncertainties in aerosol effects. This has very little to do with the attribution for longer time periods though (since the change in forcing is much larger and the impact of internal variability smaller).

The IPCC notes overall warming since 1880. In particular, the period 1910-1940 is a period of warming that is comparable in duration and magnitude to the warming 1976-2000. Any anthropogenic forcing of that warming is very small (see Figure 10.1 above). The timing of the early 20th century warming is consistent with the AMO/PDO (e.g. the stadium wave; also noted by Tung and Zhou). The big unanswered question is: Why is the period 1940-1970 significantly warmer than say 1880-1910? Is it the sun? Is it a longer period ocean oscillation? Could the same processes causing the early 20th century warming be contributing to the late 20th century warming?

If we were just looking at 30 year periods in isolation, it’s inevitable that there will be these ambiguities because data quality degrades quickly back in time. But that is exactly why IPCC looks at longer periods.

Not only don’t we know the answer to these questions, but no one even seems to be asking them!

This is simply not true.

Attribution

I am arguing that climate models are not fit for the purpose of detection and attribution of climate change on decadal to multidecadal timescales. Figure 10.1 speaks for itself in this regard (see figure 11.25 for a zoom in on the recent hiatus). By ‘fit for purpose’, I am prepared to settle for getting an answer that falls in the right tercile.

Given the results above, it would require a huge source of error to move the bulk of that probability anywhere other than the right tercile.

The main relevant deficiencies of climate models are:

  • climate sensitivity that appears to be too high, probably associated with problems in the fast thermodynamic feedbacks (water vapor, lapse rate, clouds)
  • failure to simulate the correct network of multidecadal oscillations and their correct phasing
  • substantial uncertainties in aerosol indirect effects
  • unknown and uncertain solar indirect effects

The sensitivity argument is irrelevant (given that it isn’t zero, of course). Simulation of the exact phasing of multi-decadal internal oscillations in a free-running GCM is impossible, so that is a tough bar to reach! There are indeed uncertainties in aerosol forcing (not just the indirect effects) and, especially in the earlier part of the 20th Century, uncertainties in solar trends and impacts. Indeed, there is even uncertainty in volcanic forcing. However, none of these issues really affects the attribution argument because a) differences in the magnitude of forcing over time are accounted for via the scaling factors in the attribution process, and b) errors in the spatial pattern will end up in the residuals, which are not large enough to change the overall assessment.

Nonetheless, it is worth thinking about what effect plausible variations in the aerosol or solar forcings could have. Given that we are talking about the net anthropogenic effect, the playing off of negative aerosol forcing against climate sensitivity within bounds actually has very little effect on the attribution, so that isn’t particularly relevant. A much bigger role for solar would have an impact, but the trend would need to be about 5 times stronger over the relevant period to change the IPCC statement, and I am not aware of any evidence to support this (and much that doesn’t).

So, how to sort this out and do a more realistic job of detecting climate change and attributing it to natural variability versus anthropogenic forcing? Observationally based methods and simple models have been underutilized in this regard. Of great importance is to consider uncertainties in external forcing in the context of attribution uncertainties.

It is inconsistent to talk in one breath about the importance of aerosol indirect effects and solar indirect effects and then state that ‘simple models’ are going to do the trick. Both of these issues relate to microphysical effects and atmospheric chemistry – neither of which are accounted for in simple models.

The logic of reasoning about climate uncertainty is not at all straightforward, as discussed in my paper Reasoning about climate uncertainty.

So, am I ‘making things up’? Seems to me that I am applying straightforward logic. Which IMO has been disturbingly absent in attribution arguments that use climate models that aren’t fit for purpose, use circular reasoning in detection, fail to assess the impact of forcing uncertainties on the attribution, and are heavily spiced by expert judgment and subjective downweighting.

My reading of the evidence suggests clearly that the IPCC conclusions are an accurate assessment of the issue. I have tried to follow the proposed logic of Judith’s points here, but unfortunately each one of these arguments is either based on a misunderstanding, an unfamiliarity with what is actually being done or is a red herring associated with shorter-term variability. If Judith is interested in why her arguments are not convincing to others, perhaps this can give her some clues.

References

  1. M.E. Mann, B.A. Steinman, and S.K. Miller, "On forced temperature changes, internal variability, and the AMO", Geophysical Research Letters, vol. 41, pp. 3211-3219, 2014. http://dx.doi.org/10.1002/2014GL059233
  2. K. Tung, and J. Zhou, "Using data to attribute episodes of warming and cooling in instrumental records", Proceedings of the National Academy of Sciences, vol. 110, pp. 2058-2063, 2013. http://dx.doi.org/10.1073/pnas.1212471110
  3. G.A. Schmidt, D.T. Shindell, and K. Tsigaridis, "Reconciling warming trends", Nature Geosci, vol. 7, pp. 158-160, 2014. http://dx.doi.org/10.1038/ngeo2105
  4. M. Huber, and R. Knutti, "Natural variability, radiative forcing and climate response in the recent hiatus reconciled", Nature Geosci, vol. 7, pp. 651-656, 2014. http://dx.doi.org/10.1038/ngeo2228
Author: "gavin" Tags: "Climate modelling, Climate Science, Inst..."
Date: Thursday, 14 Aug 2014 00:03

Siberia has explosion holes in it that smell like methane, and there are newly found bubbles of methane in the Arctic Ocean. As a result, journalists are contacting me assuming that the Arctic Methane Apocalypse has begun. However, as a climate scientist I remain much more concerned about the fossil fuel industry than I am about Arctic methane. Short answer: It would take about 20,000,000 such eruptions within a few years to generate the standard Arctic Methane Apocalypse that people have been talking about. Here’s where that statement comes from:

How much methane emission is “a lot”? The yardstick here comes from Natalie Shakhova, an Arctic methane oceanographer and modeler at the University of Fairbanks. She proposed that 50 Gton of methane (a gigaton is 10^15 grams) might erupt from the Arctic on a short time scale (Shakhova, 2010). Let’s call this a “Shakhova” event. There would be significant short-term climate disruption from a Shakhova event, with economic consequences explored by Whiteman et al (2013). The radiative forcing right after the release would be similar to that from fossil fuel CO2 by the end of the century, but subsiding quickly rather than continuing to grow as business-as-usual CO2 does.

I and others have been skeptical of the possibility that so much methane could escape from the Arctic so quickly, given the century to millennial time scale of warming the permafrost and ocean sediments, and point out that if the carbon is released slowly, the climate impacts will be small. But now that explosion holes are being found in Siberia, the question is

How much methane came out of that hole in Siberia? The hole is about 80 meters in diameter and 60-100 meters deep.

It’s hard to say exactly how much methane did this, because perhaps the crater allowed methane to be released from the surrounding soil. There may be emissions in the future from permafrost melting laterally from the sides of the hole. But for a start let’s assume that the volume of the hole is the same as the volume of the original, now escaped, bubble. Gases are compressible, so we need to know what its pressure was. The deeper in the Earth it was, the higher the pressure, but if we are concerned about gas whose release might be triggered by climate warming, we should look for pockets that come close to the surface. Deep pockets might take thousands of years for surface warming to reach. The mass of a solid cap ten meters thick would increase the pressure underneath it to about four atmospheres, plus there may have been some overpressure. Let’s assume a pressure of ten atmospheres (enough to hold up the atmosphere plus about 30 meters of rock).

If the bubble was pure methane, it would have contained about … wait for it … 0.000003 Gtons of methane. In other words, building a Shakhova event from these explosions would take approximately 20,000,000 explosions, all within a few years, or else the climate impact of the methane would be muted by the lifetime effect.
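For readers who want to check the arithmetic, here is a minimal sketch that reproduces the order of magnitude, assuming an ideal gas, a cylindrical hole 40 m in radius and 80 m deep, 10 atmospheres of pressure and a temperature near freezing (all assumptions following the rough numbers in the text):

```python
# Back-of-envelope re-derivation of the "0.000003 Gt per crater" estimate.
# Hole size, pressure and temperature are the assumed round numbers above;
# this is a sketch, not the author's actual calculation.
import math

radius_m = 40.0          # hole ~80 m across
depth_m = 80.0           # mid-range of the 60-100 m estimate
volume_m3 = math.pi * radius_m**2 * depth_m

pressure_pa = 10 * 101325.0   # assumed 10 atmospheres
temperature_k = 273.0         # near-freezing permafrost (assumption)
R = 8.314                     # gas constant, J / (mol K)

moles_ch4 = pressure_pa * volume_m3 / (R * temperature_k)
mass_gt = moles_ch4 * 0.016 / 1e12     # 16 g/mol methane, 1 Gt = 1e12 kg

print(f"methane per crater: {mass_gt:.1e} Gt")                   # ~3e-6 Gt
print(f"craters per 50 Gt 'Shakhova event': {50 / mass_gt:.1e}") # ~2e7
```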

What about the bubbles of methane they just found in the Arctic Ocean? There were reports this summer of a new expedition to the Siberian margin, documenting vast plumes of methane bubbles rising from sediments at ~500 meters water depth.

It is certainly believable that warming ocean waters could trigger an increase in methane emissions to the atmosphere, and that the time scale for changing ocean temperatures can be fast due to circulation changes (we are seeing the same thing in the Antarctic). But the time scale for heat to diffuse into the sediment, where methane hydrate can be found, should be slow, like that for permafrost on land or slower. More importantly, the atmospheric methane flux from the Arctic Ocean is really small (extrapolating estimates from Kort et al 2012), even compared with emissions from the Arctic land surface, which is itself only a few percent of global emissions (dominated by human sources and tropical wetlands).

In conclusion, despite recent explosions suggesting the contrary, I still feel that the future of Earth’s climate in this century and beyond will be determined mostly by the fossil fuel industry, and not by Arctic methane. We should keep our eyes on the ball.

References

  1. N.E. Shakhova, V.A. Alekseev, and I.P. Semiletov, "Predicted methane emission on the East Siberian shelf", Dokl. Earth Sc., vol. 430, pp. 190-193, 2010. http://dx.doi.org/10.1134/S1028334X10020091
  2. G. Whiteman, C. Hope, and P. Wadhams, "Climate science: Vast costs of Arctic change", Nature, vol. 499, pp. 401-403, 2013. http://dx.doi.org/10.1038/499401a
  3. E.A. Kort, S.C. Wofsy, B.C. Daube, M. Diao, J.W. Elkins, R.S. Gao, E.J. Hintsa, D.F. Hurst, R. Jimenez, F.L. Moore, J.R. Spackman, and M.A. Zondlo, "Atmospheric observations of Arctic Ocean methane emissions up to 82° north", Nature Geosci, vol. 5, pp. 318-321, 2012. http://dx.doi.org/10.1038/NGEO1452
Author: "david" Tags: "Arctic and Antarctic, Carbon cycle, Clim..."
Date: Tuesday, 05 Aug 2014 13:58

This month’s open thread. Keeping track of the Arctic sea ice minimum is interesting but there should be plenty of other climate science topics to discuss (if people can get past the hype about the Ebola outbreak or imaginary claims about anomalous thrusting). As with last month, please no discussion of mitigation strategies – it unfortunately does not bring out the best in the commentariat.

Author: "group" Tags: "Climate Science, Open thread"
Date: Thursday, 10 Jul 2014 08:49

A new study by Screen and Simmonds demonstrates the statistical connection between high-amplitude planetary waves in the atmosphere and extreme weather events on the ground.

Guest post by Dim Coumou

There has been an ongoing debate, both in and outside the scientific community, whether rapid climate change in the Arctic might affect circulation patterns in the mid-latitudes, and thereby possibly the frequency or intensity of extreme weather events. The Arctic has been warming much faster than the rest of the globe (about twice the rate), associated with a rapid decline in sea-ice extent. If parts of the world warm faster than others then of course gradients in the horizontal temperature distribution will change – in this case the equator-to-pole gradient – which then could affect large scale wind patterns.

Several dynamical mechanisms for this have been proposed recently. Francis and Vavrus (GRL 2012) argued that a reduction of the north-south temperature gradient would cause weaker zonal winds (winds blowing west to east) and therefore a slower eastward propagation of Rossby waves. A change in Rossby wave propagation has not yet been detected (Barnes 2013) but this does not mean that it will not change in the future. Slowly-traveling waves (or quasi-stationary waves) would lead to more persistent and therefore more extreme weather. Petoukhov et al (2013) actually showed that several recent high-impact extremes, both heat waves and flooding events, were associated with high-amplitude quasi-stationary waves.

Intuitively it makes sense that slowly-propagating Rossby waves lead to more surface extremes. These waves form in the mid-latitudes at the boundary of cold air to the north and warm air to the south. Thus, with persistent, strongly meandering isotherms, some regions will experience cold and others hot conditions. Moreover, slow wave propagation would prolong certain weather conditions and therefore lead to extremes on timescales of weeks: One day with temperatures over 30ºC in, say, Western Europe is not really unusual, but 10 or 20 days in a row would be.

But although it intuitively makes sense, the link between high-amplitude Rossby waves and surface extremes had so far not been properly documented in a statistical way. It is this piece of the puzzle which is addressed in the new paper by Screen and Simmonds recently published in Nature Climate Change (“Amplified mid-latitude planetary waves favour particular regional weather extremes”).

In a first step they extract the 40 most extreme months in the mid-latitudes for both temperature and precipitation in the 1979-2012 period, using all calendar months. They do this by averaging absolute values of temperature and precipitation anomalies, which is appropriate since planetary waves are likely to induce both negative and positive anomalies simultaneously in different regions. This way they determine the 40 most extreme months and also the 40 most moderate months, i.e., those months with the smallest absolute anomalies. By using monthly-averaged data, fast-traveling waves are filtered out and only the quasi-stationary component remains, i.e. the persistent weather conditions. Next they show that roughly half of the extreme months were associated with statistically significantly amplified waves. Conversely, the moderate months were associated with reduced wave activity. So this nicely confirms statistically what one would expect.
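A minimal sketch of that month-selection step, for readers who want to see the idea in code (the variable names and array shapes are hypothetical; the actual analysis uses observed 35°–60° N land anomalies):

```python
# Sketch of selecting "extreme" and "moderate" months by ranking the
# mid-latitude mean of the absolute anomaly, as described in the text.
import numpy as np

def extreme_and_moderate_months(anom, n=40):
    """anom: (time, lat, lon) monthly anomalies over mid-latitude land.
    Returns indices of the n most extreme and n most moderate months."""
    index = np.nanmean(np.abs(anom), axis=(1, 2))       # spatial mean of |anomaly|
    index = (index - index.mean()) / index.std()        # normalise the monthly index
    order = np.argsort(index)
    return order[-n:][::-1], order[:n]                  # extreme, moderate

# Example with random numbers standing in for 1979-2012 monthly fields:
rng = np.random.default_rng(0)
fake_anom = rng.normal(size=(408, 25, 360))
extreme, moderate = extreme_and_moderate_months(fake_anom)
```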


Figure: a,b, Normalized monthly time series of mid-latitude (35°–60° N) mean land-based absolute temperature anomalies (a) and absolute precipitation anomalies (b), 1979–2012. The 40 months with the largest values are identified by circles and labelled on the lower x axis, and the green line shows the threshold value for extremes. c,d, Normalized wave amplitude anomalies, for wave numbers 3–8, during the 40 months of mid-latitude mean temperature extremes (c) and precipitation extremes (d). The months are labelled on the abscissa in order of decreasing extremity from left to right. Grey shading masks anomalies that are not statistically significant at the 90% confidence level; specifically, anomalies with magnitude smaller than 1.64σ, the critical value of a Gaussian (normal) distribution for a two-tailed probability p = 0.1. Red shading indicates wave numbers that are significantly amplified compared to average and blue shading indicates wave numbers that are significantly attenuated compared to average. [Source: Screen and Simmonds, Nature Climate Change]

The most insightful part of the study is the regional analysis, whereby the same method is applied to 7 regions in the Northern Hemisphere mid-latitudes. It turns out that especially those regions at the western boundary of the continents (i.e., western North America and Europe) show the most significant association between surface extremes and planetary wave activity. Here, moderate temperatures tend to be particularly associated with reduced wave amplitudes, and extremes with increased wave amplitudes. Further eastwards this link becomes less significant, and in eastern Asia it even inverts: Here moderate temperatures are associated with amplified waves and extremes with reduced wave amplitudes. An explanation for this result is not discussed by the authors. Possibly, it could be explained by the fact that low wave amplitudes imply predominantly westerly flow. Such westerlies will bring moderate oceanic conditions to the western boundary regions, but will bring air from the continental interior towards East Asia.

Finally, the authors redo their analysis once more but now for each tail of the distribution individually. Thus, instead of using absolute anomalies, they treat cold, hot, dry and wet extremes separately. This way, they find that amplified quasi-stationary waves “increase probabilities of heat waves in western North America and central Asia, cold outbreaks in eastern North America, droughts in central North America, Europe and central Asia and wet spells in western Asia.” These results hint at a preferred position (i.e., “phase”) of quasi-stationary waves.

With their study, the authors highlight the importance of quasi-stationary waves in causing extreme surface weather. This is an important step forward, but of course many questions remain. Has planetary wave activity changed in recent decades or is it likely to do so under projected future warming? And, if it is changing, is the rapid Arctic warming indeed responsible?

 

Dim Coumou works as a senior scientist at the Potsdam Institute for Climate Impact Research, where he is leading a new research group which studies the links between large scale circulation and extreme weather.

References

  1. J.A. Francis, and S.J. Vavrus, "Evidence linking Arctic amplification to extreme weather in mid-latitudes", Geophysical Research Letters, vol. 39, pp. n/a-n/a, 2012. http://dx.doi.org/10.1029/2012GL051000
  2. E.A. Barnes, "Revisiting the evidence linking Arctic amplification to extreme weather in midlatitudes", Geophysical Research Letters, vol. 40, pp. 4734-4739, 2013. http://dx.doi.org/10.1002/grl.50880
  3. V. Petoukhov, S. Rahmstorf, S. Petri, and H.J. Schellnhuber, "Quasiresonant amplification of planetary waves and recent Northern Hemisphere weather extremes", Proceedings of the National Academy of Sciences, vol. 110, pp. 5336-5341, 2013. http://dx.doi.org/10.1073/pnas.1222000110
  4. J.A. Screen, and I. Simmonds, "Amplified mid-latitude planetary waves favour particular regional weather extremes", Nature Climate change, vol. 4, pp. 704-709, 2014. http://dx.doi.org/10.1038/NCLIMATE2271
Author: "stefan" Tags: "Arctic and Antarctic, Climate Science, I..."
Date: Sunday, 06 Jul 2014 14:05

Guest post by Jared Rennie, Cooperative Institute for Climate and Satellites, North Carolina on behalf of the databank working group of the International Surface Temperature Initiative

In the 21st Century, when multi-billion dollar decisions are being made to mitigate and adapt to climate change, society rightly expects openness and transparency in climate science to enable a greater understanding of how climate has changed and how it will continue to change. Arguably the very foundation of our understanding is the observational record. Today a new set of fundamental holdings of land surface air temperature records stretching back deep into the 19th Century has been released as a result of several years of effort by a multinational group of scientists.

The International Surface Temperature Initiative (ISTI) was launched by an international and multi-disciplinary group of scientists in 2010 to improve understanding of the Earth’s climate from the global to local scale. The Databank Working Group, under the leadership of NOAA’s National Climatic Data Center (NCDC), has produced an innovative data holding that largely leverages off existing data sources, but also incorporates many previously unavailable sources of surface air temperature. This data holding provides users a way to better track the origin of the data from its collection through its integration. By providing the data in various stages that lead to the integrated product, by including data origin tracking flags with information on each observation, and by providing the software used to process all observations, the processes involved in creating the observed fundamental climate record are completely open and transparent to the extent humanly possible.

Databank Architecture


The databank includes six data Stages, starting from the original observation to the final quality controlled and bias corrected product (Figure 1). The databank begins at Stage Zero holdings, which contain scanned images of digital observations in their original form. These images are hosted on the databank server when third party hosting is not possible. Stage One contains digitized data, in its native format, provided by the contributor. No effort is required on their part to convert the data into any other format. This reduces the possibility that errors could occur during translation. We collated over 50 sources ranging from single station records to holdings of several tens of thousands of stations.

Once data are submitted as Stage One, all data are converted into a common Stage Two format. In addition, data provenance flags are added to every observation to provide a history of that particular observation. Stage Two files are maintained in ASCII format, and the code to convert all the sources is provided. After collection and conversion to a common format, the data are then merged into a single, comprehensive Stage Three dataset. The algorithm that performs the merging is described below. Development of the merged dataset is followed by quality control and homogeneity adjustments (Stages Four and Five, respectively). These last two stages are not the responsibility of the Databank Working Group; see the discussion of the broader context below.

Merge Algorithm Description

The following is an overview of the process in which individual Stage Two sources are combined to form a comprehensive Stage Three dataset. A more detailed description can be found in a manuscript accepted and published by Geoscience Data Journal (Rennie et al., 2014).

The algorithm attempts to mimic the decisions an expert analyst would make manually. Given the fractured nature of historical data stewardship, many sources will inevitably contain records for the same station, so it is necessary to create a process for identifying and removing duplicate stations, merging some sources to produce a longer station record, and, in other cases, determining when a station should be brought in as a new distinct record.

The merge process is accomplished in an iterative fashion, starting from the highest priority data source (target) and running progressively through the other sources (candidates). A source hierarchy has been established which prioritizes datasets that have better data provenance, extensive metadata, and long, consistent periods of record. In addition it prioritizes holdings derived from daily data to allow consistency between daily holdings and monthly holdings. Every candidate station read in is compared to all target stations, and one of three possible decisions is made. First, when a station match is found, the candidate station is merged with the target station. Second, if the candidate station is determined to be unique it is added to the target dataset as a new station. Third, the available information is insufficient, conflicting, or ambiguous, and the candidate station is withheld.

Stations are first compared through their metadata to identify matching stations. Four tests are applied: geographic distance, height distance, station name similarity, and when the data record began. Non-missing metrics are then combined to create a metadata metric and it is determined whether to move on to data comparisons, or to withhold the candidate station. If a data comparison is deemed necessary, overlapping data between the target and candidate station is tested for goodness-of-fit using the Index of Agreement (IA). At least five years of overlap are required for a comparison to be made. A lookup table is used to provide two data metrics, the probability of station match (H1) and the probability of station uniqueness (H2). These are then combined with the metadata metric to create posterior metrics of station match and uniqueness. These are used to determine if the station is merged, added as unique, or withheld.
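To make the decision logic concrete, here is a hedged sketch of the comparison step. The Index of Agreement follows Willmott’s standard definition; the thresholds and metric combination below are illustrative placeholders only, not the values used in the actual merge code (which is available from the databank site):

```python
# Illustrative sketch of the station-comparison decision described above.
import numpy as np

def index_of_agreement(candidate, target):
    """Willmott's Index of Agreement between two overlapping monthly series."""
    o = np.asarray(target, float)
    p = np.asarray(candidate, float)
    o_bar = o.mean()
    return 1.0 - np.sum((p - o) ** 2) / np.sum((np.abs(p - o_bar) + np.abs(o - o_bar)) ** 2)

def decide(meta_metric, ia, overlap_years):
    """Toy three-way decision: merge, add as unique, or withhold."""
    if overlap_years < 5:                  # the paper requires >= 5 years of overlap
        return "withhold"
    if meta_metric > 0.8 and ia > 0.9:     # illustrative thresholds only
        return "merge"
    if meta_metric < 0.2 and ia < 0.5:
        return "unique"
    return "withhold"
```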

Stage Three Dataset Description


The integrated data holding recommended and endorsed by ISTI contains over 32,000 global stations (Figure 2), over four times as many stations as GHCN-M version 3. Although station coverage varies spatially and temporally, there are adequate stations with decadal and century periods of record at local, regional, and global scales. Since 1850, there are consistently more stations in the recommended merge than in GHCN-M (Figure 3). In GHCN-M version 3 there was a significant drop in station numbers in 1990, reflecting the dependency on the decadal World Weather Records collection as a source; this is ameliorated by many of the new sources, which can be updated much more rapidly and will enable better real-time monitoring.


Many thresholds are used in the merge and can be set by the user before running the merge program. Changing these thresholds can significantly alter the overall result of the program. Changes will also occur when the source priority hierarchy is altered. In order to characterize the uncertainty associated with the merge parameters, seven different variants of the Stage Three product were developed alongside the recommended merge. This uncertainty reflects the importance of data rescue. While a major effort has been undertaken through this initiative, more can be done to include areas that are lacking on both spatial and temporal scales, or lacking maximum and minimum temperature data.

Data Access

Version 1.0.0 of the Global Land Surface Databank has been released and data are provided from a primary ftp site hosted by the Global Observing Systems Information Center (GOSIC) and World Data Center A at NOAA NCDC. The Stage Three dataset has multiple formats, including a format approved by ISTI, a format similar to GHCN-M, and netCDF files adhering to the Climate and Forecast (CF) convention. The data holding is version controlled and will be updated frequently in response to newly discovered data sources and user comments.

All processing code is provided, for openness and transparency. Users are encouraged to experiment with the techniques used in these algorithms. The programs are designed to be modular, so that individuals have the option to develop and implement other methods that may be more robust than described here. We will remain open to releases of new versions should such techniques be constructed and verified.

ISTI’s online directory provides further details on the merging process and other aspects associated with the full development of the databank as well as all of the data and processing code.

We are always looking to increase the completeness and provenance of the holdings. Data submissions are always welcome and strongly encouraged. If you have a lead on a new data source, please contact data.submission@surfacetemperatures.org with any information which may be useful.

The broader context

It is important to stress that the databank is a release of fundamental data holdings – holdings which contain myriad non-climatic artefacts arising from instrument changes, siting changes, time of observation changes etc. To gain maximum value from these improved holdings it is imperative that as a global community we now analyze them in multiple distinct ways to ascertain better estimates of the true evolution of surface temperatures locally, regionally, and globally. Interested analysts are strongly encouraged to develop innovative approaches to the problem.

To help ascertain what works and what doesn’t, the benchmarking working group is developing and will soon release a set of analogs to the databank. These will share the space and time sampling of the holdings but contain a set of known (to the originators) data issues that require removal. When analysts apply their methods to the analogs, we can infer something meaningful about their methods. Further details are available in a discussion paper under peer review [Willett et al., submitted].

More Information

www.surfacetemperatures.org
ftp://ftp.ncdc.noaa.gov/pub/data/globaldatabank

References
Rennie, J.J. and coauthors, 2014, The International Surface Temperature Initiative Global Land Surface Databank: Monthly Temperature Data Version 1 Release Description and Methods. Accepted, Geoscience Data Journal.

Willett, K. M. et al., submitted, Concepts for benchmarking of homogenisation algorithm performance on the global scale. http://www.geosci-instrum-method-data-syst-discuss.net/4/235/2014/gid-4-235-2014.html

Author: "rasmus" Tags: "Climate Science, Instrumental Record"
Date: Wednesday, 02 Jul 2014 13:55

This month’s open thread. Topics of potential interest: The successful OCO-2 launch, continuing likelihood of an El Niño event this fall, predictions of the September Arctic sea ice minimum, Antarctic sea ice excursions, stochastic elements in climate models etc. Just for a change, no discussion of mitigation efforts please!

Author: "group" Tags: "Climate Science, Open thread"
Date: Sunday, 01 Jun 2014 23:35

June is the month when the Arctic Sea Ice outlook gets going, when the EPA releases its rules on power plant CO2 emissions, and when, hopefully, commenters can get back to actually having constructive and respectful conversations about climate science (and not nuclear energy, impending apocalypsi (pl) or how terrible everyone else is). Thanks.

Author: "group" Tags: "Climate Science, Open thread"
Date: Thursday, 08 May 2014 13:39

Guest commentary from Michelle L’Heureux, NOAA Climate Prediction Center

Much media attention has been directed at the possibility of an El Niño brewing this year. Many outlets have drawn comparison with the 1997-98 super El Niño. So, what are the odds that El Niño will occur? And if it does, how strong will it be?

To track El Niño, meteorologists at the NOAA/NWS Climate Prediction Center (CPC) release weekly and monthly updates on the status of the El Niño-Southern Oscillation (ENSO). The International Research Institute (IRI) for Climate and Society partners with us on the monthly ENSO release and is also a collaborator on a brand new “ENSO blog” which is part of www.climate.gov (co-sponsored by the NOAA Climate Programs Office).

Blogging ENSO is a first for operational ENSO forecasters, and we hope that it gives us another way to both inform and interact with our users on ENSO predictions and impacts. In addition, we will collaborate with other scientists to profile interesting ENSO research and delve into the societal dimensions of ENSO.

As far back as November 2013, the CPC and the IRI have predicted an elevated chance of El Niño (relative to historical chance or climatology) based on a combination of model predictions and general trends over the tropical Pacific Ocean. Once the chance of El Niño reached 50% in March 2014, an El Niño Watch was issued to alert the public that conditions are more favorable for the development of El Niño.
Current forecasts for the Nino-3.4 SST index (as of 5 May 2014) from the NCEP Climate Forecast System version 2 model.

More recently, on May 8th, the CPC/IRI ENSO team increased the chance that El Niño will develop, with a peak probability of ~80% during the late fall/early winter of this year. El Niño onset is currently favored sometime in the early summer (May-June-July). At this point, the team remains non-committal on the possible strength of El Niño, preferring to watch the system for at least another month or more before trying to infer the intensity. But, could we get a super strong event? The range of possibilities implied by some models alludes to such an outcome, but at this point the uncertainty is just too high. While subsurface heat content levels are well above average (March was the highest for that month since 1979 and April was the second highest), ENSO prediction relies on many other variables and factors. We also remain in the spring prediction barrier, which is a more uncertain time to be making ENSO predictions.

Could El Niño predictions fizzle? Yes, there is roughly a 2 in 10 chance at this point that this could happen. It happened in 2012, when an El Niño Watch was issued, chances became as high as 75%, and El Niño never formed. Such is the nature of seasonal climate forecasting when there is enough forecast uncertainty that “busts” can and do occur. In fact, more strictly, if the forecast probabilities are “reliable,” an event with an 80% chance of occurring should occur only 80% of the time over a long historical record. Therefore, 20% of the time the event must NOT occur (click here for a description of verification techniques).
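That reliability property can be checked directly from an archive of past probabilistic forecasts and outcomes. The sketch below, with synthetic numbers standing in for a real forecast history, simply bins forecasts by their stated probability and compares each bin with the observed frequency:

```python
# Minimal reliability check: does an "80% chance" verify about 80% of the time?
import numpy as np

def reliability_table(forecast_probs, outcomes, bins=(0, .2, .4, .6, .8, 1.0001)):
    probs = np.asarray(forecast_probs, float)
    obs = np.asarray(outcomes, float)           # 1 if the event occurred, else 0
    rows = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (probs >= lo) & (probs < hi)
        if sel.any():
            rows.append((probs[sel].mean(), obs[sel].mean(), int(sel.sum())))
    return rows   # (mean forecast probability, observed frequency, count) per bin

# A perfectly reliable 80% forecast verifies roughly 80% of the time:
rng = np.random.default_rng(1)
p = np.full(500, 0.8)
y = rng.random(500) < p
print(reliability_table(p, y))
```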

While folks might prefer total certainty in our forecasts, we live in an uncertain world. El Niño is most likely to occur this year, so please stay attentive to the various updates linked above and please visit our brand new ENSO blog.

Author: "mike" Tags: "Climate Science"
Date: Friday, 02 May 2014 13:35

This month’s open thread. In order to give everyone a break, no discussion of mitigation options this month – that has been done to death in previous threads. Anything related to climate science is totally fine: Carbon dioxide levels maybe, or TED talks perhaps…

Author: "group" Tags: "Climate Science, Open thread"
Faking it
Date: Wednesday, 30 Apr 2014 11:36

Every so often contrarians post old newspaper quotes with the implication that nothing being talked about now is unprecedented or even unusual. And frankly, there are lots of old articles that get things wrong, are sensationalist or made predictions without a solid basis. And those are just the articles about the economy.

However, there are plenty of science articles that are just interesting, reporting events and explorations in the Arctic and elsewhere that give a fascinating view into how early scientists were coming to an understanding about climate change and processes. In particular, in the Atlantic sector of the Arctic the summer of 1922 was (for the time) quite warm, and there were a number of reports that discussed some unprecedented (again, for the time) observations of open water. The most detailed report was in the Monthly Weather Review:

The same report was picked up by the Associated Press and short summary articles appeared in the Washington Post and L.A. Times on Nov 2nd (right). As you can read, the basic story is that open water was seen up to 81º 29′N near Spitzbergen (now referred to as Svalbard), and that this was accompanied by a shift in ecosystems and some land ice melting. It seems that the writers were more concerned with fishing than climate change though.

This clip started showing up around Aug 2007 (this is the earliest mention I can find). The main point in bringing it up was (I imagine) to have a bit of fun by noting the similarity of the headline “Arctic Ocean Getting Warm” and contemporaneous headlines discussing the very low sea ice amounts in 2007. Of course, this doesn’t imply that the situation was the same back in 1922 compared to 2007 (see below).

The text of Washington Post piece soon started popping up on blogs and forums. Sometime in late 2009, probably as part of a mass-forwarded email (remember those?), the text started appearing with the following addition (with small variations, e.g. compare this and this):

I apologize, I neglected to mention that this report was from November 2, 1922. As reported by the AP and published in The Washington Post

However, the text was still pretty much what was in the Washington Post article (some versions had typos of “Consulafft” instead of “Consul Ifft” (the actual consul’s name) and a few missing words). Snopes looked into it and they agreed that this was basically accurate – and they correctly concluded that the relevance to present-day ice conditions was limited.

But sometime in January 2010 (the earliest version I can find is from 08/Jan/2010), a version of the email started circulating with an extra line added:

“Within a few years it is predicted that due to the ice melt the sea will rise and make most coastal cities uninhabitable.”

This is odd on multiple levels. First of all, the rest of the piece is just about observations, not predictions of any sort. Nor is there any source given for these mysterious predictions (statistics? soothsaying? folk wisdom?). Indeed, since ice melt large enough to ‘make most coastal cities uninhabitable’ would be a big deal, you’d think that the Consul and AP would have been a little more concerned about the level of the sea instead of the level of the seals. In any case, the line is completely made up, a fiction, an untruth, a lie.

But now, instead of just an observation that sounds like observations being made today, the fake quote is supposed to demonstrate that people (implicitly scientists) have been making alarmist and unsupported claims for decades with obvious implications. This is pretty low by any standards.

The article with the fake quote has done the rounds of most of the major contrarian sites – including the GWPF, right-wing leaning local papers (Provo, UT), magazines (Quadrant in Australia, Canada Free Press) and blogs (eg. Small dead animals). The only pseudo-sceptic blog that doesn’t appear to have used it is WUWT! (though it has come up in comments). This is all despite some people noting that the last line was fake (at least as early as April 2011). Some of the mentions even link to the Snopes article (which doesn’t mention the fake last line) as proof that their version (with the fake quote) is authentic.

Last week it was used again by Richard Rahn in the Washington Times, and the fake quote was extracted and tweeted by CFACT, which is where I saw it.

So we have a situation where something real and actually interesting is found in the archives, it gets misrepresented as a ‘gotcha’ talking point, but someone thinks it can be made ‘better’ and so adds a fake last line to sex it up. Now with twitter, with its short quotes, some contrarians only quote the fakery. And thus a completely false talking point is created out of the whole cloth.

Unfortunately, this is not unusual.

Comparing 1922 and now

To understand why the original story is actually interesting, we need a little context. Estimates of Arctic sea ice go back to the 19th Century from fishing vessels and explorers though obviously they have got better in recent decades because of the satellite coverage. The IPCC AR5 report (Figure 4.3) shows a compilation of sea ice extent from HadISST1 (which is being updated as we speak), but it is clear enough for our purposes:

I have annotated the summer of 1922, which did see quite a large negative excursion Arctic-wide compared to previous years, though the excursion is perhaps not that unusual for the period. A clearer view can be seen in the Danish ice charts for August 1921 and 1922 (via the Icelandic Met Office):



The latitude for open-water in the 1922 figure is around 81ºN, as reported by the Consul. Browsing other images in the series indicates that Spitzbergen almost always remained ice-bound even in August, so the novelty of the 1922 observation is clear.

But what of now? We can look at the August 2013 operational ice charts (that manually combine satellite and in situ observations) from the Norwegian Met Office, and focus on the area of Svalbard/Spitzbergen. Note that 2013 was the widely touted year that Arctic sea ice ‘recovered’:



The open water easily extends to past 84ºN – many hundreds of kilometers further north than the ‘unprecedented’ situation in 1922. Data from the last 4 years shows some variability of course, but by late August there is consistently open water further north than 81º 30′N. The Consul’s observation, far from being novel, is now commonplace.

This implies that this article – when seen in context – is actually strongly confirming of a considerable decline in Arctic sea ice over the last 90 years. Not that CFACT is going to tweet that.

Author: "gavin" Tags: "Arctic and Antarctic, Climate Science, I..."
Date: Saturday, 26 Apr 2014 02:57

Somewhat randomly, my thoughts turned to the Nenana Ice Classic this evening, only to find that the ice break up had only just occurred (3:48 pm Alaskan Standard Time, April 25). This is quite early (the 7th earliest date, regardless of details associated with the vernal equinox or leap year issues), though perhaps unsurprising after the warm Alaskan winter this year (8th warmest on record). This is in strong contrast to the very late break up last year.



Break up dates accounting for leap years and variations in the vernal equinox.

As mentioned in my recent post, the Nenana break-up date is a good indicator of Alaskan regional temperatures and, despite last year’s late anomaly, the trends are very much towards an earlier spring. This is also true for trends in temperatures and ice break-up almost everywhere else, despite individual years (like 2013/2014) being anomalously cold (for instance in the Great Lakes region). As we’ve often stressed, it is the trends that are important for judging climate change, not the individual years. Nonetheless, the odds on dates as early as this year’s have more than doubled over the last century.

Author: "gavin" Tags: "Climate impacts, Climate Science, Instru..."
Date: Thursday, 24 Apr 2014 19:47

“These results are quite strange”, my colleague told me. He analysed some of the recent climate model results from an experiment known by the cryptic name ‘CMIP5‘. It turned out that the results were ok, but we had made an error when reading and processing the model output. The particular climate model that initially gave the strange results had used a different calendar set-up to the previous models we had examined.

In fact, the models used to compute the results in CMIP5 use several different calendars: Gregorian, an idealised 360-day year, or a year with no leap days. These differences do not really affect the model results; however, they are important to take into account in further analysis.
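As a small illustration of why this matters, the same numeric time value decodes to different dates under different model calendars. This uses the cftime library that most netCDF tools rely on; the time axis here is made up:

```python
# The same time coordinate means different dates under different calendars.
import cftime

units = "days since 2000-01-01"
for calendar in ("standard", "noleap", "360_day"):
    print(calendar, cftime.num2date(365, units, calendar=calendar))
# "standard" lands on 2000-12-31 (2000 is a leap year), "noleap" on 2001-01-01,
# and "360_day" on 2001-01-06 -- offsets that accumulate over a long run.
```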

Just to make things more complicated, model results and data often come with different time units (such as counting hours since 0001-01-01 00:00:0.0) and physical units (precipitation: mm/day or kg m⁻² s⁻¹; temperature: Kelvin, Fahrenheit, or Centigrade). Different countries use different decimal delimiters: point or comma. And missing values are sometimes represented as a blank space, some unrealistic number (-999), or ‘NA’ (not available) if the data is provided as ASCII files. No recorded rainfall is often represented by either 0 or the ASCII character ‘.’.

For station data, the numbers are often ordered differently, either as rows or columns, and with a few lines at the beginning (the header) containing varying amounts of description. There are almost as many ways to store data as there are groups providing data. Great!

Murphy’s law combined with this diversity of formats implies that reading and testing data takes time. Different scripts must be written for each data portal. The time it takes to read data can in principle be reduced to seconds, given appropriate means to do so (and the risk of making mistakes eliminated). Some data portals provide code such as Fortran programs, but using Fortran for data analysis is no longer very efficient.
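One pragmatic way to keep this manageable is to push the formatting quirks into a single reader with declarative options, so each new portal only needs a short declaration rather than a whole new script. The sketch below is illustrative only – the column layout, missing-value codes and file name are invented, not those of any particular portal:

```python
# Hypothetical reader that absorbs the quirks listed above in one place.
import pandas as pd

def read_station_file(path, decimal=".", missing=("-999", "-999.0", "NA", "")):
    """Read an ASCII station file, normalising missing values and delimiters."""
    return pd.read_csv(
        path,
        sep=r"\s+",              # whitespace-separated columns (assumption)
        comment="#",             # skip description lines starting with '#'
        decimal=decimal,         # handle comma vs point decimal delimiters
        na_values=list(missing), # map the usual missing-value codes to NaN
    )

# Hypothetical usage for a source that uses commas as the decimal delimiter:
# obs = read_station_file("station_1234.txt", decimal=",")
```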

We are not done with the formats. There are more aspects to data, analyses, and model results. A proper and unambiguous description of the data is always needed so that people know exactly what they are looking at. I think this will become more important with new efforts devoted to the World Meteorological Organisation’s (WMO) global framework on climate services (GFCS).

Data description is known as ‘meta-data‘, telling us what a variable represents, its units, the location and time, the method used to record or compute it, and its quality.

It is important to distinguish measurements from model results. The quality of data is given by error bars, whereas the reliability of model results can be described by various skill scores, depending on their nature.

There is a large range of possibilities for describing methods and skill scores, and my guess is that there is no less diversity than we see in data formats used in different portals. This diversity is also found in empirical-statistical downscaling.

A new challenge is that the volume of climate model results has grown almost explosively. How do we make sense of all these results and all the data? If the results come with proper meta-data, it may be possible to apply further statistical analysis to sort, categorise, identify links (regression), or apply geo-statistics.

Meta-data with a controlled vocabulary can help keep track of results and avoid ambiguities. It is also easier to design common analytical and visualisation methods for data which have a standard format. There are already some tools for visualisation and analysis such as Ferret and GrADS, however, mainly for gridded data.

Standardised meta-data also allows easy comparisons between the same type of results from different research communities, or between different types of results, e.g. by means of the experimental design (Thorarinsdottir et al., 2014). Such statistical analysis may make it possible to say whether certain choices lead to different results, if the results are tagged with the different schemes employed in the models. This type of analysis makes use of certain key words, based on a set of commonly agreed terms.

Similar terms, however, may mean different things to different communities, such as ‘model’, ‘prediction’, ‘non-stationarity’, ‘validation’, and ‘skill’. I have seen how misinterpretation of such concepts has led to confusion, particularly among people who don’t think that climate change is a problem.

There have been recent efforts to establish controlled vocabularies, e.g. through EUPORIAS and a project called downscaling metadata, and a new breed of concepts has entered climate research, such as COG and CIM.

There are further coordinated initiatives addressing standards for meta-data, data formats, and controlled vocabularies. Perhaps most notable are the Earth System Grid Federation (ESGF), the coupled model inter-comparison project (CMIP), and the coordinated regional downscaling experiment (CORDEX). The data format used by climate models, netCDF with the ‘CF‘ convention, is a good start, at least for model results on longitude-latitude grids. However, these initiatives don’t yet offer explanations of validation methods, skill scores, or modelling details.

Validation, definitions and meta-data have been discussed in a research project called ‘SPECS‘ (that explores the possibility for seasonal-to-decadal prediction) because it is important to understand the implications and limitations of its forecasts. There is also another project called VALUE that addresses the question of validation strategies and skill scores for downscaling methods.

Many climate models have undergone thorough evaluation, but this is not apparent unless one reads chapter 9 on model evaluation in the latest IPCC report (AR5). Even in this report, a systematic summary of the different evaluation schemes and skill scores is sometimes lacking, with the exception of a summary of spatial correlations between model results and analyses.

The information about model skill would be more readily accessible if the results were tagged with the types of tests used to verify them and with the test results (skill scores). An extra bonus is that a common practice of including a quality stamp describing validation may enhance the visibility of the evaluation aspect. To make such labelling effective, the labels should use well-defined terms and glossaries.
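As a concrete, entirely invented example of what such a machine-readable quality stamp might look like when attached alongside a set of model results (the field names and vocabulary below imply no existing standard):

```python
# Hypothetical "quality label" for a set of model results; all names invented.
result_metadata = {
    "variable": "tas",                        # near-surface air temperature
    "units": "K",
    "calendar": "noleap",                     # e.g. gregorian, 360_day, noleap
    "source": "example-GCM, historical run",  # hypothetical model name
    "processing_history": [
        "regridded to 2.5x2.5 deg",
        "monthly means computed from daily output",
    ],
    "evaluation": {                           # the quality stamp itself
        "reference": "observational reanalysis (assumed)",
        "metric": "spatial_correlation",
        "score": 0.97,
        "period": "1961-1990",
    },
}
```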

There is more to climate information than gridded results from a regional climate model. What about quantities such as return values, probabilities, storm tracks, the number of freezing events, intense rainfall events, the start of a rainy season, wet-day frequency, extremes, or droughts? The larger society needs information in a range of different formats, provided by climate services. Statistical analysis and empirical-statistical downscaling provide information in untraditional ways, as well as improved quantification of uncertainty (Katz et al., 2013).

Another important piece of information is the process history to make the results traceable and in principle replicable. The history is important for both the science community and for use in climate services.

One analogy to proper meta-data is to provide a label on climate information in a similar way to labels on medicine.

In summary, there has been much progress on climate data formats and standards, but I think we can go even further and become even more efficient by extending this work.

Update: Also see related Climate Informatics: Human Experts and the End-to-End System

References

  1. T. Thorarinsdottir, J. Sillmann, and R. Benestad, "Studying Statistical Methodology in Climate Research", Eos, Transactions American Geophysical Union, vol. 95, pp. 129-129, 2014. http://dx.doi.org/10.1002/2014EO150008
  2. R.W. Katz, P.F. Craigmile, P. Guttorp, M. Haran, B. Sansó, and M.L. Stein, "Uncertainty analysis in climate change assessments", Nature Climate change, vol. 3, pp. 769-771, 2013. http://dx.doi.org/10.1038/nclimate1980
Author: "rasmus" Tags: "Climate modelling, Glossary, Scientific ..."
Date: Thursday, 17 Apr 2014 08:56

Guest post by Brigitte Knopf

Global emissions continue to rise, and this is due in the first place to economic growth and, to a lesser extent, to population growth. To achieve climate protection, fossil power generation without CCS has to be phased out almost entirely by the end of the century. The mitigation of climate change constitutes a major technological and institutional challenge. But: It does not cost the world to save the planet.

This is how the new report was summarized by Ottmar Edenhofer, Co-Chair of Working Group III of the IPCC, whose report was adopted on 12 April 2014 in Berlin after intense debates with governments. The report consists of 16 chapters with more than 2000 pages. It was written by 235 authors from 58 countries and reviewed externally by 900 experts. Most prominent in public is the 33-page Summary for Policymakers (SPM) that was approved by all 193 countries. At first glance, the above summary does not sound spectacular but more like a truism that we’ve often heard over the years. But this report indeed has something new to offer.

The 2-degree limit

For the first time, a detailed analysis was performed of how the 2-degree limit can be kept, based on over 1200 future projections (scenarios) by a variety of different energy-economy computer models. The analysis is not just about the 2-degree guardrail in the strict sense but evaluates the entire space between 1.5 degrees Celsius, a limit demanded by small island states, and a 4-degree world. The scenarios show a variety of pathways, characterized by different costs, risks and co-benefits. The result is a table with about 60 entries that translates the requirements for limiting global warming to below 2-degrees into concrete numbers for cumulative emissions and emission reductions required by 2050 and 2100. This is accompanied by a detailed table showing the costs for these future pathways.

The IPCC represents the costs as consumption losses compared to a hypothetical ‘business-as-usual’ case. The table does not only show the median of all scenarios, but also the spread among the models. It turns out that the costs appear to be moderate in the medium term, until 2030 and 2050, but in the long term towards 2100 a large spread occurs, and under specific circumstances high costs of up to 11% consumption loss in 2100 could be faced. However, translated into a reduction of the growth rate, these numbers are actually quite low. Ambitious climate protection would cost only 0.06 percentage points of growth each year. This means that instead of a growth rate of about 2% per year, we would see a growth rate of 1.94% per year. Thus economic growth would merely continue at a slightly slower pace. However, and this is also said in the report, the distributional effects of climate policy between different countries can be very large. There will be countries that would have to bear much higher costs because they cannot use or sell any more of their coal and oil resources or have only limited potential to switch to renewable energy.
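A quick compounding check shows that the two framings are consistent; this rough illustration uses only the rounded numbers quoted above:

```python
# Compounding 1.94% instead of 2.00% per year from 2015 to 2100 leaves
# consumption a few percent lower, in line with the moderate central
# estimates above (the 11% figure is the upper end under specific conditions).
years = 2100 - 2015
baseline = 1.02 ** years
mitigation = 1.0194 ** years
print(f"consumption in 2100 relative to baseline: {mitigation / baseline:.3f}")
# -> about 0.95, i.e. roughly a 5% consumption loss by 2100
```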

The technological challenge

Furthermore – and this is new and important compared to the last report of 2007 – the costs are not only shown for the case when all technologies are available, but also for how the costs increase if, for example, nuclear power were dispensed with worldwide, or if solar and wind energy remain more expensive than expected.

The results show that economically and technically it would still be possible to remain below the level of 2-degrees temperature increase, but it will require rapid and global action and some technologies would be key:

Many models could not achieve atmospheric concentration levels of about 450 ppm CO2eq by 2100, if additional mitigation is considerably delayed or under limited availability of key technologies, such as bioenergy, CCS, and their combination (BECCS).

Probably not everyone likes to hear that CCS is a very important technology for keeping to the 2-degree limit and the report itself cautions that CCS and BECCS are not yet available at a large scale and also involve some risks. But it is important to emphasize that the technological challenges are similar for less ambitious temperature limits.

The institutional challenge

Of course, climate change is not just a technological issue but is described in the report as a major institutional challenge:

Substantial reductions in emissions would require large changes in investment patterns

Over the next two decades, these investment patterns would have to change towards low-carbon technologies and higher energy efficiency improvements (see Figure 1). In addition, there is a need for dedicated policies to reduce emissions, such as the establishment of emissions trading systems, as already exist in Europe and in a handful of other countries.

Since AR4, there has been an increased focus on policies designed to integrate multiple objectives, increase co‐benefits and reduce adverse side‐effects.

The growing number of national and sub-national policies, such as at the level of cities, means that in 2012, 67% of global GHG emissions were subject to national legislation or strategies, compared to only 45% in 2007. Nevertheless, and this is clearly stated in the SPM, there is no trend reversal of emissions in sight – instead, a global increase of emissions is observed.


Figure 1: Change in annual investment flows from the average baseline level over the next two decades (2010 to 2029) for mitigation scenarios that stabilize concentrations within the range of approximately 430–530 ppm CO2eq by 2100. Source: SPM, Figure SPM.9


Trends in emissions

A particularly interesting analysis, showing from which countries these emissions originate, was removed from the SPM at the intervention of some governments, because its regional breakdown of emissions was not in the interest of every country (see media coverage here or here). These figures are still available in the underlying chapters and the Technical Summary (TS), where government representatives cannot intervene and science can speak freely and unvarnished. One of these figures shows very clearly that over the last 10 years emissions in upper-middle-income countries (including, for example, China and Brazil) have increased, while emissions in high-income countries (including Germany) have stagnated; see Figure 2. Since income, along with population growth, is the main driver of emissions, regional emissions growth can only be understood by taking into account how countries' incomes have developed.

Historically, before 1970, emissions came mainly from the industrialized countries. With the regional shift of economic growth, however, emissions growth has moved to the upper-middle-income countries (see Figure 2), while the industrialized countries have stabilized at a high level. The condensed message of Figure 2 does not look promising: all countries seem to follow the path of the industrialized countries, and no “leap-frogging” over fossil-based development directly to a world of renewables and energy efficiency has been observed so far.
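The report frames these drivers with a Kaya-style decomposition, in which emissions are the product of population, income per capita, energy intensity of GDP and carbon intensity of energy. The sketch below only illustrates the mechanics of that decomposition; all growth rates in it are made-up assumptions, not values from the report.

```python
# Sketch of a Kaya-style decomposition of emission drivers:
#   CO2 = population x (GDP per capita) x (energy / GDP) x (CO2 / energy)
# In growth-rate terms the factors multiply, so their contributions roughly add.
# All growth rates below are made up purely to illustrate the mechanics.

population_growth = 0.010    # +1.0% per year (assumed)
income_growth     = 0.060    # +6.0% per year GDP per capita (assumed, fast-growing economy)
energy_intensity  = -0.020   # -2.0% per year energy used per unit of GDP (assumed)
carbon_intensity  = -0.005   # -0.5% per year CO2 per unit of energy (assumed)

emissions_growth = ((1 + population_growth) * (1 + income_growth)
                    * (1 + energy_intensity) * (1 + carbon_intensity)) - 1

print(f"emissions growth: {emissions_growth:.1%} per year")
# -> about +4.4% per year: income growth dominates unless efficiency and
#    decarbonization improve fast enough to offset it.
```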


Figure 2: Trends in GHG emissions by country income groups. Left panel: Total annual anthropogenic GHG emissions from 1970 to 2010 (GtCO2eq/yr). Middle panel: Trends in annual per capita mean and median GHG emissions from 1970 to 2010 (tCO2eq/cap/yr). Right panel: Distribution of annual per capita GHG emissions in 2010 of countries within each income group (tCO2/cap/yr). Source: TS, Figure TS.4


But the fact that emissions today are rising especially in countries like China is only one side of the coin. Part of the growth in CO2 emissions in the low- and middle-income countries is due to the production of consumption goods intended for export to the high-income countries (see Figure 3). Put in plain language: part of the growth of Chinese emissions is due to the fact that the smartphones used in Europe or the US are produced in China.


Figure 3: Total annual CO2 emissions (GtCO2/yr) from fossil fuel combustion for country income groups attributed on the basis of territory (solid line) and final consumption (dotted line). The shaded areas are the net CO2 trade balance (difference) between each of the four country income groups and the rest of the world. Source: TS, Figure TS.5
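The bookkeeping behind Figure 3 can be sketched as follows: territorial accounting counts CO2 where it is emitted, while consumption-based accounting re-attributes the CO2 embodied in traded goods to where the goods are finally consumed. The numbers and the single aggregated trade flow below are illustrative assumptions, not values from the report.

```python
# Sketch of territorial vs consumption-based emission accounting.
# Territorial: CO2 is counted where it is emitted.
# Consumption-based: CO2 embodied in traded goods is re-attributed to where
# the goods are finally consumed.
# All numbers are illustrative assumptions, not values from the report.

territorial = {                  # GtCO2/yr emitted within each group's territory
    "high income": 12.0,
    "upper middle income": 14.0,
}
embodied_in_exports = 1.5        # GtCO2/yr embodied in goods produced in the
                                 # upper-middle-income group and consumed in
                                 # the high-income group (assumed)

consumption_based = {
    "high income": territorial["high income"] + embodied_in_exports,
    "upper middle income": territorial["upper middle income"] - embodied_in_exports,
}

for group in territorial:
    print(f"{group:>20}: territorial {territorial[group]:4.1f} GtCO2/yr, "
          f"consumption-based {consumption_based[group]:4.1f} GtCO2/yr")
```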


The philosophy of climate change

Besides all the technological details, there is a further innovation in this report: a chapter on “Social, economic and ethical concepts and methods”, which could be called the philosophy of climate change. It emphasizes that

Issues of equity, justice, and fairness arise with respect to mitigation and adaptation. […] Many areas of climate policy‐making involve value judgements and ethical considerations.

This implies that many of these questions cannot be answered by science alone, for example which temperature level avoids dangerous anthropogenic interference with the climate system, or which technologies are perceived as risky. Science can provide information about costs, risks and co-benefits, but in the end finding the pathway society wants to take remains a process of social learning and debate.

Conclusion

The report contains many more details, for example on renewable energies, on sectoral strategies in the electricity and transport sectors, and on co-benefits of mitigation such as improved air quality. The aim of IPCC Working Group III, as the Co-Chair emphasized several times, was for scientists to act as mapmakers who help policymakers navigate the difficult terrain of the highly political issue of climate change, without being policy-prescriptive about which pathway should be taken or which one is the “correct” one. This requirement has been fulfilled and the map is now available. It remains to be seen where the policymakers will head in the future.


The report :

Climate Change 2014: Mitigation of Climate Change – IPCC Working Group III Contribution to AR5


Brigitte Knopf is head of the research group Energy Strategies Europe and Germany at the Potsdam Institute for Climate Impact Research (PIK), one of the authors of the IPCC Working Group III report, and is on Twitter as @BrigitteKnopf.

This article was translated from the German original at RC’s sister blog KlimaLounge.


RealClimate coverage of the IPCC 5th Assessment Report:

Summary of Part 1, Physical Science Basis

Summary of Part 2, Impacts, Adaptation, Vulnerability

Summary of Part 3, Mitigation

Sea-level rise in the AR5

Attribution of climate change to human causes

Radiative forcing of climate change
