

Date: Friday, 26 Sep 2014 20:10

Global Warming: The Science and Modeling of Climate Change is a free online adaptation of a college-level class for non-science majors at the University of Chicago (textbook, video lectures). The class includes 33 short exercises for playing with on-line models, 5 “number-cruncher” problems where you create simple models from scratch in a spreadsheet or programming language, and 8 “explainer” assignments where you explain some concept as you would to a smart 11-year-old child (short, simple, clear), and exchange these with other students in the class for feedback. The discussion forums are very lively, as thousands of people from around the world make their way through the video lectures and exercises, so there is plenty to chat about. This is our third run of the class, so we’re getting the kinks out. We hope you find it useful. The class runs September 29 – December 31, 2014.


Author: "david" Tags: "Climate Science"
Date: Tuesday, 02 Sep 2014 12:08

Climate blogs and comment threads are full of ‘arguments by analogy’. Depending on what ‘side’ one is on, climate science is either like evolution/heliocentrism/quantum physics/relativity or eugenics/phrenology/Ptolemaic cosmology/phlogiston. Climate contrarians are either like flat-earthers/birthers/moon-landing hoaxers/vaccine-autism linkers or Galileo/stomach ulcer-Helicobacter proponents/Wegener/Copernicus. Episodes of clear misconduct or dysfunction in other spheres of life are closely parsed only to find clubs with which to beat an opponent. Etc. Etc.

While the users of these ‘arguments’ often assume that they are persuasive or illuminating, the only thing that is revealed is how the proposer feels about climate science. If they think it is generally on the right track, the appropriate analogy is some consensus that has been validated many times and the critics are foolish stuck-in-the-muds or corporate disinformers, and if they don’t, the analogy is to a consensus that was overturned and where the critics are the noble paradigm-shifting ‘heretics’. This is far closer to wishful thinking than actual thinking, but it does occasionally signal clearly who is not worth talking to. For instance, an article pretending to serious discussion on climate that starts with a treatise about Lysenkoism in the Soviet Union is not to be taken seriously.

Since the truth or falsity of any scientific claim can only be evaluated on its own terms – and not via its association with other ideas or the character of its proponents – this kind of argument is only rhetorical. It gets no-one closer to the truth of any particular matter. The fact is that many, many times, mainstream science has survived multiple challenges by ‘sceptics’, and that sometimes (though not at all often), a broad consensus has been overturned. But knowing which case is which in any particular issue simply by looking for points of analogy with previous issues, without actually examining the data and theory directly, is impossible. The point being that arguments by analogy are not persuasive to anyone who doesn’t already agree with you on the substance.

Given the rarity of a consensus-overturning event, the only sensible prior is to assume that a consensus is probably valid absent very strong evidence to the contrary, which is incidentally the position adopted by the arch-sceptic Bertrand Russell. The contrary assumption – that there is no a priori reason to think any scientific body of work is credible – is at least consistent, but it is not a position I have ever found anyone professing in practice. Far more common is a selective rejection of science for other reasons, and that is not a coherent philosophical position at all.

Analogies do have their place of course – usually to demonstrate that a supposedly logical point falls down completely when applied to a different (but analogous) case. For instance, an implicit claim that all correct scientific theories are supported by a unanimity of Nobel Prize winners/members of the National Academies, is easily dismissed by reference to Kary Mullis or Peter Duesberg. A claim that CO2 can’t possibly have a significant effect solely because of its small atmospheric mixing ratio, can be refuted as a general claim by reference to other substances (such as arsenic, plutonium or Vitamin C) whose large effects due to small concentrations are well known. Or if a claim is made that all sciences except climate science are devoid of uncertainty, this is refuted by reference to, well, any other scientific field.

To be sure, I am not criticising the use of metaphor in a more general sense. Metaphors that use blankets to explain how the greenhouse effect works, income and spending in your bank account to stand in for the carbon cycle, what the wobbles in the Earth’s orbit would look like if the planet were your head, or conceptualizing the geologic timescale by compressing it to a day, for instance, all serve useful pedagogic roles. The crucial difference is that these mappings don’t come dripping with over-extended value judgements.

Another justification for the kind of analogy I’m objecting to is that it is simply for amusement: “Of course, I’m not really comparing my opponents to child molesters/food adulterers/mass-murderers – why can’t you take a joke?”. However, if you need to point out to someone that a joke (for adults at least) needs to have more substance than just calling someone a poopyhead, it is probably not worth the bother.

It would be nice to have a moratorium on all such analogical arguments, though obviously that is unlikely to happen. The comment thread here can assess this issue directly, but most such arguments on other threads are ruthlessly condemned to the bore-hole (where indeed many of them already co-exist). But perhaps we can put some pressure on users of these fallacies by pointing to this post and then refusing to engage further until someone actually has something substantive to offer. It may be pointless, but we can at least try.

Author: "gavin" Tags: "Climate Science, Communicating Climate"
Date: Tuesday, 02 Sep 2014 11:58

This month’s open thread. People could waste time rebunking predictable cherry-picked claims about the upcoming Arctic sea ice minimum, or perhaps discuss a selection of 10 climate change controversies from ICSU… Anything! (except mitigation).

Author: "group" Tags: "Climate Science, Open thread"
Date: Wednesday, 27 Aug 2014 13:45

I have written a number of times about the procedure used to attribute recent climate change (here in 2010, in 2012 (about the AR4 statement), and again in 2013 after AR5 was released). For people who want a summary of what the attribution problem is, how we think about the human contributions and why the IPCC reaches the conclusions it does, read those posts instead of this one.

The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is anthropogenic.



The probability density function for the fraction of warming attributable to human activity (derived from Fig. 10.5 in IPCC AR5). The bulk of the probability is far to the right of the “50%” line, and the peak is around 110%.

If you are still here, I should be clear that this post is focused on a specific claim Judith Curry has recently blogged about supporting a “50-50” attribution (i.e. that trends since the middle of the 20th Century are 50% human-caused, and 50% natural, a position that would center her pdf at 0.5 in the figure above). She also commented about her puzzlement as to why other scientists don’t agree with her. Reading over her arguments in detail, I find very little to recommend them, and perhaps the reasoning for this will be interesting for readers. So, here follows a line-by-line commentary on her recent post. Please excuse the length.

Starting from the top… (note, quotes from Judith Curry’s blog are blockquoted).

Pick one:

a) Warming since 1950 is predominantly (more than 50%) caused by humans.

b) Warming since 1950 is predominantly caused by natural processes.

When faced with a choice between a) and b), I respond: ‘I can’t choose, since I think the most likely split between natural and anthropogenic causes to recent global warming is about 50-50’. Gavin thinks I’m ‘making things up’, so I promised yet another post on this topic.

This is not a good start. The statements that ended up in the IPCC SPMs are descriptions of what was found in the main chapters and in the papers they were assessing, not questions that were independently thought about and then answered. Thus while this dichotomy might represent Judith’s problem right now, it has nothing to do with what IPCC concluded. In addition, in framing this as a binary choice, it gives implicit (but invalid) support to the idea that each choice is equally likely. That this is invalid reasoning should be obvious by simply replacing 50% with any other value and noting that the half/half argument could be made independent of any data.

For background and context, see my previous 4 part series Overconfidence in the IPCC’s detection and attribution.

Framing

The IPCC’s AR5 attribution statement:

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. The best estimate of the human induced contribution to warming is similar to the observed warming over this period.

I’ve remarked on the ‘most’ (previous incarnation of ‘more than half’, equivalent in meaning) in my Uncertainty Monster paper:

Further, the attribution statement itself is at best imprecise and at worst ambiguous: what does “most” mean – 51% or 99%?

Whether it is 51% or 99% would seem to make a rather big difference regarding the policy response. It’s time for climate scientists to refine this range.

I am arguing here that the ‘choice’ regarding attribution shouldn’t be binary, and there should not be a break at 50%; rather we should consider the following terciles for the net anthropogenic contribution to warming since 1950:

  • >66%
  • 33-66%
  • <33%

JC note: I removed the bounds at 100% and 0% as per a comment from Bart Verheggen.

Hence 50-50 refers to the tercile 33-66% (as the midpoint)

Here Judith makes the same mistake that I commented on in my 2012 post – assuming that a statement about where the bulk of the pdf lies is a statement about where its mean is and that it must be cut off at some value (whether it is 99% or 100%). Neither of those things follow. I will gloss over the completely unnecessary confusion of the meaning of the word ‘most’ (again thoroughly discussed in 2012). I will also not get into policy implications since the question itself is purely a scientific one.

The division into terciles for the analysis is not a problem though, and the weight of the pdf in each tercile can easily be calculated. Translating the top figure, the likelihood of the attribution of the 1950+ trend to anthropogenic forcings falling in each tercile is 2×10⁻⁴%, 0.4% and 99.5% respectively.
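For readers who want to check numbers like these themselves, here is a minimal sketch (Python, using scipy). It approximates the attribution pdf in the top figure as a Gaussian with a best estimate of ~110% and a 5-95% range of roughly 80–130% (the values read off Figure 10.5 further down). Because the real pdf is not exactly Gaussian, the tercile weights come out close to, but not identical with, the ones quoted above.

# Tercile weights from a Gaussian approximation to the attribution pdf
# (best estimate ~110%, 5-95% range ~80-130%, read off Fig. 10.5 below).
# The quoted values come from the actual, non-Gaussian pdf, so expect
# small differences.
from scipy.stats import norm

mean = 1.10                          # best-estimate attributable fraction
sigma = (1.30 - 0.80) / 2 / 1.645    # half of the 5-95% range over z(0.95)
pdf = norm(loc=mean, scale=sigma)

p_low = pdf.cdf(0.33)                  # < 33% tercile
p_mid = pdf.cdf(0.66) - pdf.cdf(0.33)  # 33-66% tercile
p_high = 1.0 - pdf.cdf(0.66)           # > 66% tercile
print(f"<33%: {p_low:.1e}   33-66%: {p_mid:.2%}   >66%: {p_high:.2%}")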

Note: I am referring only to a period of overall warming, so by definition the cooling argument is eliminated. Further, I am referring to the NET anthropogenic effect (greenhouse gases + aerosols + etc). I am looking to compare the relative magnitudes of net anthropogenic contribution with net natural contributions.

The two IPCC statements discussed attribution to greenhouse gases (in AR4) and to all anthropogenic forcings (in AR5) (the subtleties involved there are discussed in the 2013 post). I don’t know what she refers to as the ‘cooling argument’, since it is clear that the temperatures have indeed warmed since 1950 (the period referred to in the IPCC statements). It is worth pointing out that there can be no assumption that natural contributions must be positive – indeed for any random time period of any length, one would expect natural contributions to be cooling half the time.

Further, by global warming I refer explicitly to the historical record of global average surface temperatures. Other data sets such as ocean heat content, sea ice extent, whatever, are not sufficiently mature or long-range (see Climate data records: maturity matrix). Further, the surface temperature is most relevant to climate change impacts, since humans and land ecosystems live on the surface. I acknowledge that temperature variations can vary over the earth’s surface, and that heat can be stored/released by vertical processes in the atmosphere and ocean. But the key issue of societal relevance (not to mention the focus of IPCC detection and attribution arguments) is the realization of this heat on the Earth’s surface.

Fine with this.

IPCC

Before getting into my 50-50 argument, a brief review of the IPCC perspective on detection and attribution. For detection, see my post Overconfidence in IPCC’s detection and attribution. Part I.

Let me clarify the distinction between detection and attribution, as used by the IPCC. Detection refers to change above and beyond natural internal variability. Once a change is detected, attribution attempts to identify external drivers of the change.

The reasoning process used by the IPCC in assessing confidence in its attribution statement is described by this statement from the AR4:

“The approaches used in detection and attribution research described above cannot fully account for all uncertainties, and thus ultimately expert judgement is required to give a calibrated assessment of whether a specific cause is responsible for a given climate change. The assessment approach used in this chapter is to consider results from multiple studies using a variety of observational data sets, models, forcings and analysis techniques. The assessment based on these results typically takes into account the number of studies, the extent to which there is consensus among studies on the significance of detection results, the extent to which there is consensus on the consistency between the observed change and the change expected from forcing, the degree of consistency with other types of evidence, the extent to which known uncertainties are accounted for in and between studies, and whether there might be other physically plausible explanations for the given climate change. Having determined a particular likelihood assessment, this was then further downweighted to take into account any remaining uncertainties, such as, for example, structural uncertainties or a limited exploration of possible forcing histories of uncertain forcings. The overall assessment also considers whether several independent lines of evidence strengthen a result.” (IPCC AR4)

I won’t make a judgment here as to how ‘expert judgment’ and subjective ‘down weighting’ is different from ‘making things up’

Is expert judgement about the structural uncertainties in a statistical procedure associated with various assumptions that need to be made different from ‘making things up’? Actually, yes – it is.

AR5 Chapter 10 has a more extensive discussion on the philosophy and methodology of detection and attribution, but the general idea has not really changed from AR4.

In my previous post (related to the AR4), I asked the question: what was the original likelihood assessment from which this apparently minimal downweighting occurred? The AR5 provides an answer:

The best estimate of the human induced contribution to warming is similar to the observed warming over this period.

So, I interpret this as saying that the IPCC’s best estimate is that 100% of the warming since 1950 is attributable to humans, and they then down weight this to ‘more than half’ to account for various uncertainties. And then assign an ‘extremely likely’ confidence level to all this.

Making things up, anyone?

This is very confused. The basis of the AR5 calculation is summarised in figure 10.5:


Figure 10.5 IPCC AR5

The best estimate of the warming due to anthropogenic forcings (ANT) is the orange bar (noting the 1𝛔 uncertainties). Reading off the graph, it is 0.7±0.2ºC (5-95%) with the observed warming 0.65±0.06 (5-95%). The attribution then follows as having a mean of ~110%, with a 5-95% range of 80–130%. This easily justifies the IPCC claims of having a mean near 100%, and a very low likelihood of the attribution being less than 50% (p < 0.0001!). Note there is no ‘downweighting’ of any argument here – both statements are true given the numerical distribution. However, there must be some expert judgement to assess what potential structural errors might exist in the procedure. For instance, the assumption that fingerprint patterns are linearly additive, or uncertainties in the pattern because of deficiencies in the forcings or models etc. In the absence of any reason to think that the attribution procedure is biased (and Judith offers none), structural uncertainties will only serve to expand the spread. Note that one would need to expand the uncertainties by a factor of 3 in both directions to contradict the first part of the IPCC statement. That seems unlikely in the absence of any demonstration of some huge missing factors.

I’ve just reread Overconfidence in IPCC’s detection and attribution. Part IV, I recommend that anyone who seriously wants to understand this should read this previous post. It explains why I think the AR5 detection and attribution reasoning is flawed.

Of particular relevance to the 50-50 argument, the IPCC has failed to convincingly demonstrate ‘detection.’ Because historical records aren’t long enough and paleo reconstructions are not reliable, the climate models ‘detect’ AGW by comparing natural forcing simulations with anthropogenically forced simulations. When the spectra of the variability of the unforced simulations are compared with the observed spectra of variability, the AR4 simulations show insufficient variability at 40-100 yrs, whereas AR5 simulations show reasonable variability. The IPCC then regards the divergence between unforced and anthropogenically forced simulations after ~1980 as the heart of their detection and attribution argument. See Figure 10.1 from AR5 WGI: (a) is with natural and anthropogenic forcing; (b) is without anthropogenic forcing:

[Figure 10.1, IPCC AR5 WGI]

This is also confused. “Detection” is (like attribution) a model-based exercise, starting from the idea that one can estimate the result of a counterfactual: what would the temperature have done in the absence of the drivers compared to what it would do if they were included? GCM results show clearly that the expected anthropogenic signal would start to be detectable (“come out of the noise”) sometime after 1980 (for reference, Hansen’s public statement to that effect was in 1988). There is no obvious discrepancy in spectra between the CMIP5 models and the observations, and so I am unclear why Judith finds the detection step lacking. It is interesting to note that given the variability in the models, the anthropogenic signal is now more than 5𝛔 over what would have been expected naturally (and if it’s good enough for the Higgs Boson….).

Note in particular that the models fail to simulate the observed warming between 1910 and 1940.

Here Judith is (I think) referring to the mismatch between the ensemble mean (red) and the observations (black) in that period. But the red line is simply an estimate of the forced trends, so the correct reading of the graph would be that the models do not support an argument suggesting that all of the 1910-1940 excursion is forced (contingent on the forcing datasets that were used), which is what was stated in AR5. However, the observations are well within the spread of the models and so could easily be within the range of the forced trend + simulated internal variability. A quick analysis (a proper attribution study is more involved than this) gives an observed trend over 1910-1940 as 0.13 to 0.15ºC/decade (depending on the dataset, with ±0.03ºC (5-95%) uncertainty in the OLS), while the spread in my collation of the historical CMIP5 models is 0.07±0.07ºC/decade (5-95%). Specifically, 8 model runs out of 131 have trends over that period greater than 0.13ºC/decade – suggesting that one might see this magnitude of excursion 5-10% of the time. For reference, the GHG related trend in the GISS models over that period is about 0.06ºC/decade. However, the uncertainties in the forcings for that period are larger than in recent decades (in particular for the solar and aerosol-related emissions) and so the forced trend (0.07ºC/decade) could have been different in reality. And since we don’t have good ocean heat content data, nor any satellite observations, or any measurements of stratospheric temperatures to help distinguish potential errors in the forcing from internal variability, it is inevitable that there will be more uncertainty in the attribution for that period than for more recent decades.
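For anyone who wants to redo this sort of quick check, the sketch below shows the two ingredients: an OLS trend with a 5-95% interval for an observed series, and the fraction of an ensemble of model trends exceeding the observed value. The arrays are placeholders chosen to have roughly the statistics quoted above, not the actual HadCRUT or CMIP5 numbers.

# Sketch of the quick 1910-1940 comparison above: an OLS trend with a 5-95%
# interval for the observations, plus the fraction of ensemble members whose
# historical trend exceeds it. The arrays are placeholders, not the real data.
import numpy as np

def ols_trend(years, temps):
    """Return the OLS trend (deg C/decade) and its 5-95% half-width."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    slope, intercept = np.polyfit(years, temps, 1)
    resid = temps - (slope * years + intercept)
    se = np.sqrt((resid ** 2).sum() / (len(years) - 2)
                 / ((years - years.mean()) ** 2).sum())
    return 10 * slope, 10 * 1.645 * se

rng = np.random.default_rng(1)
years = np.arange(1910, 1941)
obs = 0.014 * (years - 1910) + rng.normal(0, 0.08, years.size)  # placeholder obs

trend, ci = ols_trend(years, obs)
print(f"observed trend: {trend:.2f} ± {ci:.2f} ºC/decade (5-95%)")

# Placeholder ensemble: 131 model trends with mean 0.07 and 5-95% spread ±0.07.
model_trends = rng.normal(0.07, 0.07 / 1.645, 131)
print("runs exceeding 0.13 ºC/decade:", (model_trends > 0.13).sum(), "of 131")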

The glaring flaw in their logic is this. If you are trying to attribute warming over a short period, e.g. since 1980, detection requires that you explicitly consider the phasing of multidecadal natural internal variability during that period (e.g. AMO, PDO), not just the spectra over a long time period. Attribution arguments of late 20th century warming have failed to pass the detection threshold which requires accounting for the phasing of the AMO and PDO. It is typically argued that these oscillations go up and down, in net they are a wash. Maybe, but they are NOT a wash when you are considering a period of the order, or shorter than, the multidecadal time scales associated with these oscillations.

Watch the pea under the thimble here. The IPCC statements were from a relatively long period (i.e. 1950 to 2005/2010). Judith jumps to assessing shorter trends (i.e. from 1980) and shorter periods obviously have the potential to have a higher component of internal variability. The whole point about looking at longer periods is that internal oscillations have a smaller contribution. Since she is arguing that the AMO/PDO have potentially multi-decadal periods, then she should be supportive of using multi-decadal periods (i.e. 50, 60 years or more) for the attribution.

Further, in the presence of multidecadal oscillations with a nominal 60-80 yr time scale, convincing attribution requires that you can attribute the variability for more than one 60-80 yr period, preferably back to the mid 19th century. Not being able to address the attribution of change in the early 20th century to my mind precludes any highly confident attribution of change in the late 20th century.

This isn’t quite right. Our expectation (from basic theory and models) is that the second half of the 20th C is when anthropogenic effects really took off. Restricting attribution to 120-160 yr trends seems too constraining – though there is no problem in looking at that too. However, Judith is actually assuming what remains to be determined. What is the evidence that all 60-80yr variability is natural? Variations in forcings (in particular aerosols, and maybe solar) can easily project onto this timescale and so any separation of forced vs. internal variability is really difficult based on statistical arguments alone (see also Mann et al, 2014). Indeed, it is the attribution exercise that helps you conclude what the magnitude of any internal oscillations might be. Note that if we were only looking at the global mean temperature, there would be quite a lot of wiggle room for different contributions. Looking deeper into different variables and spatial patterns is what allows for a more precise result.

The 50-50 argument

There are multiple lines of evidence supporting the 50-50 (middle tercile) attribution argument. Here are the major ones, to my mind.

Sensitivity

The 100% anthropogenic attribution from climate models is derived from climate models that have an average equilibrium climate sensitivity (ECS) around 3C. One of the major findings from AR5 WG1 was the divergence in ECS determined via climate models versus observations. This divergence led the AR5 to lower the likely bound on ECS to 1.5C (with ECS very unlikely to be below 1C).

Judith’s argument misstates how forcing fingerprints from GCMs are used in attribution studies. Notably, they are scaled to get the best fit to the observations (along with the other terms). If the models all had sensitivities of either 1ºC or 6ºC, the attribution to anthropogenic changes would be the same as long as the pattern of change was robust. What would change would be the scaling – less than one would imply a better fit with a lower sensitivity (or smaller forcing), and vice versa (see figure 10.4).

She also misstates how ECS is constrained – all constraints come from observations (whether from long-term paleo-climate observations, transient observations over the 20th Century or observations of emergent properties that correlate to sensitivity) combined with some sort of model. The divergence in AR5 was between constraints based on the transient observations using simplified energy balance models (EBM), and everything else. Subsequent work (for instance by Drew Shindell) has shown that the simplified EBMs are missing important transient effects associated with aerosols, and so the divergence is very likely less than AR5 assessed.
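To make the scaling point concrete, here is a toy sketch of the regression step. It is not the optimal-fingerprinting machinery used in the actual studies (which works on spatio-temporal patterns and weights by an internal-variability covariance estimated from control runs); the series below are made-up placeholders.

# Toy version of fingerprint scaling: regress the observed series onto model
# anthropogenic (ANT) and natural (NAT) fingerprints with ordinary least squares.
# Real studies use spatio-temporal patterns and a noise covariance from control
# runs; the series here are synthetic placeholders.
import numpy as np

def scale_fingerprints(obs, ant, nat):
    """Return (beta_ant, beta_nat) such that obs ~ beta_ant*ant + beta_nat*nat."""
    X = np.column_stack([ant, nat])
    beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
    return beta

rng = np.random.default_rng(2)
t = np.arange(1951, 2011)
ant = 0.012 * (t - t[0])                             # model ANT pattern (deg C)
nat = 0.05 * np.sin(2 * np.pi * (t - t[0]) / 11.0)   # stand-in natural signal
obs = 1.1 * ant + 0.8 * nat + rng.normal(0, 0.08, t.size)  # synthetic "obs"

beta_ant, beta_nat = scale_fingerprints(obs, ant, nat)
print(f"scaling factors: ANT {beta_ant:.2f}, NAT {beta_nat:.2f}")

The key point is that the attributed anthropogenic warming is the scaling factor times the model’s anthropogenic trend: a scaling below one implies the model pattern is too strong, above one too weak, so the attribution itself does not hinge on the model’s sensitivity being exactly right.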

Nic Lewis at Climate Dialogue summarizes the observational evidence for ECS between 1.5 and 2C, with transient climate response (TCR) around 1.3C.

Nic Lewis has a comment at BishopHill on this:

The press release for the new study states: “Rapid warming in the last two and a half decades of the 20th century, they proposed in an earlier study, was roughly half due to global warming and half to the natural Atlantic Ocean cycle that kept more heat near the surface.” If only half the warming over 1976-2000 (linear trend 0.18°C/decade) was indeed anthropogenic, and the IPCC AR5 best estimate of the change in anthropogenic forcing over that period (linear trend 0.33 W m⁻²/decade) is accurate, then the transient climate response (TCR) would be little over 1°C. That is probably going too far, but the 1.3-1.4°C estimate in my and Marcel Crok’s report A Sensitive Matter is certainly supported by Chen and Tung’s findings.

Since the CMIP5 models used by the IPCC on average adequately reproduce observed global warming in the last two and a half decades of the 20th century without any contribution from multidecadal ocean variability, it follows that those models (whose mean TCR is slightly over 1.8°C) must be substantially too sensitive.

BTW, the longer term anthropogenic warming trends (50, 75 and 100 year) to 2011, after removing the solar, ENSO, volcanic and AMO signals given in Fig. 5 B of Tung’s earlier study (freely accessible via the link), of respectively 0.083, 0.078 and 0.068°C/decade also support low TCR values (varying from 0.91°C to 1.37°C), upon dividing by the linear trends exhibited by the IPCC AR5 best estimate time series for anthropogenic forcing. My own work gives TCR estimates towards the upper end of that range, still far below the average for CMIP5 models.

If true climate sensitivity is only 50-65% of the magnitude that is being simulated by climate models, then it is not unreasonable to infer that attribution of late 20th century warming is not 100% caused by anthropogenic factors, and attribution to anthropogenic forcing is in the middle tercile (50-50).

The IPCC’s attribution statement does not seem logically consistent with the uncertainty in climate sensitivity.

This is related to a paper by Tung and Zhou (2013). Note that the attribution statement has again shifted to the last 25 years of the 20th Century (1976-2000). There are a couple of major problems with this argument, though. First of all, Tung and Zhou assumed that all multi-decadal variability was associated with the Atlantic Multi-decadal Oscillation (AMO) and did not assess whether anthropogenic forcings could project onto this variability. It is circular reasoning to then use this paper to conclude that all multi-decadal variability is associated with the AMO.

The second problem is more serious. Lewis’ argument up until now has been that the best fit to the transient evolution over the 20th Century is with a relatively small sensitivity and small aerosol forcing (as opposed to a larger sensitivity and larger opposing aerosol forcing). However, in both these cases the attribution of the long-term trend to the combined anthropogenic effects is actually the same (near 100%). Indeed, one valid criticism of the recent papers on transient constraints is precisely that the simple models used do not have sufficient decadal variability!
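As an aside, the arithmetic behind the TCR numbers in the Lewis quote above is easy to reproduce. This is just the back-of-envelope relation TCR ≈ F2×CO2 × ΔT_anthro/ΔF_anthro with an assumed 2×CO2 forcing of ~3.7 W m⁻², shown for transparency rather than as an endorsement of the premises.

# Back-of-envelope version of the TCR arithmetic in the quoted passage:
# TCR ~ F_2xCO2 * (anthropogenic temperature trend) / (anthropogenic forcing trend).
# F_2xCO2 ~ 3.7 W/m^2 is a conventional assumed value; attributing only half of
# the 1976-2000 trend to anthropogenic forcing is the quoted premise, not a result.
F_2XCO2 = 3.7          # W/m^2 per CO2 doubling (conventional value)

dT_total = 0.18        # observed trend 1976-2000, deg C/decade (quoted)
anthro_fraction = 0.5  # the premise being examined
dF_anthro = 0.33       # anthropogenic forcing trend, W/m^2/decade (quoted)

tcr = F_2XCO2 * (anthro_fraction * dT_total) / dF_anthro
print(f"implied TCR: {tcr:.2f} deg C")   # comes out a little over 1 deg C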

Climate variability since 1900

From HadCRUT4:

[HadCRUT4 global mean surface temperature record since 1900]

The IPCC does not have a convincing explanation for:

  • warming from 1910-1940
  • cooling from 1940-1975
  • hiatus from 1998 to present

The IPCC purports to have a highly confident explanation for the warming since 1950, but it was only during the period 1976-2000 when the global surface temperatures actually increased.

The absence of convincing attribution of periods other than 1976-present to anthropogenic forcing leaves natural climate variability as the cause – some combination of solar (including solar indirect effects), uncertain volcanic forcing, natural internal (intrinsic variability) and possible unknown unknowns.

This point is not an argument for any particular attribution level. As is well known, using an argument of total ignorance to assume that the choice between two arbitrary alternatives must be 50/50 is a fallacy.

Attribution for any particular period follows exactly the same methodology as any other. What IPCC chooses to highlight is of course up to the authors, but there is nothing preventing an assessment of any of these periods. In general, the shorter the time period, the greater potential for internal variability, or (equivalently) the larger the forced signal needs to be in order to be detected. For instance, Pinatubo was a big rapid signal so that was detectable even in just a few years of data.

I gave a basic attribution for the 1910-1940 period above. The 1940-1975 average trend in the CMIP5 ensemble is -0.01ºC/decade (range -0.2 to 0.1ºC/decade), compared to -0.003 to -0.03ºC/decade in the observations, and is therefore a reasonable fit. The GHG driven trends for this period are ~0.1ºC/decade, implying that there is a roughly opposite forcing coming from aerosols and volcanoes in the ensemble. The situation post-1998 is a little different because of the CMIP5 design, and ongoing reevaluations of recent forcings (Schmidt et al, 2014; Huber and Knutti, 2014). Better information about ocean heat content is also available to help there, but this is still a work in progress and is a great example of why it is harder to attribute changes over small time periods.

In the GCMs, the importance of internal variability to the trend decreases as a function of time. For 30 year trends, internal variations can have a ±0.12ºC/decade or so impact on trends, for 60 year trends, closer to ±0.08ºC/decade. For an expected anthropogenic trend of around 0.2ºC/decade, the signal will be clearer over the longer term. Thus cutting down the period to ever-shorter periods of years increases the challenges and one can end up simply cherry picking the noise instead of seeing the signal.
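The dependence of this internal-variability “noise” on trend length is easy to illustrate with synthetic data. The sketch below uses AR(1) noise with arbitrary parameters purely for illustration – it is not calibrated to the GCMs quoted above – but it shows the qualitative point that the spread of trends due to unforced variability shrinks as the window lengthens, while a forced signal of ~0.2ºC/decade does not.

# Illustration with synthetic data: the spread of trends produced by unforced
# (internal) variability shrinks as the trend window lengthens. The AR(1)
# parameters are arbitrary choices for illustration only.
import numpy as np

def trend_spread(window_years, n_samples=2000, phi=0.6, sd=0.1, seed=3):
    """5-95% half-width (deg C/decade) of OLS trends of pure AR(1) noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(window_years)
    trends = np.empty(n_samples)
    for i in range(n_samples):
        noise = np.empty(window_years)
        noise[0] = rng.normal(0, sd)
        for j in range(1, window_years):
            noise[j] = phi * noise[j - 1] + rng.normal(0, sd)
        trends[i] = 10 * np.polyfit(t, noise, 1)[0]   # deg C/decade
    return 1.645 * trends.std()

for window in (30, 60):
    print(f"{window}-yr windows: trend spread from internal variability "
          f"~ ±{trend_spread(window):.2f} ºC/decade")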

A key issue in attribution studies is to provide an answer to the question: When did anthropogenic global warming begin? As per the IPCC’s own analyses, significant warming didn’t begin until 1950. Just the Facts has a good post on this When did anthropogenic global warming begin?

I disagree as to whether this is a “key” issue for attribution studies, but as to when anthropogenic warming began, the answer is actually quite simple – when we started altering the atmosphere and land surface at climatically relevant scales. For the CO2 increase from deforestation this goes back millennia, for fossil fuel CO2, since the invention of the steam engine at least. In both cases there was a big uptick in the 18th Century. Perhaps that isn’t what Judith is getting at though. If she means when was it easily detectable, I discussed that above and the answer is sometime in the early 1980s.

The temperature record since 1900 is often characterized as a staircase, with periods of warming sequentially followed by periods of stasis/cooling. The stadium wave and Chen and Tung papers, among others, are consistent with the idea that the multidecadal oscillations, when superimposed on an overall warming trend, can account for the overall staircase pattern.

Nobody has any problems with the idea that multi-decadal internal variability might be important. The problem with many studies on this topic is the assumption that all multi-decadal variability is internal. This is very much an open question.

Let’s consider the 21st century hiatus. The continued forcing from CO2 over this period is substantial, not to mention ‘warming in the pipeline’ from late 20th century increase in CO2. To counter the expected warming from current forcing and the pipeline requires natural variability to effectively be of the same magnitude as the anthropogenic forcing. This is the rationale that Tung used to justify his 50-50 attribution (see also Tung and Zhou). The natural variability contribution may not be solely due to internal/intrinsic variability, and there is much speculation related to solar activity. There are also arguments related to aerosol forcing, which I personally find unconvincing (the topic of a future post).

Shorter time-periods are noisier. There are more possible influences of an appropriate magnitude and, for the recent period, continued (and very frustrating) uncertainties in aerosol effects. This has very little to do with the attribution for longer-time periods though (since change of forcing is much larger and impacts of internal variability smaller).

The IPCC notes overall warming since 1880. In particular, the period 1910-1940 is a period of warming that is comparable in duration and magnitude to the warming 1976-2000. Any anthropogenic forcing of that warming is very small (see Figure 10.1 above). The timing of the early 20th century warming is consistent with the AMO/PDO (e.g. the stadium wave; also noted by Tung and Zhou). The big unanswered question is: Why is the period 1940-1970 significantly warmer than say 1880-1910? Is it the sun? Is it a longer period ocean oscillation? Could the same processes causing the early 20th century warming be contributing to the late 20th century warming?

If we were just looking at 30 year periods in isolation, it’s inevitable that there will be these ambiguities because data quality degrades quickly back in time. But that is exactly why IPCC looks at longer periods.

Not only don’t we know the answer to these questions, but no one even seems to be asking them!

This is simply not true.

Attribution

I am arguing that climate models are not fit for the purpose of detection and attribution of climate change on decadal to multidecadal timescales. Figure 10.1 speaks for itself in this regard (see figure 11.25 for a zoom in on the recent hiatus). By ‘fit for purpose’, I am prepared to settle for getting an answer that falls in the right tercile.

Given the results above it would require a huge source of error to move the bulk of that probability anywhere other than the right tercile.

The main relevant deficiencies of climate models are:

  • climate sensitivity that appears to be too high, probably associated with problems in the fast thermodynamic feedbacks (water vapor, lapse rate, clouds)
  • failure to simulate the correct network of multidecadal oscillations and their correct phasing
  • substantial uncertainties in aerosol indirect effects
  • unknown and uncertain solar indirect effects

The sensitivity argument is irrelevant (given that it isn’t zero of course). Simulation of the exact phasing of multi-decadal internal oscillations in a free-running GCM is impossible so that is a tough bar to reach! There are indeed uncertainties in aerosol forcing (not just the indirect effects) and, especially in the earlier part of the 20th Century, uncertainties in solar trends and impacts. Indeed, there is even uncertainty in volcanic forcing. However, none of these issues really affect the attribution argument because a) differences in magnitude of forcing over time are assessed by way of the scales in the attribution process, and b) errors in the spatial pattern will end up in the residuals, which are not large enough to change the overall assessment.

Nonetheless, it is worth thinking about what effect plausible variations in the aerosol or solar forcings could have. Given that we are talking about the net anthropogenic effect, the playing off of negative aerosol forcing and climate sensitivity within bounds actually has very little effect on the attribution, so that isn’t particularly relevant. A much bigger role for solar would have an impact, but the trend would need to be about 5 times stronger over the relevant period to change the IPCC statement and I am not aware of any evidence to support this (and much that doesn’t).

So, how to sort this out and do a more realistic job of detecting climate change and attributing it to natural variability versus anthropogenic forcing? Observationally based methods and simple models have been underutilized in this regard. Of great importance is to consider uncertainties in external forcing in context of attribution uncertainties.

It is inconsistent to talk in one breath about the importance of aerosol indirect effects and solar indirect effects and then state that ‘simple models’ are going to do the trick. Both of these issues relate to microphysical effects and atmospheric chemistry – neither of which are accounted for in simple models.

The logic of reasoning about climate uncertainty, is not at all straightforward, as discussed in my paper Reasoning about climate uncertainty.

So, am I ‘making things up’? Seems to me that I am applying straightforward logic. Which IMO has been disturbingly absent in attribution arguments, that use climate models that aren’t fit for purpose, use circular reasoning in detection, fail to assess the impact of forcing uncertainties on the attribution, and are heavily spiced by expert judgment and subjective downweighting.

My reading of the evidence suggests clearly that the IPCC conclusions are an accurate assessment of the issue. I have tried to follow the proposed logic of Judith’s points here, but unfortunately each one of these arguments is either based on a misunderstanding, an unfamiliarity with what is actually being done or is a red herring associated with shorter-term variability. If Judith is interested in why her arguments are not convincing to others, perhaps this can give her some clues.

References

  1. M.E. Mann, B.A. Steinman, and S.K. Miller, "On forced temperature changes, internal variability, and the AMO", Geophysical Research Letters, vol. 41, pp. 3211-3219, 2014. http://dx.doi.org/10.1002/2014GL059233
  2. K. Tung, and J. Zhou, "Using data to attribute episodes of warming and cooling in instrumental records", Proceedings of the National Academy of Sciences, vol. 110, pp. 2058-2063, 2013. http://dx.doi.org/10.1073/pnas.1212471110
  3. G.A. Schmidt, D.T. Shindell, and K. Tsigaridis, "Reconciling warming trends", Nature Geosci, vol. 7, pp. 158-160, 2014. http://dx.doi.org/10.1038/ngeo2105
  4. M. Huber, and R. Knutti, "Natural variability, radiative forcing and climate response in the recent hiatus reconciled", Nature Geosci, vol. 7, pp. 651-656, 2014. http://dx.doi.org/10.1038/ngeo2228
Author: "gavin" Tags: "Climate modelling, Climate Science, Inst..."
Date: Thursday, 14 Aug 2014 00:03

Siberia has explosion holes in it that smell like methane, and there are newly found bubbles of methane in the Arctic Ocean. As a result, journalists are contacting me assuming that the Arctic Methane Apocalypse has begun. However, as a climate scientist I remain much more concerned about the fossil fuel industry than I am about Arctic methane. Short answer: It would take about 20,000,000 such eruptions within a few years to generate the standard Arctic Methane Apocalypse that people have been talking about. Here’s where that statement comes from:

How much methane emission is “a lot”? The yardstick here comes from Natalia Shakhova, an Arctic methane oceanographer and modeler at the University of Alaska Fairbanks. She proposed that 50 Gton of methane (a gigaton is 10¹⁵ grams) might erupt from the Arctic on a short time scale (Shakhova, 2010). Let’s call this a “Shakhova” event. There would be significant short-term climate disruption from a Shakhova event, with economic consequences explored by Whiteman et al (2013). The radiative forcing right after the release would be similar to that from fossil fuel CO2 by the end of the century, but subsiding quickly rather than continuing to grow as business-as-usual CO2 does.

I and others have been skeptical of the possibility that so much methane could escape from the Arctic so quickly, given the century to millennial time scale of warming the permafrost and ocean sediments, pointing out that if the carbon is released slowly, the climate impacts will be small. But now that explosion holes are being found in Siberia, the question is:

How much methane came out of that hole in Siberia? The hole is about 80 meters in diameter and 60-100 meters deep.

It’s hard to say exactly how much methane did this, because perhaps the crater allowed methane to be released from the surrounding soil. There may be emissions in the future from permafrost melting laterally from the sides of the hole. But for a start let’s assume that the volume of the hole is the same as the volume of the original, now escaped, bubble. Gases are compressible, so we need to know what its pressure was. The deeper in the Earth it was, the higher the pressure, but if we are concerned about gas whose release might be triggered by climate warming, we should look for pockets that come close to the surface. Deep pockets might take thousands of years for surface warming to reach. The mass of a solid cap ten meters thick would increase the pressure underneath it to about four atmospheres, plus there may have been some overpressure. Let’s assume a pressure of ten atmospheres (enough to hold up the atmosphere plus about 30 meters of rock).

If the bubble was pure methane, it would have contained about … wait for it … 0.000003 Gtons of methane. In other words, building a Shakhova event from these explosions would take approximately 20,000,000 explosions, all within a few years, or else the climate impact of the methane would be muted by the lifetime effect.
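Here is that back-of-envelope calculation written out (Python), with the assumptions stated above: an 80 m wide hole, a depth of ~70 m (within the reported 60-100 m range), gas at about ten atmospheres and near-freezing temperature, treated with the ideal gas law. It is only good to order of magnitude, which is all the argument needs.

# Back-of-envelope version of the calculation above: volume of the crater,
# treated as the volume of the escaped gas pocket at ~10 atm, converted to a
# methane mass with the ideal gas law. Depth and temperature are rough
# assumptions, so expect the answer only to order of magnitude.
import math

R = 8.314                 # J/(mol K)
M_CH4 = 16.04e-3          # kg/mol

diameter = 80.0           # m
depth = 70.0              # m (somewhere in the reported 60-100 m range)
volume = math.pi * (diameter / 2) ** 2 * depth   # m^3, roughly 3.5e5

pressure = 10 * 101325.0  # Pa (about ten atmospheres, as assumed above)
temperature = 273.0       # K (near-freezing permafrost)

moles = pressure * volume / (R * temperature)
mass_gt = moles * M_CH4 / 1e12           # Gt (1 Gt = 1e12 kg = 1e15 g)
print(f"methane in one hole: ~{mass_gt:.1e} Gt")     # a few millionths of a Gt
print(f"holes needed for a 50 Gt 'Shakhova' event: ~{50 / mass_gt:.1e}")  # ~2e7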

What about the bubbles of methane they just found in the Arctic ocean? There were reports this summer of a new expedition to the Siberian margin, documenting vast plumes of methane bubbles rising from sediments ~500 meters water depth.

It is certainly believable that warming ocean waters could trigger an increase in methane emissions to the atmosphere, and that the time scale for changing ocean temperatures can be fast due to circulation changes (we are seeing the same thing in the Antarctic). But the time scale for heat to diffuse into the sediment, where methane hydrate can be found, should be slow, like that for permafrost on land or slower. More importantly, the atmospheric methane flux from the Arctic Ocean is really small (extrapolating estimates from Kort et al 2012), even compared with emissions from the Arctic land surface, which is itself only a few percent of global emissions (dominated by human sources and tropical wetlands).

In conclusion, despite recent explosions suggesting the contrary, I still feel that the future of Earth’s climate in this century and beyond will be determined mostly by the fossil fuel industry, and not by Arctic methane. We should keep our eyes on the ball.

References

  1. N.E. Shakhova, V.A. Alekseev, and I.P. Semiletov, "Predicted methane emission on the East Siberian shelf", Dokl. Earth Sc., vol. 430, pp. 190-193, 2010. http://dx.doi.org/10.1134/S1028334X10020091
  2. G. Whiteman, C. Hope, and P. Wadhams, "Climate science: Vast costs of Arctic change", Nature, vol. 499, pp. 401-403, 2013. http://dx.doi.org/10.1038/499401a
  3. E.A. Kort, S.C. Wofsy, B.C. Daube, M. Diao, J.W. Elkins, R.S. Gao, E.J. Hintsa, D.F. Hurst, R. Jimenez, F.L. Moore, J.R. Spackman, and M.A. Zondlo, "Atmospheric observations of Arctic Ocean methane emissions up to 82° north", Nature Geosci, vol. 5, pp. 318-321, 2012. http://dx.doi.org/10.1038/NGEO1452
Author: "david" Tags: "Arctic and Antarctic, Carbon cycle, Clim..."
Date: Tuesday, 05 Aug 2014 13:58

This month’s open thread. Keeping track of the Arctic sea ice minimum is interesting but there should be plenty of other climate science topics to discuss (if people can get past the hype about the Ebola outbreak or imaginary claims about anomalous thrusting). As with last month, please no discussion of mitigation strategies – it unfortunately does not bring out the best in the commentariat.

Author: "group" Tags: "Climate Science, Open thread"
Date: Thursday, 10 Jul 2014 08:49

A new study by Screen and Simmonds demonstrates the statistical connection between high-amplitude planetary waves in the atmosphere and extreme weather events on the ground.

Guest post by Dim Coumou

There has been an ongoing debate, both in and outside the scientific community, whether rapid climate change in the Arctic might affect circulation patterns in the mid-latitudes, and thereby possibly the frequency or intensity of extreme weather events. The Arctic has been warming much faster than the rest of the globe (about twice the rate), associated with a rapid decline in sea-ice extent. If parts of the world warm faster than others then of course gradients in the horizontal temperature distribution will change – in this case the equator-to-pole gradient – which then could affect large scale wind patterns.

Several dynamical mechanisms for this have been proposed recently. Francis and Vavrus (GRL 2012) argued that a reduction of the north-south temperature gradient would cause weaker zonal winds (winds blowing west to east) and therefore a slower eastward propagation of Rossby waves. A change in Rossby wave propagation has not yet been detected (Barnes 2013) but this does not mean that it will not change in the future. Slowly-traveling waves (or quasi-stationary waves) would lead to more persistent and therefore more extreme weather. Petoukhov et al (2013) actually showed that several recent high-impact extremes, both heat waves and flooding events, were associated with high-amplitude quasi-stationary waves.

Intuitively it makes sense that slowly-propagating Rossby waves lead to more surface extremes. These waves form in the mid-latitudes at the boundary of cold air to the north and warm air to the south. Thus, with persistent strongly meandering isotherms, some regions will experience cold and others hot conditions. Moreover, slow wave propagation would prolong certain weather conditions and therefore lead to extremes on timescales of weeks: One day with temperatures over 30°C in say Western Europe is not really unusual, but 10 or 20 days in a row will be.

But although it intuitively makes sense, the link between high-amplitude Rossby waves and surface extremes was so far not properly documented in a statistical way. It is this piece of the puzzle which is addressed in the new paper by Screen and Simmonds recently published in Nature Climate Change (“Amplified mid-latitude planetary waves favour particular regional weather extremes”).

In a first step they extract the 40 most extreme months in the mid-latitudes for both temperature and precipitation in the 1979-2012 period, using all calendar months. They do this by averaging absolute values of temperature and precipitation anomalies, which is appropriate since planetary waves are likely to induce both negative and positive anomalies simultaneously in different regions. This way they determine the 40 most extreme months and also 40 moderate months, i.e., those months with the smallest absolute anomalies. By using monthly-averaged data, fast-traveling waves are filtered out and thus only the quasi-stationary component remains, i.e. the persistent weather conditions. Next they show that roughly half of the extreme months were associated with statistically significantly amplified waves. Vice versa, the moderate months were associated with reduced wave activity. So this nicely confirms statistically what one would expect.
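Schematically, the selection and comparison step works like the sketch below. This is not the authors' code; the inputs are random placeholders with roughly the right shapes (in the real analysis the anomalies are land-only, area-weighted values for 35°–60°N), so with the placeholder data both groups come out near zero rather than showing the amplification found with real data.

# Schematic sketch of the selection and comparison described above (not the
# authors' code). Inputs are random placeholders with roughly the right shapes.
import numpy as np

rng = np.random.default_rng(4)
n_months = 34 * 12                                    # 1979-2012
abs_anom = np.abs(rng.normal(0, 1, (n_months, 500)))  # |anomaly| per land box
wave_amp = rng.normal(0, 1, (n_months, 6))            # wave-3..8 amplitudes

# Rank months by the mid-latitude mean absolute anomaly (area weighting omitted).
monthly_index = abs_anom.mean(axis=1)
order = np.argsort(monthly_index)
extreme_months, moderate_months = order[-40:], order[:40]

# Normalized wave-amplitude anomalies; |z| > 1.64 would be significant at 90%.
z = (wave_amp - wave_amp.mean(axis=0)) / wave_amp.std(axis=0)
print("mean wave-amplitude z, extreme months :", z[extreme_months].mean().round(2))
print("mean wave-amplitude z, moderate months:", z[moderate_months].mean().round(2))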


Figure: a,b, Normalized monthly time series of mid-latitude (35°–60° N) mean land-based absolute temperature anomalies (a) and absolute precipitation anomalies (b), 1979–2012. The 40 months with the largest values are identified by circles and labelled on the lower x axis, and the green line shows the threshold value for extremes. c,d, Normalized wave amplitude anomalies, for wave numbers 3–8, during 40 months of mid-latitude mean temperature extremes (c) and precipitation extremes (d). The months are labelled on the abscissa in order of decreasing extremity from left to right. Grey shading masks anomalies that are not statistically significant at the 90% confidence level; specifically, anomalies with magnitude smaller than 1.64σ, the critical value of a Gaussian (normal) distribution for a two-tailed probability p = 0.1. Red shading indicates wave numbers that are significantly amplified compared to average and blue shading indicates wave numbers that are significantly attenuated compared to average. [Source: Screen and Simmonds, Nature Climate Change]

The most insightful part of the study is the regional analysis, whereby the same method is applied to 7 regions in the Northern Hemisphere mid-latitudes. It turns out that especially those regions at the western boundary of the continents (i.e., western North America and Europe) show the most significant association between surface extremes and planetary wave activity. Here, moderate temperatures tend to be particularly associated with reduced wave amplitudes, and extremes with increased wave amplitudes. Further eastwards this link becomes less significant, and in eastern Asia it even inverts: Here moderate temperatures are associated with amplified waves and extremes with reduced wave amplitudes. An explanation for this result is not discussed by the authors. Possibly, it could be explained by the fact that low wave amplitudes imply predominantly westerly flow. Such westerlies will bring moderate oceanic conditions to the western boundary regions, but will bring air from the continental interior towards East Asia.

Finally, the authors redo their analysis once more but now for each tail of the distribution individually. Thus, instead of using absolute anomalies, they treat cold, hot, dry and wet extremes separately. This way, they find that amplified quasi-stationary waves “increase probabilities of heat waves in western North America and central Asia, cold outbreaks in eastern North America, droughts in central North America, Europe and central Asia and wet spells in western Asia.” These results hint at a preferred position (i.e., “phase”) of quasi-stationary waves.

With their study, the authors highlight the importance of quasi-stationary waves in causing extreme surface weather. This is an important step forward, but of course many questions remain. Has planetary wave activity changed in recent decades or is it likely to do so under projected future warming? And, if it is changing, is the rapid Arctic warming indeed responsible?

 

Dim Coumou works as a senior scientist at the Potsdam Institute for Climate Impact Research, where he is leading a new research group which studies the links between large scale circulation and extreme weather.


References

  1. J.A. Francis, and S.J. Vavrus, "Evidence linking Arctic amplification to extreme weather in mid-latitudes", Geophysical Research Letters, vol. 39, 2012. http://dx.doi.org/10.1029/2012GL051000
  2. E.A. Barnes, "Revisiting the evidence linking Arctic amplification to extreme weather in midlatitudes", Geophysical Research Letters, vol. 40, pp. 4734-4739, 2013. http://dx.doi.org/10.1002/grl.50880
  3. V. Petoukhov, S. Rahmstorf, S. Petri, and H.J. Schellnhuber, "Quasiresonant amplification of planetary waves and recent Northern Hemisphere weather extremes", Proceedings of the National Academy of Sciences, vol. 110, pp. 5336-5341, 2013. http://dx.doi.org/10.1073/pnas.1222000110
  4. J.A. Screen, and I. Simmonds, "Amplified mid-latitude planetary waves favour particular regional weather extremes", Nature Climate Change, vol. 4, pp. 704-709, 2014. http://dx.doi.org/10.1038/NCLIMATE2271
Author: "stefan" Tags: "Arctic and Antarctic, Climate Science, I..."
Date: Sunday, 06 Jul 2014 14:05

Guest post by Jared Rennie, Cooperative Institute for Climate and Satellites, North Carolina on behalf of the databank working group of the International Surface Temperature Initiative

In the 21st Century, when multi-billion dollar decisions are being made to mitigate and adapt to climate change, society rightly expects openness and transparency in climate science to enable a greater understanding of how climate has changed and how it will continue to change. Arguably the very foundation of our understanding is the observational record. Today a new set of fundamental holdings of land surface air temperature records stretching back deep into the 19th Century has been released as a result of several years of effort by a multinational group of scientists.

The International Surface Temperature Initiative (ISTI) was launched by an international and multi-disciplinary group of scientists in 2010 to improve understanding of the Earth’s climate from the global to local scale. The Databank Working Group, under the leadership of NOAA’s National Climatic Data Center (NCDC), has produced an innovative data holding that largely leverages off existing data sources, but also incorporates many previously unavailable sources of surface air temperature. This data holding provides users a way to better track the origin of the data from its collection through its integration. By providing the data in various stages that lead to the integrated product, by including data origin tracking flags with information on each observation, and by providing the software used to process all observations, the processes involved in creating the observed fundamental climate record are completely open and transparent to the extent humanly possible.

Databank Architecture

[Figure 1: The six data Stages of the databank, from the original observations through to the quality controlled and bias corrected product]

The databank includes six data Stages, starting from the original observation to the final quality controlled and bias corrected product (Figure 1). The databank begins at Stage Zero holdings, which contain scanned images of digital observations in their original form. These images are hosted on the databank server when third party hosting is not possible. Stage One contains digitized data, in its native format, provided by the contributor. No effort is required on their part to convert the data into any other format. This reduces the possibility that errors could occur during translation. We collated over 50 sources ranging from single station records to holdings of several tens of thousands of stations.

Once data are submitted as Stage One, all data are converted into a common Stage Two format. In addition, data provenance flags are added to every observation to provide a history of that particular observation. Stage Two files are maintained in ASCII format, and the code to convert all the sources is provided. After collection and conversion to a common format, the data are then merged into a single, comprehensive Stage Three dataset. The algorithm that performs the merging is described below. Development of the merged dataset is followed by quality control and homogeneity adjustments (Stage Four and Five, respectively). These last two stages are not the responsibility of Databank Working Group, see the discussion of broader context below.

Merge Algorithm Description

The following is an overview of the process by which individual Stage Two sources are combined to form the comprehensive Stage Three dataset. A more detailed description can be found in a manuscript accepted for publication in Geoscience Data Journal (Rennie et al., 2014).

The algorithm attempts to mimic the decisions an expert analyst would make manually. Given the fractured nature of historical data stewardship, many sources will inevitably contain records for the same station, so it is necessary to create a process for identifying and removing duplicate stations, merging some sources to produce a longer station record, and, in other cases, determining when a station should be brought in as a new, distinct record.

The merge process is accomplished in an iterative fashion, starting from the highest priority data source (target) and running progressively through the other sources (candidates). A source hierarchy has been established which prioritizes datasets that have better data provenance, extensive metadata, and long, consistent periods of record. In addition it prioritizes holdings derived from daily data to allow consistency between daily holdings and monthly holdings. Every candidate station read in is compared to all target stations, and one of three possible decisions is made. First, when a station match is found, the candidate station is merged with the target station. Second, if the candidate station is determined to be unique it is added to the target dataset as a new station. Third, the available information is insufficient, conflicting, or ambiguous, and the candidate station is withheld.

Stations are first compared through their metadata to identify matching stations. Four tests are applied: geographic distance, height distance, station name similarity, and when the data record began. Non-missing metrics are then combined to create a metadata metric and it is determined whether to move on to data comparisons, or to withhold the candidate station. If a data comparison is deemed necessary, overlapping data between the target and candidate station is tested for goodness-of-fit using the Index of Agreement (IA). At least five years of overlap are required for a comparison to be made. A lookup table is used to provide two data metrics, the probability of station match (H1) and the probability of station uniqueness (H2). These are then combined with the metadata metric to create posterior metrics of station match and uniqueness. These are used to determine if the station is merged, added as unique, or withheld.
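As a rough illustration of this three-way decision, the sketch below mimics the logic in simplified form. The functions standing in for the metadata metric and the H1/H2 lookup probabilities, the way they are combined, and the thresholds are all placeholders rather than the values or lookup tables actually used in the databank merge.

def decide(candidate, target, metadata_metric, h1, h2, overlap_years,
           min_overlap=5, match_threshold=0.9, unique_threshold=0.9):
    """Return 'merge', 'unique' or 'withhold' for one candidate station."""
    m = metadata_metric(candidate, target)      # combined metadata metric
    if overlap_years >= min_overlap:
        p_match = m * h1(candidate, target)     # posterior metric of station match
        p_unique = m * h2(candidate, target)    # posterior metric of uniqueness
    else:
        p_match, p_unique = m, 1.0 - m          # fall back on metadata alone
    if p_match >= match_threshold:
        return "merge"      # same station: append candidate data to the target record
    if p_unique >= unique_threshold:
        return "unique"     # new station: add candidate to the target dataset
    return "withhold"       # insufficient, conflicting or ambiguous information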

Stage Three Dataset Description

[Figure 2]

The integrated data holding recommended and endorsed by ISTI contains over 32,000 global stations (Figure 2), over four times as many as GHCN-M version 3. Although station coverage varies spatially and temporally, there are adequate stations with decadal and century-long periods of record at local, regional, and global scales. Since 1850, there have consistently been more stations in the recommended merge than in GHCN-M (Figure 3). GHCN-M version 3 shows a significant drop in station numbers around 1990, reflecting its dependency on the decadal World Weather Records collection as a source; this drop is ameliorated in the new holding by many of the new sources, which can be updated much more rapidly and will enable better real-time monitoring.

[Figure 3]

Many thresholds are used in the merge and can be set by the user before running the merge program. Changing these thresholds can significantly alter the overall result of the program. Changes will also occur when the source priority hierarchy is altered. In order to characterize the uncertainty associated with the merge parameters, seven different variants of the Stage Three product were developed alongside the recommended merge. This uncertainty reflects the importance of data rescue. While a major effort has been undertaken through this initiative, more can be done to include areas that are lacking on both spatial and temporal scales, or lacking maximum and minimum temperature data.

Data Access

Version 1.0.0 of the Global Land Surface Databank has been released and data are provided from a primary ftp site hosted by the Global Observing Systems Information Center (GOSIC) and World Data Center A at NOAA NCDC. The Stage Three dataset has multiple formats, including a format approved by ISTI, a format similar to GHCN-M, and netCDF files adhering to the Climate and Forecast (CF) convention. The data holding is version controlled and will be updated frequently in response to newly discovered data sources and user comments.

All processing code is provided, for openness and transparency. Users are encouraged to experiment with the techniques used in these algorithms. The programs are designed to be modular, so that individuals have the option to develop and implement other methods that may be more robust than described here. We will remain open to releases of new versions should such techniques be constructed and verified.

ISTI’s online directory provides further details on the merging process and other aspects associated with the full development of the databank as well as all of the data and processing code.

We are always looking to increase the completeness and provenance of the holdings. Data submissions are always welcome and strongly encouraged. If you have a lead on a new data source, please contact data.submission@surfacetemperatures.org with any information which may be useful.

The broader context

It is important to stress that the databank is a release of fundamental data holdings – holdings which contain myriad non-climatic artefacts arising from instrument changes, siting changes, time of observation changes etc. To gain maximum value from these improved holdings it is imperative that as a global community we now analyze them in multiple distinct ways to ascertain better estimates of the true evolution of surface temperatures locally, regionally, and globally. Interested analysts are strongly encouraged to develop innovative approaches to the problem.

To help ascertain what works and what doesn’t, the benchmarking working group is developing, and will soon release, a set of analogs to the databank. These will share the space and time sampling of the holdings but contain a set of data issues, known only to the originators, that need to be removed. When analysts apply their methods to the analogs we can infer something meaningful about those methods. Further details are available in a discussion paper under peer review [Willett et al., submitted].

More Information

www.surfacetemperatures.org
ftp://ftp.ncdc.noaa.gov/pub/data/globaldatabank

References
Rennie, J.J. and coauthors, 2014, The International Surface Temperature Initiative Global Land Surface Databank: Monthly Temperature Data Version 1 Release Description and Methods. Accepted, Geoscience Data Journal.

Willett, K. M. et al., submitted, Concepts for benchmarking of homogenisation algorithm performance on the global scale. http://www.geosci-instrum-method-data-syst-discuss.net/4/235/2014/gid-4-235-2014.html

Author: "rasmus" Tags: "Climate Science, Instrumental Record"
Date: Wednesday, 02 Jul 2014 13:55

This month’s open thread. Topics of potential interest: The successful OCO-2 launch, continuing likelihood of an El Niño event this fall, predictions of the September Arctic sea ice minimum, Antarctic sea ice excursions, stochastic elements in climate models etc. Just for a change, no discussion of mitigation efforts please!

Author: "group" Tags: "Climate Science, Open thread"
Date: Sunday, 01 Jun 2014 23:35

June is the month when the Arctic Sea Ice outlook gets going, when the EPA releases its rules on power plant CO2 emissions, and when, hopefully, commenters can get back to actually having constructive and respectful conversations about climate science (and not nuclear energy, impending apocalypsi (pl) or how terrible everyone else is). Thanks.

Author: "group" Tags: "Climate Science, Open thread"
Date: Thursday, 08 May 2014 13:39

Guest commentary from Michelle L’Heureux, NOAA Climate Prediction Center

Much media attention has been directed at the possibility of an El Niño brewing this year. Many outlets have drawn comparison with the 1997-98 super El Niño. So, what are the odds that El Niño will occur? And if it does, how strong will it be?

To track El Niño, meteorologists at the NOAA/NWS Climate Prediction Center (CPC) release weekly and monthly updates on the status of the El Niño-Southern Oscillation (ENSO). The International Research Institute (IRI) for Climate and Society partners with us on the monthly ENSO release and is also a collaborator on a brand new “ENSO blog” which is part of www.climate.gov (co-sponsored by the NOAA Climate Programs Office).

Blogging ENSO is a first for operational ENSO forecasters, and we hope that it gives us another way to both inform and interact with our users on ENSO predictions and impacts. In addition, we will collaborate with other scientists to profile interesting ENSO research and delve into the societal dimensions of ENSO.

As far back as November 2013, the CPC and the IRI have predicted an elevated chance of El Niño (relative to historical chance or climatology) based on a combination of model predictions and general trends over the tropical Pacific Ocean. Once the chance of El Niño reached 50% in March 2014, an El Niño Watch was issued to alert the public that conditions were more favorable for the development of El Niño.
Current forecasts for the Nino-3.4 SST index (as of 5 May 2014) from the NCEP Climate Forecast System version 2 model.

More recently, on May 8th, the CPC/IRI ENSO team increased the chance that El Niño will develop, with a peak probability of ~80% during the late fall/early winter of this year. El Niño onset is currently favored sometime in the early summer (May-June-July). At this point, the team remains non-committal on the possible strength of El Niño, preferring to watch the system for at least another month or more before trying to infer the intensity. But could we get a super strong event? The range of possibilities implied by some models alludes to such an outcome, but at this point the uncertainty is just too high. While subsurface heat content levels are well above average (March was the highest for that month since 1979 and April was the second highest), ENSO prediction relies on many other variables and factors. We also remain within the spring prediction barrier, which is a more uncertain time to be making ENSO predictions.

Could El Niño predictions fizzle? Yes, there is roughly a 2 in 10 chance at this point that this could happen. It happened in 2012, when an El Niño Watch was issued, chances became as high as 75%, and El Niño never formed. Such is the nature of seasonal climate forecasting, where there is enough forecast uncertainty that “busts” can and do occur. In fact, more strictly, if the forecast probabilities are “reliable,” an event with an 80% chance of occurring should occur only 80% of the time over a long historical record. Therefore, 20% of the time the event must NOT occur (click here for a description of verification techniques).
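This notion of reliability can be checked directly from an archive of past probabilistic forecasts, by binning them according to the issued probability and comparing with how often the event actually occurred. A minimal sketch with made-up example numbers (not real CPC/IRI forecasts):

import numpy as np

# forecast probabilities issued in the past, and whether the event occurred (made-up data)
probs = np.array([0.8, 0.75, 0.5, 0.8, 0.3, 0.8, 0.6, 0.8, 0.75, 0.2])
occurred = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 0])

bins = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (probs >= lo) & (probs < hi)
    if sel.any():
        print(f"forecast {lo:.2f}-{hi:.2f}: event occurred "
              f"{occurred[sel].mean():.0%} of the time ({sel.sum()} forecasts)")
# For a reliable system, events forecast at ~80% should verify ~80% of the time
# over a long record; with only a handful of forecasts the sampling noise is large.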

While folks might prefer total certainty in our forecasts, we live in an uncertain world. El Niño is most likely to occur this year, so please stay attentive to the various updates linked above and please visit our brand new ENSO blog.

Author: "mike" Tags: "Climate Science"
Date: Friday, 02 May 2014 13:35

This month’s open thread. In order to give everyone a break, no discussion of mitigation options this month – that has been done to death in previous threads. Anything related to climate science is totally fine: Carbon dioxide levels maybe, or TED talks perhaps…

Author: "group" Tags: "Climate Science, Open thread"
Faking it
Date: Wednesday, 30 Apr 2014 11:36

Every so often contrarians post old newspaper quotes with the implication that nothing being talked about now is unprecedented or even unusual. And frankly, there are lots of old articles that get things wrong, are sensationalist or made predictions without a solid basis. And those are just the articles about the economy.

However, there are plenty of science articles that are just interesting, reporting events and explorations in the Arctic and elsewhere that give a fascinating view into how early scientists were coming to an understanding about climate change and processes. In particular, in the Atlantic sector of the Arctic the summer of 1922 was (for the time) quite warm, and there were a number of reports that discussed some unprecedented (again, for the time) observations of open water. The most detailed report was in the Monthly Weather Review:

The same report was picked up by the Associated Press and short summary articles appeared in the Washington Post and L.A. Times on Nov 2nd (right). As you can read, the basic story is that open water was seen up to 81º 29′N near Spitzbergen (now referred to as Svalbard), and that this was accompanied by a shift in ecosystems and some land ice melting. It seems that the writers were more concerned with fishing than climate change though.

This clip started showing up around Aug 2007 (this is the earliest mention I can find). The main point in bringing it up was (I imagine) to have a bit of fun by noting the similarity of the headline “Arctic Ocean Getting Warm” and contemporaneous headlines discussing the very low sea ice amounts in 2007. Of course, this doesn’t imply that the situation was the same back in 1922 compared to 2007 (see below).

The text of the Washington Post piece soon started popping up on blogs and forums. Sometime in late 2009, probably as part of a mass-forwarded email (remember those?), the text started appearing with the following addition (with small variations, e.g. compare this and this):

I apologize, I neglected to mention that this report was from November 2, 1922. As reported by the AP and published in The Washington Post

However, the text was still pretty much what was in the Washington Post article (some versions had typos of “Consulafft” instead of “Consul Ifft” (the actual consul’s name) and a few missing words). Snopes looked into it and they agreed that this was basically accurate – and they correctly concluded that the relevance to present-day ice conditions was limited.

But sometime in January 2010 (the earliest version I can find is from 08/Jan/2010), a version of the email started circulating with an extra line added:

“Within a few years it is predicted that due to the ice melt the sea will rise and make most coastal cities uninhabitable.”

This is odd on multiple levels. First of all, the rest of the piece is just about observations, not predictions of any sort. Nor is there any source given for these mysterious predictions (statistics? soothsaying? folk wisdom?). Indeed, since ice melt large enough to ‘make most coastal cities uninhabitable’ would be a big deal, you’d think that the Consul and AP would have been a little more concerned about the level of the sea instead of the level of the seals. In any case, the line is completely made up, a fiction, an untruth, a lie.

But now, instead of just an observation that sounds like observations being made today, the fake quote is supposed to demonstrate that people (implicitly scientists) have been making alarmist and unsupported claims for decades with obvious implications. This is pretty low by any standards.

The article with the fake quote has done the rounds of most of the major contrarian sites – including the GWPF, right-wing leaning local papers (Provo, UT), magazines (Quadrant in Australia, Canada Free Press) and blogs (eg. Small dead animals). The only pseudo-sceptic blog that doesn’t appear to have used it is WUWT! (though it has come up in comments). This is all despite some people noting that the last line was fake (at least as early as April 2011). Some of the mentions even link to the Snopes article (which doesn’t mention the fake last line) as proof that their version (with the fake quote) is authentic.

Last week it was used again by Richard Rahn in the Washington Times, and the fake quote was extracted and tweeted by CFACT, which is where I saw it.

So we have a situation where something real and actually interesting is found in the archives, it gets misrepresented as a ‘gotcha’ talking point, but someone thinks it can be made ‘better’ and so adds a fake last line to sex it up. Now, with Twitter and its short quotes, some contrarians only quote the fakery. And thus a completely false talking point is created out of whole cloth.

Unfortunately, this is not unusual.

Comparing 1922 and now

To understand why the original story is actually interesting, we need a little context. Estimates of Arctic sea ice extent go back to the 19th Century from fishing vessels and explorers, though they have obviously improved in recent decades because of satellite coverage. The IPCC AR5 report (Figure 4.3) shows a compilation of sea ice extent from HadISST1 (which is being updated as we speak), but it is clear enough for our purposes:

I have annotated the summer of 1922, which did see quite a large negative excursion Arctic-wide compared to previous years, though the excursion is perhaps not that unusual for the period. A clearer view can be seen in the Danish ice charts for August 1921 and 1922 (via the Icelandic Met Office):



The latitude of open water in the 1922 figure is around 81ºN, as reported by the Consul. Browsing other images in the series indicates that Spitzbergen almost always remained ice-bound even in August, so the novelty of the 1922 observation is clear.

But what of now? We can look at the August 2013 operational ice charts (that manually combine satellite and in situ observations) from the Norwegian Met Office, and focus on the area of Svalbard/Spitzbergen. Note that 2013 was the widely touted year that Arctic sea ice ‘recovered’:



The open water easily extends past 84ºN – many hundreds of kilometers further north than the ‘unprecedented’ situation in 1922. Data from the last 4 years show some variability of course, but by late August there is consistently open water further north than 81º 30′N. The Consul’s observation, far from being novel, is now commonplace.

This implies that this article – when seen in context – actually provides strong confirmation of a considerable decline in Arctic sea ice over the last 90 years. Not that CFACT is going to tweet that.

Author: "gavin" Tags: "Arctic and Antarctic, Climate Science, I..."
Date: Saturday, 26 Apr 2014 02:57

Somewhat randomly, my thoughts turned to the Nenana Ice Classic this evening, only to find that the ice break-up had only just occurred (3:48 pm Alaska Standard Time, April 25). This is quite early (the 7th earliest date, regardless of details associated with the vernal equinox or leap year issues), though perhaps unsurprising after the warm Alaskan winter this year (8th warmest on record). This is in strong contrast to the very late break-up last year.



Break up dates accounting for leap years and variations in the vernal equinox.

As mentioned in my recent post, the Nenana break-up date is a good indicator of Alaskan regional temperatures and, despite last year’s late anomaly, the trends are very much towards an earlier spring. This is also true for trends in temperatures and ice break-up almost everywhere else, despite individual years (like 2013/2014) being anomalously cold (for instance in the Great Lakes region). As we’ve often stressed, it is the trends that are important for judging climate change, not the individual years. Nonetheless, the odds on dates as early as this year’s have more than doubled over the last century.

Author: "gavin" Tags: "Climate impacts, Climate Science, Instru..."
Date: Thursday, 24 Apr 2014 19:47

“These results are quite strange”, my colleague told me. He had analysed some of the recent climate model results from an experiment known by the cryptic name ‘CMIP5‘. It turned out that the results were OK, but we had made an error when reading and processing the model output. The particular climate model that initially gave the strange results had used a different calendar set-up to the previous models we had examined.

In fact, the models used to compute the results in CMIP5 use several different calendars: Gregorian, an idealised 360-day calendar, or a calendar with no leap years. These differences do not really affect the model results; however, they are important to take into account in further analysis.

Just to make things more complicated, model results and data often come with different time units (such as counting hours since 0001-01-01 00:00:0.0) and physical units (precipitation in m/day or kg/m²/s; temperature in Kelvin, Fahrenheit, or Centigrade). Different countries use different decimal delimiters: point or comma. And missing values are sometimes represented as a blank space, some unrealistic number (-999), or ‘NA’ (not available) if the data are provided as ASCII files. No recorded rainfall is often represented by either 0 or the ASCII character ‘.’.
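As a small illustration of what dealing with these conventions looks like in practice, here is a sketch using the netCDF4 Python library; the file and variable names are hypothetical stand-ins for real model output.

import netCDF4 as nc

ds = nc.Dataset("model_output.nc")   # hypothetical CMIP5-style file
time = ds.variables["time"]
# num2date honours the calendar attribute ('standard', '360_day', 'noleap', ...)
dates = nc.num2date(time[:], units=time.units,
                    calendar=getattr(time, "calendar", "standard"))

pr = ds.variables["pr"][:]           # precipitation flux, typically in kg m-2 s-1
pr_mm_per_day = pr * 86400.0         # 1 kg m-2 s-1 equals 1 mm/s of water, so scale by seconds per day
# netCDF4 returns a masked array when a _FillValue/missing_value is set,
# so missing observations are excluded from subsequent statistics automatically.
print(dates[0], float(pr_mm_per_day.mean()))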

For station data, the numbers are often ordered differently, either as rows or columns, and with a few lines at the beginning (the header) containing varying amounts of description. There are almost as many ways to store data as there are groups providing data. Great!

Murphy’s law, combined with this multitude of formats, implies that reading and testing data takes time. Different scripts must be written for each data portal. The time it takes to read data could in principle be reduced to seconds, given appropriate tools (and the risk of making mistakes eliminated). Some data portals provide code such as Fortran programs, but using Fortran for data analysis is no longer very efficient.

We are not done with the formats. There are more aspects to data, analyses, and model results. A proper and unambiguous description of the data is always needed so that people know exactly what they are looking at. I think this will become more important with new efforts devoted to the World Meteorological Organisation’s (WMO) Global Framework for Climate Services (GFCS).

This data description is known as ‘meta-data‘, telling us what a variable represents, its units, the location and time, the method used to record or compute it, and its quality.

It is important to distinguish measurements from model results. The quality of data is given by error bars, whereas the reliability of model results can be described by various skill scores, depending on their nature.

There is a large range of possibilities for describing methods and skill scores, and my guess is that there is no less diversity than we see in data formats used in different portals. This diversity is also found in empirical-statistical downscaling.

A new challenge is that the volume of climate model results has grown almost explosively. How do we make sense of all these results and all the data? If the results come with proper meta-data, it may be possible to apply further statistical analysis to sort, categorise, identify links (regression), or apply geo-statistics.

Meta-data with a controlled vocabulary can help keep track of results and avoid ambiguities. It is also easier to design common analytical and visualisation methods for data which have a standard format. There are already some tools for visualisation and analysis, such as Ferret and GrADS, though mainly for gridded data.
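To make this concrete, here is a minimal sketch of attaching controlled-vocabulary meta-data to a variable in the CF netCDF style, using the xarray library. The data values, coordinates and file name are placeholders; only the attribute names follow the CF convention.

import numpy as np
import pandas as pd
import xarray as xr

tas = xr.DataArray(
    np.random.rand(12, 10, 20).astype("float32"),    # placeholder data
    dims=("time", "lat", "lon"),
    coords={
        "time": pd.date_range("2000-01-01", periods=12, freq="MS"),
        "lat": np.linspace(-85.5, 85.5, 10),
        "lon": np.linspace(0.0, 342.0, 20),
    },
    name="tas",
    attrs={
        "standard_name": "air_temperature",  # term from the CF controlled vocabulary
        "units": "K",
        "cell_methods": "time: mean",
    },
)
tas.lat.attrs.update(standard_name="latitude", units="degrees_north")
tas.lon.attrs.update(standard_name="longitude", units="degrees_east")
tas.to_netcdf("tas_example.nc")   # hypothetical output file, self-describing for other tools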

Standardised meta-data also allow easy comparisons between the same type of results from different research communities, or between different types of results, e.g. by means of the experimental design (Thorarinsdottir et al., 2014). Such statistical analysis may make it possible to say whether certain choices lead to different results, if the results are tagged with the different schemes employed in the models. This type of analysis makes use of certain key words, based on a set of commonly agreed terms.

Similar terms, however, may mean different things to different communities, such as ‘model’, ‘prediction’, ‘non-stationarity’, ‘validation’, and ‘skill’. I have seen how misinterpretation of such concepts has led to confusion, particularly among people who don’t think that climate change is a problem.

There have been recent efforts to establish controlled vocabularies, e.g. through EUPORIAS and a project called downscaling metadata, and a new breed of concepts has entered climate research, such as COG and CIM.

There are further coordinated initiatives addressing standards for meta-data, data formats, and controlled vocabularies. Perhaps most notable are the Earth System Grid Federation (ESGF), the Coupled Model Intercomparison Project (CMIP), and the Coordinated Regional Downscaling Experiment (CORDEX). The data format used by climate models, netCDF following the ‘CF‘ convention, is a good start, at least for model results on longitude-latitude grids. However, these initiatives don’t yet offer explanations of validation methods, skill scores, or modelling details.

Validation, definitions and meta-data have been discussed in a research project called ‘SPECS‘ (that explores the possibility for seasonal-to-decadal prediction) because it is important to understand the implications and limitations of its forecasts. There is also another project called VALUE that addresses the question of validation strategies and skill scores for downscaling methods.

Many climate models have undergone thorough evaluation, but this is not apparent unless one reads chapter 9 on model evaluation in the latest IPCC report (AR5). Even in this report, a systematic summary of the different evaluation schemes and skill scores is sometimes lacking, with the exception of a summary of spatial correlations between model results and analyses.

The information about model skill would be more readily accessible if the results were tagged with the type of tests used to verify them, together with the test results (skill scores). An extra bonus is that a common practice of including a quality stamp describing the validation may enhance the visibility of the evaluation aspect. To make such labelling effective, it should use well-defined terms and glossaries.

There is more to climate information than gridded results from a regional climate model. What about quantities such as return values, probabilities, storm tracks, the number of freezing events, intense rainfall events, the start of a rainy season, wet-day frequency, extremes, or droughts? The larger society needs information in a range of different formats, provided by climate services. Statistical analysis and empirical-statistical downscaling provide information in untraditional ways, as well as improved quantification of uncertainty (Katz et al., 2013).

Another important piece of information is the process history to make the results traceable and in principle replicable. The history is important for both the science community and for use in climate services.

One analogy to proper meta-data is to provide a label on climate information in a similar way to labels on medicine.

In summary, there has been much progress on climate data formats and standards, but I think we can go even further and become even more efficient by extending this work.

Update: Also see related Climate Informatics: Human Experts and the End-to-End System

References

  1. T. Thorarinsdottir, J. Sillmann, and R. Benestad, "Studying Statistical Methodology in Climate Research", Eos, Transactions American Geophysical Union, vol. 95, pp. 129-129, 2014. http://dx.doi.org/10.1002/2014EO150008
  2. R.W. Katz, P.F. Craigmile, P. Guttorp, M. Haran, B. Sansó, and M.L. Stein, "Uncertainty analysis in climate change assessments", Nature Climate Change, vol. 3, pp. 769-771, 2013. http://dx.doi.org/10.1038/nclimate1980
Author: "rasmus" Tags: "Climate modelling, Glossary, Scientific ..."
Date: Thursday, 17 Apr 2014 08:56

Guest post by Brigitte Knopf

Global emissions continue to rise, primarily due to economic growth and, to a lesser extent, to population growth. To achieve climate protection, fossil power generation without CCS has to be phased out almost entirely by the end of the century. The mitigation of climate change constitutes a major technological and institutional challenge. But: it does not cost the world to save the planet.

This is how the new report was summarized by Ottmar Edenhofer, Co-Chair of Working Group III of the IPCC, whose report was adopted on 12 April 2014 in Berlin after intense debates with governments. The report consists of 16 chapters with more than 2000 pages. It was written by 235 authors from 58 countries and reviewed externally by 900 experts. Most prominent in public is the 33-page Summary for Policymakers (SPM) that was approved by all 193 countries. At first glance, the above summary does not sound spectacular but more like a truism that we’ve often heard over the years. But this report indeed has something new to offer.

The 2-degree limit

For the first time, a detailed analysis was performed of how the 2-degree limit can be kept, based on over 1200 future projections (scenarios) by a variety of different energy-economy computer models. The analysis is not just about the 2-degree guardrail in the strict sense but evaluates the entire space between 1.5 degrees Celsius, a limit demanded by small island states, and a 4-degree world. The scenarios show a variety of pathways, characterized by different costs, risks and co-benefits. The result is a table with about 60 entries that translates the requirements for limiting global warming to below 2 degrees into concrete numbers for cumulative emissions and emission reductions required by 2050 and 2100. This is accompanied by a detailed table showing the costs for these future pathways.

The IPCC represents the costs as consumption losses compared to a hypothetical ‘business-as-usual’ case. The table does not only show the median of all scenarios, but also the spread among the models. It turns out that the costs appear to be moderate in the medium term, until 2030 and 2050, but that in the long term, towards 2100, a large spread opens up, and under specific circumstances consumption losses as high as 11% in 2100 could be faced. However, translated into a reduction of the growth rate, these numbers are actually quite low. Ambitious climate protection would cost only 0.06 percentage points of growth each year. This means that instead of a growth rate of about 2% per year, we would see a growth rate of 1.94% per year. Thus economic growth would merely continue at a slightly slower pace. However, and this is also said in the report, the distributional effects of climate policy between different countries can be very large. There will be countries that would have to bear much higher costs because they can no longer use or sell their coal and oil resources, or have only limited potential to switch to renewable energy.
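A back-of-the-envelope calculation (a sketch, not a result from the report) shows how such a small reduction in the growth rate compounds over the rest of the century:

# How a 0.06 percentage point reduction in annual growth compounds towards 2100
years = 2100 - 2014
baseline = 1.02 ** years        # ~2% growth per year without mitigation costs
mitigated = 1.0194 ** years     # ~1.94% growth per year with ambitious mitigation
loss = 1.0 - mitigated / baseline
print(f"Consumption in 2100 about {loss:.1%} below the baseline")  # roughly 5%

A loss of roughly five percent of consumption in 2100, relative to a much larger baseline, sits well within the spread of consumption losses quoted above.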

The technological challenge

Furthermore – and this is new and important compared to the last report of 2007 – the costs are not only shown for the case when all technologies are available; the report also shows how the costs increase if, for example, nuclear power were dispensed with worldwide, or if solar and wind energy remain more expensive than expected.

The results show that economically and technically it would still be possible to remain below the level of 2 degrees of temperature increase, but it will require rapid and global action, and some technologies would be key:

Many models could not achieve atmospheric concentration levels of about 450 ppm CO2eq by 2100, if additional mitigation is considerably delayed or under limited availability of key technologies, such as bioenergy, CCS, and their combination (BECCS).

Probably not everyone likes to hear that CCS is a very important technology for keeping to the 2-degree limit, and the report itself cautions that CCS and BECCS are not yet available at a large scale and also involve some risks. But it is important to emphasize that the technological challenges are similar for less ambitious temperature limits.

The institutional challenge

Of course, climate change is not just a technological issue but is described in the report as a major institutional challenge:

Substantial reductions in emissions would require large changes in investment patterns

Over the next two decades, these investment patterns would have to change towards low-carbon technologies and higher energy efficiency improvements (see Figure 1). In addition, there is a need for dedicated policies to reduce emissions, such as the establishment of emissions trading systems, as already exist in Europe and in a handful of other countries.

Since AR4, there has been an increased focus on policies designed to integrate multiple objectives, increase co‐benefits and reduce adverse side‐effects.

The growing number of national and sub-national policies, such as at the level of cities, means that in 2012, 67% of global GHG emissions were subject to national legislation or strategies, compared to only 45% in 2007. Nevertheless, and this is clearly stated in the SPM, no trend reversal of emissions is in sight – instead, a global increase of emissions is observed.


Figure 1: Change in annual investment flows from the average baseline level over the next two decades (2010 to 2029) for mitigation scenarios that stabilize concentrations within the range of approximately 430–530 ppm CO2eq by 2100. Source: SPM, Figure SPM.9

 

Trends in emissions

A particularly interesting analysis, showing from which countries these emissions originate, was removed from the SPM due to the intervention of some governments, as it shows a regional breakdown of emissions that was not in the interest of every country (see media coverage here or here). These figures are still available in the underlying chapters and the Technical Summary (TS), as the government representatives may not intervene there and science can speak freely and unvarnished. One of these figures shows very clearly that in the last 10 years emissions in countries of upper-middle income – including, for example, China and Brazil – have increased, while emissions in high-income countries – including Germany – stagnate (see Figure 2). As income is the main driver of emissions, in addition to population growth, the regional emissions growth can only be understood by taking into account the development of the income of countries.

Historically, before 1970, emissions came mainly from the industrialized countries. But with the regional shift of economic growth, emissions have now shifted to countries with upper-middle income (see Figure 2), while the industrialized countries have stabilized at a high level. The condensed message of Figure 2 does not look promising: all countries seem to follow the path of the industrialized countries, with no “leap-frogging” of fossil-based development directly to a world of renewables and energy efficiency being observed so far.


Figure 2: Trends in GHG emissions by country income groups. Left panel: Total annual anthropogenic GHG emissions from 1970 to 2010 (GtCO2eq/yr). Middle panel: Trends in annual per capita mean and median GHG emissions from 1970 to 2010 (tCO2eq/cap/yr). Right panel: Distribution of annual per capita GHG emissions in 2010 of countries within each income group (tCO2/cap/yr). Source: TS, Figure TS.4

 

But the fact that today’s emissions rise especially in countries like China is only one side of the coin. Part of the growth in CO2 emissions in the low- and middle-income countries is due to the production of consumption goods that are intended for export to the high-income countries (see Figure 3). Put in plain language: part of the growth of Chinese emissions is due to the fact that the smartphones used in Europe or the US are produced in China.


Figure 3: Total annual CO2 emissions (GtCO2/yr) from fossil fuel combustion for country income groups attributed on the basis of territory (solid line) and final consumption (dotted line). The shaded areas are the net CO2 trade balance (difference) between each of the four country income groups and the rest of the world. Source: TS, Figure TS.5

 

The philosophy of climate change

Besides all the technological details, there has been a further innovation in this report, namely the chapter on “Social, economic and ethical concepts and methods“. This chapter could be called the philosophy of climate change. It emphasizes that

Issues of equity, justice, and fairness arise with respect to mitigation and adaptation. […] Many areas of climate policy‐making involve value judgements and ethical considerations.

This implies that many of these issues cannot be answered solely by science, such as the question of what temperature level avoids dangerous anthropogenic interference with the climate system, or which technologies are perceived as risky. It means that science can provide information about the costs, risks and co-benefits of climate change, but in the end it remains a social learning process and debate to find the pathway society wants to take.

Conclusion

The report contains many more details about renewable energies, sectoral strategies such as in the electricity and transport sectors, and co-benefits of avoided climate change, such as improvements in air quality. The aim of Working Group III of the IPCC was – and the Co-Chair emphasized this several times – that scientists are mapmakers who will help policymakers to navigate through the difficult terrain of the highly political issue of climate change, and this without being policy-prescriptive about which pathway should be taken or which is the “correct” one. This requirement has been fulfilled and the map is now available. It remains to be seen where the policymakers are heading in the future.

 

The report :

Climate Change 2014: Mitigation of Climate Change – IPCC Working Group III Contribution to AR5

 

Brigitte Knopf is head of the research group Energy Strategies Europe and Germany at the Potsdam Institute for Climate Impact Research (PIK). She is one of the authors of the report of IPCC Working Group III and is on Twitter as @BrigitteKnopf.

This article was translated from the German original at RC’s sister blog KlimaLounge.

 

RealClimate coverage of the IPCC 5th Assessment Report:

Summary of Part 1, Physical Science Basis

Summary of Part 2, Impacts, Adaptation, Vulnerability

Summary of Part 3, Mitigation

Sea-level rise in the AR5

Attribution of climate change to human causes

Radiative forcing of climate change

Author: "stefan" Tags: "IPCC"
Date: Tuesday, 08 Apr 2014 12:25

Guest commentary from Drew Shindell

There has been a lot of discussion of my recent paper in Nature Climate Change (Shindell, 2014). That study addressed a puzzle, namely that recent studies using the observed changes in Earth’s surface temperature suggested climate sensitivity is likely towards the lower end of the estimated range. However, studies evaluating model performance on key observed processes and paleoclimate evidence suggest that the higher end of sensitivity is more likely, partially conflicting with the studies based on the recent transient observed warming. The new study shows that climate sensitivity to historical changes in the abundance of aerosol particles in the atmosphere is larger than the sensitivity to CO2, primarily because the aerosols are largely located near industrialized areas in the Northern Hemisphere middle and high latitudes where they trigger more rapid land responses and strong snow & ice feedbacks. Therefore studies based on observed warming have underestimated climate sensitivity as they did not account for the greater response to aerosol forcing, and multiple lines of evidence are now consistent in showing that climate sensitivity is in fact very unlikely to be at the low end of the range in recent estimates.

In particular, a criticism of the paper written by Nic Lewis has gotten some attention. Lewis makes a couple of potentially interesting points, chief of which concern the magnitude and uncertainty in the aerosol forcing I used and the time period over which the calculation is done, and I address these issues here. There are also a number of less substantive points in his piece that I will not bother with.

Lewis states that “The extensive adjustments made by Shindell to the data he uses are a source of concern. One of those adjustments is to add +0.3 W/m² to the figures used for model aerosol forcing to bring the estimated model aerosol forcing into line with the AR5 best estimate of -0.9 W/m².” Indeed the estimate of aerosol forcing used in the calculation of transient climate response (TCR) in the paper does not come directly from climate models, but instead incorporates an adjustment to those models so that the forcing better matches the assessed estimates from the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC). An adjustment is necessary because, as climate models are continually evaluated against observations, evidence has emerged that their aerosol-cloud interactions are too strong (i.e. the models’ ‘aerosol indirect effect’ is larger than inferred from observations). There have been numerous papers on this topic and the issue was thoroughly assessed in IPCC AR5 chapter 7. The assessed best estimate was that the historical negative aerosol forcing (radiation and cloud effects, but not black carbon on snow/ice) was too strong by about 0.3 W/m² in the models that included that effect, a conclusion very much in line with a prior publication on climate sensitivity by Otto et al. (2013). Given the numerous scientific studies on this topic, there is ample support for the conclusion that models overestimate the magnitude of aerosol forcing, though the uncertainty in aerosol forcing (which is incorporated into the analysis in the paper) is large, especially in comparison with CO2 forcing, which can be better constrained by observations.

The second substantive point Lewis raised relates to the time period over which the TCR is evaluated. The IPCC emphasizes forcing estimates relative to 1750 since most of the important anthropogenic impacts are thought to have been small at that time (biomass burning may be an exception, but appears to have a relatively small net forcing). Surface temperature observations become sparser going back further in time, however, and the most widely used datasets only go back to 1880 or 1850. Radiative forcing, especially that due to aerosols, is highly uncertain for the period 1750-1850 as there is little modeling and even less data to constrain those models. The AR5 gives a value for 1850 aerosol forcing (relative to 1750) (Annex II, Table AII.1.2) of -0.178 W/m² for direct+indirect (radiation+clouds). There is also a BC snow forcing of 0.014 W/m², for a total of -0.164 W/m². While these estimates are small, they are nonetheless very poorly constrained.

Hence there are two logical choices for an analysis of TCR. One could assume that there was minimal global mean surface temperature change between 1750 and 1850, as some datasets suggest, and compare the 1850-2000 temperature change with the full 1750-2000 forcing estimate, as in my paper and Otto et al. In this case, aerosol forcing over 1750-2000 is used.

Alternatively, one could assume we can estimate the forcing during this early period realistically enough to remove it from the longer 1750-2000 estimates, and so compare forcing and response over 1850-2000. In this case, this must be done for all forcings, not just for the aerosols. The well-mixed greenhouse gas forcing in 1850 is 0.213 W/m²; including solar and stratospheric water vapour, that becomes 0.215 W/m². LU and ozone almost exactly cancel one another. So to adjust from 1750-2000 to 1850-2000 forcings, one must remove 0.215 W/m² and also remove the -0.164 W/m² aerosol forcing, multiplying the latter by its impact relative to that of well-mixed greenhouse gases (~1.5), which gives about -0.25 W/m².

If this is done consistently, the denominator of the climate sensitivity calculation containing the total forcing barely changes, and hence the TCR results are essentially the same (a change of only 0.03°C). Lewis’ claim that my TCR results are mistaken because they did not account for 1750-1850 aerosol forcing is incorrect because he fails to use consistent time periods for all forcing agents. The results are in fact quite robust to either analysis option, provided they are done consistently.
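Using only the numbers quoted above, this bookkeeping can be checked in a couple of lines (a sketch; the 1.5 enhancement factor is the approximate value mentioned above):

# Check that moving the baseline from 1750 to 1850 barely changes the forcing denominator
ghg_1850 = 0.215     # W/m2: well-mixed GHGs plus solar and stratospheric water in 1850
aer_1850 = -0.164    # W/m2: aerosol forcing (direct + indirect + BC on snow) in 1850
E = 1.5              # approximate enhancement of the response to aerosol forcing
change_in_denominator = -(ghg_1850 + E * aer_1850)
print(change_in_denominator)   # about +0.03 W/m2, i.e. a very small change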

Lewis also discusses the uncertainty in aerosol forcing and in the degree to which the response to aerosols are enhanced relative to the response to CO2. Much of this discussion follows a common pattern of looking through the peer-reviewed paper to find all the caveats and discussion points, and then repeating them back as if they undermine the paper’s conclusions rather than reflecting that they are uncertainties that were already taken into account. It is important to realize that the results presented in the paper include both the uncertainty in the aerosol forcing and the uncertainty in the enhancement of the response to aerosol forcing, as explicitly stated. Hence any statement that the uncertainty is underestimated in the results presented in the paper, due to the fact that (included) uncertainty in these two components is large, is groundless.

In fact, this is an important issue to keep in mind as Lewis also argues that the climate models do not provide good enough information to determine the value of the enhanced aerosol response (the parameter I call E in the paper, where E is the ratio of the global mean temperature response to aerosol forcing versus the response to the same global mean magnitude of CO2 forcing, so that E=1.5 would be a 50% stronger response to aerosols). While the models indeed are imperfect and have uncertainties, they provide the best available method we have to determine the value of E as this cannot be isolated from observations directly. Furthermore, basic physical understanding supports the modeled value of E being substantially greater than 1, as deep oceans clearly take longer to respond than the land surface, so the Northern Hemisphere, with most of the world’s land, will respond more rapidly than the Southern Hemisphere with more ocean. Quantifying the value of E accurately is difficult, and the variation across the models is substantial, primarily reflecting our incomplete knowledge of aerosol forcing. This leads to a range of E quoted in the paper of 1.18 to 2.43. I used this range, assuming a lognormal distribution, along with the mean value of 1.53, in the calculation for the TCR.

Lewis then argues that the large uncertainty ranges in E and in aerosol forcing make the TCR estimates “worthless”. While “worthless” is a little strong, it is important to fully assess uncertainties when trying to constrain any property of the real world. It’s worthwhile to note that Lewis co-authored a recent report claiming that TCR could in fact be constrained to be low. That report relies on studies that include the large aerosol forcing uncertainty, so criticizing my paper for that would be inconsistent. However, Lewis’ study assumed that all forcings induce the same response in global mean temperature as CO2. This is equivalent to assuming that E is exactly 1.0 with NO uncertainty whatsoever. This is a reasonable first guess in the absence of evidence to the contrary, but as my paper recently showed, there is evidence to indicate that this assumption is biased.

But while Lewis argues that the uncertainty in E is large and climate models do not give the value as accurately as we’d like, that does not justify ignoring that uncertainty entirely. Instead, we need to characterize that uncertainty as best we can and propagate that through the calculation (as can be seen in the figure below). The real question is not whether climate models provide us perfect information (they do not), but rather whether they provide better information than some naïve prior assumption. In this case, it is clear that they do.



Figure shows representative probability distribution functions for TCR using the numbers from Shindell (2014) in a Monte Carlo calculation (Gaussian for Fghg and dTobs, lognormal fits for the skewed distributions for Faerosol+ozone+LU and E). The green line is if you assume exactly no difference between the effects of aerosols and GHGs; Red is if you estimate that difference using climate models; Dashed red is the small difference made by using a different start date (1850 instead of 1750).
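For readers who want to experiment, here is a minimal Monte Carlo sketch of the kind of calculation described in the caption. All distribution parameters below are illustrative placeholders rather than the values used in Shindell (2014); only the structure (Gaussian Fghg and dTobs, lognormal fits for the skewed aerosol-related forcing and E, and a standard energy-budget TCR expression with the inhomogeneous forcings scaled by E) follows the description above.

import numpy as np

rng = np.random.default_rng(42)
n = 200_000

F2x = 3.7                                    # W/m2 per CO2 doubling (standard value)
dT = rng.normal(0.75, 0.10, n)               # observed warming, K (illustrative)
Fghg = rng.normal(2.5, 0.25, n)              # well-mixed GHG forcing, W/m2 (illustrative)
Faer = -rng.lognormal(np.log(0.6), 0.35, n)  # aerosol+ozone+LU forcing, skewed (illustrative)
E = rng.lognormal(np.log(1.53), 0.18, n)     # enhancement factor, roughly spanning 1.18-2.43

TCR = F2x * dT / (Fghg + E * Faer)
TCR = TCR[(TCR > 0) & (TCR < 10)]            # discard unphysical samples from the extreme tails
print(np.percentile(TCR, [5, 50, 95]))

Rerunning the same sketch with E fixed at exactly 1.0 illustrates qualitatively how the distribution shifts towards lower TCR values, which is the difference between the green and red curves in the figure.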

This highlights the critical distinction in our reasoning: I fully support the basic methods used in prior work such as Otto et al. and have simply quantified an additional physical factor in the existing methodology. I am however confused that Lewis, on the one hand, appears to now object to the basic method used in prior work, in which the authors first adjusted the aerosol forcing, second included its uncertainty, and then finally quantified estimates of TCR. Yet on the other hand, he not only co-authored the Otto et al. paper but released a report praising that study just three days before the publication of my paper.

For completeness, I should acknowledge that Lewis correctly identified a typo in the last row of the first column of Table S2, which has been corrected in the version posted where there is also access to the computer codes used in the calculations. The climate model output itself is already publicly available at the CMIP5 website (also linked at that page).

Finally, I note that the conclusions of the paper send a sobering message. It would be nice if sensitivity was indeed quite low and society could get away with smaller emission cuts to stabilize climate. Unfortunately, several lines of independent evidence now agree that this is not the case.

References

  1. D.T. Shindell, "Inhomogeneous forcing and transient climate sensitivity", Nature Climate Change, vol. 4, pp. 274-277, 2014. http://dx.doi.org/10.1038/nclimate2136
  2. A. Otto, F.E.L. Otto, O. Boucher, J. Church, G. Hegerl, P.M. Forster, N.P. Gillett, J. Gregory, G.C. Johnson, R. Knutti, N. Lewis, U. Lohmann, J. Marotzke, G. Myhre, D. Shindell, B. Stevens, and M.R. Allen, "Energy budget constraints on climate response", Nature Geosci, vol. 6, pp. 415-416, 2013. http://dx.doi.org/10.1038/ngeo1836
Author: "group" Tags: "Climate modelling, Climate Science, Inst..."
Date: Sunday, 06 Apr 2014 15:02

More open thread. Unusually, we are keeping the UV Mar 2014 thread open for more Diogenetic conversation and to keep this thread open for more varied fare.

Author: "group" Tags: "Climate Science, Open thread"
Date: Friday, 04 Apr 2014 08:41

The second part of the new IPCC Report has been approved – as usual after lengthy debates – by government delegations in Yokohama (Japan) and is now public. Perhaps the biggest news is this: the situation is no less serious than it was at the time of the previous report in 2007. Nonetheless, there is progress in many areas, such as a better understanding of observed impacts worldwide and of the specific situation of many developing countries. There is also a new assessment of “smart” options for adaptation to climate change. The report clearly shows that adaptation is an option only if efforts to mitigate greenhouse gas emissions are strengthened substantially. Without mitigation, the impacts of climate change will be devastating.

Guest post by Wolfgang Cramer

On all continents and across the oceans

Impacts of anthropogenic climate change have been observed worldwide and have been linked to observed climate change using rigorous methods. Such impacts have occurred in many ecosystems on land and in the ocean, in glaciers and rivers, and they concern food production and the livelihoods of people in developing countries. Many changes occur in combination with other environmental problems (such as urbanization, air pollution, biodiversity loss), but the role of climate change in them emerges more clearly than before.


Fig. 1 Observed impacts of climate change during the period since publication of the IPCC Fourth Assessment Report 2007

 

During the presentation for approval of this map in Yokohama, many delegates asked why there were not many more impacts on it. This is because the authors only listed those cases where solid scientific analysis allowed attribution. An important implication of this is that the absence of icons from the map may well be due to lacking data (such as in parts of Africa) – and certainly does not imply an absence of impacts in reality. Compared to the earlier report in 2007, a new element of these documented findings is that impacts on crop yields are now clearly identified in many regions, including Europe. Improved irrigation and other technological advances have so far helped to avoid shrinking yields in many cases – but the increase normally expected from technological improvements is leveling off rapidly.

 

A future of increasing risks

More than previous IPCC reports, the new report deals with future risks. Among other things, it seeks to identify those situations where adaptation could become unfeasible and damages therefore become inevitable. A general finding is that “high” scenarios of climate change (those where global mean temperature reaches four degrees C or more above preindustrial conditions – a situation that is not at all unlikely according to part one of the report) will likely result in catastrophic impacts on most aspects of human life on the planet.


Fig. 2 Risks for various systems with high (blue) or low (red) efforts in climate change mitigation

 

These risks concern entire ecosystems, notably those of the Arctic and the corals of warm waters around the world (the latter being a crucial resource for fisheries in many developing countries), the global loss of biodiversity, but also the working conditions for many people in agriculture (the report offers many details from various regions). Limiting global warming to 1.5-2.0 degrees C through aggressive emission reductions would not avoid all of these damages, but the risks would be significantly lower (a similar chart has been shown in earlier reports, but the assessment of risks is now, based on the additional scientific knowledge available, more alarming than before, a point that is expressed most prominently by the deep red color in the first bar).

 

Food security increasingly at risk

In the short term, warming may improve agricultural yields in some cooler regions, but significant reductions are highly likely to dominate in later decades of the present century, particularly for wheat, rice and maize. The illustration is an example of the assessment of numerous studies in the scientific literature, showing that, from 2030 onwards, significant losses are to be expected. This should be seen in the context of already existing malnutrition in many regions, a growing problem also in the absence of climate change, due to growing populations, increasing economic disparities and the continuing shift of diet towards animal protein.


Fig. 3 Studies indicating increased crop yields (blue) or reduced crop yields (brown), accounting for various scenarios of climate change and technical adaptation

 

The situation for global fisheries is comparably bleak. While some regions, such as the North Atlantic, might allow larger catches, there is a loss of marine productivity to be expected in nearly all tropical waters, caused by warming and acidification. This affects poor countries in South-East Asia and the Pacific in particular. Many of these countries will also be affected disproportionately by the consequences of sea-level rise for coastal mega-cities.


Fig. 4 Change in maximum fish catch potential 2051-2060 compared to 2001-2010 for the climate change scenario SRES A1B

 

Urban areas in developing countries particularly affected

Nearly all developing countries experience significant growth in their mega-cities – but it is here that higher temperatures and limited potential for technical adaptation have the largest effect on people. Improved urban planning, focusing on the resilience of residential areas and transport systems of the poor, can deliver important contributions to adaptation. This would also have to include better preparation for the regionally rising risks from typhoons, heat waves and floods.

 

Conflicts in a warmer climate

It has been pointed out that no direct evidence is available to connect the occurrence of violent conflict to observed climate change. But recent research has shown that dry and hot periods may have been contributing factors. Studies also show that the use of violence increases with high temperatures in some countries. The IPCC therefore concludes that enhanced global warming may significantly increase the risk of future violent conflict.

 

Climate change and the economy

Studies estimate the impact of future climate change at around a few percent of global income, but these numbers are considered hugely uncertain. More importantly, any economic losses will be most tangible for countries, regions and social groups that are already disadvantaged compared to others. It is therefore to be expected that the economic impacts of climate change will push large additional numbers of people into poverty and the risk of malnutrition, due to various factors including increases in food prices.

 

Options for adaptation to the impacts of climate change

The report underlines that there is no globally acceptable “one-size-fits-all” concept for adaptation. Instead, one must seek context-specific solutions. Smart solutions can provide opportunities to enhance the quality of life and local economic development in many regions – this would then also reduce vulnerabilities to climate change. It is important that such measures account for cultural diversity and the interests of indigenous people. It also becomes increasingly clear that policies that reduce emissions of greenhouse gases (e.g., by the application of more sustainable agriculture techniques or the avoidance of deforestation) need not be in conflict with adaptation to climate change. Both can significantly improve the livelihoods of people in developing countries, as well as their resilience to climate change.

It is beyond doubt that unabated climate change will exhaust the potential for adaptation in many regions – particularly for the coastal regions in developing countries where sea-level rise and ocean acidification cause major risks.

The summary of the report can be found here. The entire report with all underlying chapters is also online. Further, there is a nicely crafted background video.

Wolfgang Cramer is scientific director of the Institut Méditerranéen de Biodiversité et d’Ecologie marine et continentale (IMBE) in Aix-en-Provence and one of the authors of the IPCC Working Group 2 report.

This article was translated from the German original at RC’s sister blog KlimaLounge.

 

Weblink

Here is our summary of part 1 of the IPCC report.

Author: "stefan" Tags: "Climate impacts, Climate Science, IPCC"