Implications of lower aerosol forcing for climate sensitivity

by Nic Lewis

A new paper on aerosol radiative forcing has important implications for estimates of climate sensitivity.

In a paper published last year (Lewis & Curry 2014), discussed here, Judith Curry and I derived best estimates for equilibrium/effective climate sensitivity (ECS) and transient climate response (TCR). At 1.64°C, our estimate for ECS was below those of all CMIP5 global climate models, and at 1.33°C our estimate for TCR was below those of nearly all of them. However, our upper (95%) uncertainty bounds, at 4.05°C for ECS and 2.5°C for TCR, ruled out only a few CMIP5 climate models. The main reason is that our estimates reflected the AR5 aerosol forcing best estimate of −0.9 W/m2 (for 2011 relative to 1750), with a 5–95% range of −1.9 to −0.1 W/m2. The strongly negative 5% bound of that aerosol forcing range accounts for the fairly high upper bounds on ECS and TCR estimates derived from AR5 forcing estimates.

The AR5 aerosol forcing best estimate and range reflect a compromise between satellite-instrumentation based estimates and estimates derived directly from global climate models. Indirect aerosol forcing – that arising through aerosol-cloud interactions, which is particularly uncertain – cannot practicably be estimated without some involvement of climate models, but observations can be used to constrain the model estimates; the resulting constrained estimates are generally smaller, and have narrower uncertainty ranges, than those obtained directly from global climate models. Inverse estimates of aerosol forcing (derived by diagnosing the effects of aerosols on more easily estimated quantities, such as spatiotemporal surface temperature patterns) also tended to be smaller and less uncertain than those from climate models, but were disregarded in AR5.

Since AR5, various papers concerning aerosol forcing have been published, without really narrowing down the uncertainty range. Aerosol forcing is extremely complex, and estimating it is very difficult. One major problem is that indirect aerosol forcing has, approximately, a logarithmic relationship to aerosol levels. As a result, the change in aerosol forcing over the industrial period – anthropogenic aerosol forcing – is sensitive to the exact level of preindustrial aerosols (Carslaw et al 2013), and determining natural aerosol background levels is very difficult.

In this context, what is IMO a compelling new paper by Bjorn Stevens estimating aerosol forcing using multiple physically-based, observationally-constrained approaches is a game changer. Bjorn Stevens is Director of the Department ‘Atmosphere in the Earth System’ at the MPI for Meteorology in Hamburg. Stevens is an expert on cloud processes and their role in the general circulation and climate change. Through the introduction of new constraints arising from the time history of forcing and asymmetries in Earth’s energy budget, Stevens derives a lower limit for total aerosol forcing, from 1750 to recent years, of −1.0 W/m2. Although there is no best estimate given in the published paper, it can be worked out (from the uncertainty analyses given) to be −0.5 W/m2, and a time series for it derived from an analytical fit used in the analysis. An upper bound of −0.3 W/m2 is also given, but that comes from an earlier study (Murphy et al., 2009) rather than being a new estimate.

I have re-run the Lewis & Curry 2014 calculations using aerosol forcing estimates in line with the analysis in the Stevens paper (see Appendix) substituted for the AR5 estimates. I’ve accepted the Murphy et al (2009) upper bound of −0.3 W/m2 adopted by Stevens despite, IMO, the AR5 upper bound of −0.1 W/m2 being more consistent with the error distribution assumptions in his paper.

The Lewis & Curry 2014 energy budget study involves comparing, between a base period and a final period, changes in global mean surface temperature (GMST) with changes in effective radiative forcing and – for ECS only – in the rate of climate system heat uptake (mainly by the ocean). The preferred pairing was 1859–1882 with 1995–2011, the longest early and late periods free of significant volcanic activity. These periods are well matched for influence from internal variability and provide the largest change in forcing (and hence the narrowest uncertainty ranges). Neither the original Lewis & Curry 2014 ECS and TCR estimates nor the new estimates are significantly influenced by the slowdown in surface warming this century.
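The arithmetic of these energy-budget estimates is simple enough to sketch in a few lines. The relations are the standard ones, ECS = F2xCO2 × ΔT / (ΔF − ΔQ) and TCR = F2xCO2 × ΔT / ΔF. The input values below are illustrative round numbers roughly consistent with the Lewis & Curry best estimates, not the paper's exact data:

```python
# Energy-budget estimates of ECS and TCR. Input values are illustrative
# round numbers, not the exact Lewis & Curry 2014 data.
F2X = 3.71  # forcing from a doubling of CO2, W/m2 (AR5 value)

def ecs(dT, dF, dQ, f2x=F2X):
    """Equilibrium/effective sensitivity: warming scaled by net forcing
    less the rate of climate system heat uptake."""
    return f2x * dT / (dF - dQ)

def tcr(dT, dF, f2x=F2X):
    """Transient climate response: the heat uptake term is omitted."""
    return f2x * dT / dF

# Illustrative changes from 1859-1882 to 1995-2011 (assumed round numbers):
dT = 0.71  # GMST change, K
dF = 1.98  # effective radiative forcing change, W/m2
dQ = 0.36  # change in rate of climate system heat uptake, W/m2

print(f"ECS ~ {ecs(dT, dF, dQ):.2f} K, TCR ~ {tcr(dT, dF):.2f} K")
```

With these inputs the estimates come out close to the 1.64°C and 1.33°C best estimates quoted above, which is the point of the sketch: the central estimates follow directly from the observed changes, with no model tuning involved.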

Table 1 shows ECS and TCR estimates using the Stevens 2015 based aerosol forcing estimates, together with the original AR5-based estimates for comparison. Estimates are shown for the preferred 1859–1882 base period to 1995–2011 final period combination, and also for a similarly well-matched 1930–1950 base period to 1995–2011 final period combination. The latter combination involves smaller forcing and GMST increases and more heat uptake uncertainty, but probably better forcing and temperature data. Estimates from two AR5-vintage studies that used zonal temperature data to form their own inverse estimates of aerosol forcing are also shown.

table 1

Table 1: Best estimates are medians (50% probability points). Ranges (Ring et al.: none given) are given to the nearest 0.05°C, and aerosol forcings to the nearest 0.05 W/m2. § Aldrin et al. aerosol forcing estimate is for 1750–2007 and based on replacing the AR4 aerosol forcing distribution used as the prior in the study, which significantly biased the inverse estimate, with the AR5 distribution. * With +0.1 W/m2 added to adjust for omitted black-carbon-on-snow forcing, which affects the inverse estimate of aerosol forcing due to its similar fingerprint.

Compared with using the AR5 aerosol forcing estimates, the preferred ECS best estimate using an 1859–1882 base period reduces by 0.2°C to 1.45°C, with the TCR best estimate falling by 0.1°C to 1.21°C. More importantly, the upper 83% ECS bound comes down to 1.8°C and the 95% bound reduces dramatically – from 4.05°C to 2.2°C, below the ECS of all CMIP5 climate models except GISS-E2-R and inmcm4. Similarly, the upper 83% TCR bound falls to 1.45°C and the 95% bound is cut from 2.5°C to 1.65°C. Only a handful of CMIP5 models have TCRs below 1.65°C.

CMIP5 models with high TCRs are able to match the historical instrumental GMST record, or even warm less, mainly because most of them have highly negative aerosol forcing that until recently offset the bulk of greenhouse gas and other positive forcings. The mean aerosol forcing in CMIP5 models for which it has been diagnosed is about −1.2 W/m2 over 1850–2000, two and a half times Stevens’ best estimate.

The new ECS and TCR estimates, and the uncertainty associated with them, can also be presented as probability density functions, as in Figures 1 and 2. The PDFs are skewed principally because the dominant uncertainty, that in forcing, enters the denominator of the ratios used to estimate ECS and TCR.
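That skewness is easy to reproduce: feed a symmetric (Gaussian) forcing uncertainty through the TCR ratio and the resulting distribution acquires a long upper tail. A minimal Monte Carlo sketch, using the same illustrative numbers as before (the 0.30 W/m2 forcing standard deviation is likewise assumed for illustration, not taken from the paper):

```python
import random
import statistics

# Why energy-budget TCR PDFs are right-skewed: the forcing uncertainty
# sits in the denominator of the estimator, so a symmetric spread in dF
# maps into an asymmetric spread in TCR.
random.seed(1)
F2X, dT = 3.71, 0.71  # forcing for doubled CO2 (W/m2), GMST change (K)

# Gaussian forcing change: illustrative mean 1.98 W/m2, sd 0.30 W/m2.
samples = [F2X * dT / random.gauss(1.98, 0.30) for _ in range(100_000)]

median = statistics.median(samples)
mean = statistics.fmean(samples)
# Right skew: the long upper tail pulls the mean above the median.
print(f"TCR median {median:.2f} K, mean {mean:.2f} K")
```

The median lands near the best estimate while the mean sits above it, the signature of the skewed PDFs in Figures 1 and 2.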

fig 1

Figure 1: Energy budget estimated PDFs for ECS using Stevens 2015 based aerosol forcing

fig 2

Figure 2: Energy budget estimated PDFs for TCR using Stevens 2015 based aerosol forcing

Appendix: Derivation of a best estimate time series for aerosol forcing from Stevens 2015

The best estimate for direct aerosol forcing (Fari) is taken as −0.15 W/m2 (line 494). The best estimate taken for indirect aerosol forcing (Faci) is that which, when δN/N has a bidirectional factor of two (0.5× to 2.0×) 5–95% uncertainty (line 620; taken as corresponding to a lognormal distribution) and C has a median estimate of 0.1 and a 95% bound of 0.15 (line 584; uncertainty independent and assumed Gaussian), produces a 95% bound for Faci of −0.75 W/m2. That implies a median estimate for Faci of −0.32 W/m2, which when added to the −0.15 W/m2 for Fari gives a best estimate for total aerosol forcing Faer of −0.5 W/m2.
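The −0.32 W/m2 figure can be checked with a quick Monte Carlo that mirrors the stated uncertainty assumptions: treat Faci as proportional to the product of the lognormal δN/N factor and the Gaussian C, then fix the proportionality so that the 95% bound (in magnitude) is 0.75 W/m2. This is a sketch of my reading of those assumptions, not code from the paper:

```python
import math
import random
import statistics

random.seed(42)
N = 200_000

# delta N / N: bidirectional factor-of-two (0.5x to 2.0x) 5-95% uncertainty,
# lognormal, so the median factor is 1 and the log-sd is ln(2)/1.645.
sigma_ln = math.log(2) / 1.645
dn_over_n = [random.lognormvariate(0.0, sigma_ln) for _ in range(N)]

# C: median 0.1 with a 95% bound of 0.15, independent and Gaussian.
sigma_c = (0.15 - 0.10) / 1.645
c = [random.gauss(0.10, sigma_c) for _ in range(N)]

# Take Faci proportional to the product, and fix the scale so that the
# 95% bound of the (negative) forcing is -0.75 W/m2.
prod = sorted(x * y for x, y in zip(dn_over_n, c))
p95 = prod[int(0.95 * N)]
scale = 0.75 / p95
faci_median = -scale * statistics.median(prod)
print(f"median Faci ~ {faci_median:.2f} W/m2")
```

The median comes out close to −0.32 W/m2, which with the −0.15 W/m2 Fari gives the −0.5 W/m2 total quoted above.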

The timeseries for Qa given by Eq.(A9) is used to scale, according to Eq.(1), the best estimate for Faer of −0.5 W/m2 (its 2005 value) over 1750 to 2011. Values of Qn = 76, α = 0.00167 and β = 0.317 are used. Qn is taken from the caption to Fig.2 and α from the last line of Appendix A. The value of β is set to produce a total aerosol forcing of −0.5 W/m2 in 2005. Although giving a slightly different breakdown between Faci and Fari than that just derived, these parameter values result in an almost identical evolution of aerosol forcing.
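For concreteness, Eq.(1) combines (as I read it) a linear direct term −αQa and a logarithmic indirect term −β ln(1 + Qa/Qn) in anthropogenic emissions Qa. The sketch below evaluates it with the parameter values just quoted; the 2005 emissions figure is a hypothetical round value chosen to hit the −0.5 W/m2 total, since the actual Qa series comes from Eq.(A9):

```python
import math

# Functional form of Stevens 2015 Eq.(1): a linear direct (aerosol-radiation)
# term plus a logarithmic indirect (aerosol-cloud) term in anthropogenic
# emissions Qa, with natural background Qn.
def f_aer(qa, alpha=0.00167, beta=0.317, qn=76.0):
    f_ari = -alpha * qa                      # direct forcing, linear in Qa
    f_aci = -beta * math.log(1.0 + qa / qn)  # indirect forcing, logarithmic
    return f_ari + f_aci

# Hypothetical 2005 anthropogenic emissions level, chosen so the total
# matches the -0.5 W/m2 best estimate (not a value from the paper).
qa_2005 = 120.0
print(f"F_aer(2005) ~ {f_aer(qa_2005):.2f} W/m2")
```

The logarithmic indirect term is why, as noted earlier, anthropogenic aerosol forcing is so sensitive to the assumed preindustrial (natural) background level Qn.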

References

Aldrin M, Holden M, Guttorp P, Skeie RB, Myhre G, Berntsen TK. Bayesian estimation of climate sensitivity based on a simple climate model fitted to observations of hemispheric temperatures and global ocean heat content. Environmetrics 23:253–271 (2012)

Carslaw, K. S., and coauthors. Large contribution of natural aerosols to uncertainty in indirect forcing. Nature, 503 (7474), 67–71 (2013)

Lewis N: An objective Bayesian, improved approach for applying optimal fingerprint techniques to estimate climate sensitivity. J Clim 26:7414–7429 (2013)

Lewis N and Curry J A: The implications for climate sensitivity of AR5 forcing and heat uptake estimates, Climate Dynamics doi: 10.1007/s00382-014-2342-y (2014)

Stevens, B. Rethinking the lower bound on aerosol radiative forcing. J. Clim., in press (2015). doi: http://dx.doi.org/10.1175/JCLI-D-14-00656.1

JC note:  As with all guest posts, keep your comments relevant and civil.  Please treat this as a technical thread, comments will be moderated for relevance.

191 responses to “Implications of lower aerosol forcing for climate sensitivity”

  1. As far as I can see, the 1970’s cooling was largely AMO driven, which is why it shows more in the northern hemisphere, and also by increased La Nina episodes. I doubt that aerosols played much part in it.
    And what drove the cold AMO and La Nina was the fast/hot solar plasma in the 1970’s, by increasing positive AO/NAO which speeds up the AMOC, and by speeding up the trade winds.

    • Data source:
      http://omniweb.gsfc.nasa.gov/form/dx1.html
      Low plasma velocity* displays a tight fit with major El Nino 1997/98 and 2009/10.

      • Ulrich, you may need to demonstrate this “tight fit “, it is not immediately obvious ;)

        I fully agree that there are probably significant undiagnosed or misdiagnosed solar effects on climate. This was one thing the recent Marotzke paper showed. The biggest deviation of the models is not the current “plateau” but the failure to match the 1960s and 70s:

        Using a better filter than their “sliding trend” I showed that the divergence between models and HadCRUT4 ( their choice, not mine ) bore a striking resemblance to SSN:

      • David Springer

        What’s the mechanism by which plasma velocity alters North Atlantic conveyor belt speed?

      • Trying to explain the 1970’s cooling by means of aerosols just shows how little they understand climate. The cardinal errors being made are the assumption that natural variability is internal, and assuming that global mean surface temperatures follow forcings at decadal to inter-decadal scales. If it were not for oceanic negative feedbacks to the solar signal, the 1970’s would have warmed, and it would have cooled since the mid 1990’s. As the Antarctic has.

      • David Springer,
        Forcing mechanisms of the North Atlantic Oscillation are likely to be at least the plasma coupling in the polar regions causing direct Joule heating of the upper polar atmosphere causing very strong circulation changes, and by nitric oxide production destroying polar ozone.
        Negative NAO slows the AMOC, resulting in warmer water accumulation in the north Atlantic and Arctic instead of overturning. E.g. early summer 2007 and mid summer 2012:
        http://www.rapid.ac.uk/

      • David Springer

        I’m not buying it. You need to come up with a detailed physical mechanism that makes sense.

      • I doubt that adequate observations have yet been made to effectively describe the linkage between the NAO/AO and daily solar wind variations. Though papers have been written on the correlations.
        I base my analysis on what the data suggests, notably the fine correlation between negative NAO/AO episodes and low AMOC events, and their relation to warming of the AMO and Arctic Ocean.

  2. It sounds like it’s going to become more difficult to use aerosols as a general fudge factor.

  3. Danny Thomas

    If I understand this correctly then someone’s “medium” confidence level may need to be modified: “As estimated by the IPCC Fifth Assessment Report (AR5) ‘there is high confidence that ECS is extremely unlikely less than 1°C and medium confidence that the ECS is likely between 1.5°C and 4.5°C and very unlikely greater than 6°C.’”

    Dr. Curry, I’d asked independently if in the APS presentation that aerosols were used in backcasting in the model ensembles but they were then removed in the forecasting (projections) which might account in part for the “models running warm”. Just confirming if that portion of the APS was indeed understood correctly? If so, since you’ve selected out the “most likely” natural variability scenarios it seems the removal was indeed in error.

    • Danny: I believe the RCPs prepared for the model project (CMIP5) include the aerosol concentrations. I don’t know if these concentrations are issued using a prescribed grid or if it’s averaged.

      As far as I can tell the models are tweaked to run warm, but the tweaks aren’t intentional. And I suspect their performance can be improved with a bit of blood, sweat and tears.

      • Danny Thomas

        Fernando,
        The comment was based on the interaction in the APS presentation (page 259 on from here: http://www.aps.org/policy/statements/upload/climate-seminar-transcript.pdf); pg 264 specifies aerosols, and Dr. Collins, mid pg 267, states “they did not include aerosols”. (Discussion of this portion ends at about pg 274.)
        I was left with the impression that aerosols were used in hindcasting to “tune” the models but removed for forecasting (projections) due to uncertainty.

    • None of this is something that 30-50 more years of dedicated, public-funded research cannot get to the bottom of…

  4. stevefitzpatrick

    Bjorn Stevens must feel very secure in his job.
    Any significant reduction in the likely range of aerosol influence will effectively eliminate the ‘long tail’ (or is that the ‘long tale’?) of very high climate sensitivity in the probability distribution. The paper is effectively declaring that aerosol forcing levels in GCM’s are much too high, and by implication, that GCM’s are diagnosing far too high climate sensitivity. I will be very surprised if the paper is not broadly attacked in blogs, ‘sciency’ popular publications, and in journals; there is far too much at stake for the paper to go unchallenged.

    • Actually, after re-thinking it, I’m sure that his paper won’t cause any problems anyway. It’s based on modeling, and we all know that “skeptics” reject papers based on climate modeling, so this paper will obviously not be used as evidence to challenge the “consensus.”

      • Danny Thomas

        Joshua,

        Maybe it’s a “my model can beat up your model” scenario.

      • Maybe averaging bad models will capture reality perfectly. In that way, we can substantially reduce systemic errors in model estimates by averaging in the results of models that show global warming is negatively correlated with increases in CO2, which shouldn’t be hard because historically that has been the case i.e., global warming followed increases in atmospheric CO2 by hundreds of years.

      • There’s potentially an interesting irony here. One of the arguments being made in Stevens’ paper is that the CMIP5 models underestimate the warming over the period 1920-1950 and that this is consistent with them having too large a negative aerosol forcing. As I understand it, these models have ECS values higher than those suggested by Nic Lewis’s updated analysis. So, how consistent is it to use models with high ECS values to argue for a low ECS value? Additionally, if too negative an aerosol forcing during 1920-1950 explains the models over-estimating the warming then, what does it imply about the period from 1998-now?

      • I wasn’t aware that anyone had argued for strong pollution aerosol forcing during 1920-1950. The explanations for the warming from 1910-1940 that I’ve seen are solar, volcanoes, and natural internal variability.

      • One of the arguments being made in Stevens’ paper is that the CMIP5 models underestimate the warming over the period 1920-1950 and that this is consistent with them having too large a negative aerosol forcing. […] Additionally, if too negative an aerosol forcing during 1920-1950 explains the models over-estimating the warming then, what does it imply about the period from 1998-now?

        Does your comment even have a point when you correct your internal contradiction? Or is it just arm-waving intended to confuse people who aren’t prepared to parse your “logic”?

      • AK,

        Does your comment even have a point when you correct your internal contradiction?

        If you think it’s my internal contradiction, you weren’t concentrating.

      • AK,
        Okay, it was a typo on my part. Of course, that should have been obvious if you’d read it with a remotely open mind. It should have been

        Additionally, if too negative an aerosol forcing during 1920-1950 explains the models under-estimating the warming then, what does it imply about the period from 1998-now?

      • @…and Then There’s Physics…

        Of course I was pretty sure that’s what you meant. Didn’t hurt to ask, tho, before addressing your argument

        […] if too negative an aerosol forcing during 1920-1950 explains the models under-estimating the warming then, what does it imply about the period from 1998-now?

        Nothing. They’re going to get that right because that’s what they were tuned for. It implies that natural variation (or maybe some other “forcing”) produced more warming than they hindcast for 1920-1950, meaning it makes sense for natural variation to have produced a greater “fraction” of the warming from 1970-1998. (I put “fraction” in “scare quotes” because, of course, these effects don’t add in a linear fashion.)

        As I understand it, these models have ECS values higher than those suggested by Nic Lewis’s updated analysis. So, how consistent is it to use models with high ECS values to argue for a low ECS value?

        If the models make predictions, or hindcasts that don’t match reality, they make a good demonstration that their ECS is too high.

      • AK,

        Of course I was pretty sure that’s what you meant.

        Didn’t consider framing your point in a different way then?

        The rest of your point is, however, broadly consistent with mine. If you’re using models with high CS to argue for reduced aerosol forcing, how can you then use that to argue for a low CS? Either the models are wrong and shouldn’t be part of the argument, or they’re not as wrong as you think.

      • Either the models are wrong and shouldn’t be part of the argument, or they’re not as wrong as you think.

        From the lead post (above):

        The AR5 aerosol forcing best estimate and range reflect a compromise between satellite-instrumentation based estimates and estimates derived directly from global climate models. Although it is impracticable to estimate indirect aerosol forcing – that arising through aerosol-cloud interactions, which is particularly uncertain – without some involvement of climate models, observations can be used to constrain model estimates, with the resulting estimates generally being smaller and having narrower uncertainty ranges than those obtained directly from global climate models.

        How does this not answer your question?

      • AK,
        How is that a response to my point? Part of the argument in the paper uses model results to argue that the models have too negative an aerosol forcing (because they under-estimate the 1920-1950 warming). If these models have ECS values in the range above 1.5 – 4.5K, it doesn’t seem quite consistent to then argue for an ECS below 1.5K (or mostly below).

      • Part of the argument in the paper uses model results to argue that the models have too negative an aerosol forcing (because they under-estimate the 1920-1950 warming).

        As I understand it, the paper uses constrained model results to estimate the aerosol “forcing”. Constrained by observations. This isn’t using “model results to argue”, it’s using model results under (observational) constraints.

        If these models have ECS values in the range above 1.5 – 4.5K, it doesn’t seem quite consistent to then argue for an ECS below 1.5K (or mostly below).

        AFAIK the models with “ECS values in the range above 1.5 – 4.5K” produce aerosol estimates higher, and with a broader uncertainty range. IOW, it’s different sets of “models”.

        Hopefully, Nic will jump in to clarify, since I can only argue my (limited) understanding of his points.

      • I think the question concerning model ECS is does it mean anything at all given all the approximations necessary to run the model in less than a century.

      • AK,
        I think it does two things. It does indeed, as you say, use various modelling-type calculations to estimate the aerosol forcing. I’m not disputing those calculations at all. I was simply pointing out that in Section 5 it also argues that these lower (less negative) aerosol forcing estimates may explain why models have tended to underestimate the warming between 1920 and 1950. Therefore, it is suggesting that if you were to use the updated aerosol forcings in these models, they would better reproduce the warming over that period. However, these models typically have ECS values above 2K. Therefore, all I was suggesting is that if part of the argument for a less negative aerosol forcing is because this would produce a better model-observation fit (with models that have ECS vaues above 2K), then it’s quite hard to see how this is also consistent with an argument that the ECS being probably less than 1.5K.

        I’m not making some kind of strong statement here, simply making an observation.

      • I think the question concerning model ECS is does it mean anything at all given all the approximations necessary to run the model in less than a century.

        I think the question concerning model ECS is does it mean anything at all given all the approximations necessary to run the model in less than a century.?

        There. Fixed it for you.

      • Therefore, it is suggesting that if you were to use the updated aerosol forcings in these models, they would better reproduce the warming over that period.

        I thought the “forcings” for aerosols via clouds were entirely emergent properties of the models. Which means you couldn’t “use the updated aerosol forcings in these models,” you’d have to tweak whatever parameters generate those results. And then, what would the new, emergent ECS’s be?

      • AK,

        I thought the “forcings” for aerosols via clouds were entirely emergent properties of the models. Which means you couldn’t “use the updated aerosol forcings in these models,” you’d have to tweak whatever parameters generate those results. And then, what would the new, emergent ECS’s be?

        Not quite right. The cloud feedback is a fully emergent property. Anthropogenic aerosols are, however, an external factor. That said, I think you’re right that the models do calculate the radiative influence (through – for example – the aerosol cloud effect) explicitly, so in that sense that radiative effect is calculated in the models, and is not explicitly imposed on the models.

        However, the ECS is determined by running a model in which CO2 only is doubled at 1% per year over a period of 70 years, and then the model is run to equilibrium. In such a calculation, therefore, there are no aerosols, as the only external change is the increase in atmospheric CO2. Therefore, a change in the aerosol forcing should not influence a model’s ECS.

      • Judith,

        I wasn’t aware that anyone had argued for strong pollution aerosol forcing during 1920-1950. The explanations for the warming from 1910-1940 that I’ve seen are solar, volcanoes, and natural internal variability.

        I don’t know if they’re arguing for a strong effect. I think the argument is simply that the models may be over-estimating the aerosol influence. However, as with you, I’ve mostly seen arguments along the lines of a combination of anthro, solar, volcanoes, and internal variability. I’m not hugely convinced by the argument in Stevens’ paper. I was simply pointing out that one argument being made is that over-estimating the aerosol influence buring that period may explain the discrepancy.

      • The greater significance of this is for the cooling of the 1940-1975 period, sort of destroys the argument that this was caused by aerosols.

        The greater significance of this is for the cooling of the 1940-1975 period, sort of destroys the argument that this was caused by aerosols.

        Or, it suggests that the aerosol forcing can’t be as low as Stevens suggests :-)

      • However, the ECS is determined by running a model in which CO2 only is doubled at 1% per year over a period of 70 years, and then the model is run to equilibrium. In such a calculation, therefore, there are no aerosols, as the only external change is the increase in atmospheric CO2. Therefore, a change in the aerosol forcing should not influence a model’s ECS.

        My point isn’t the aerosols, it’s that once you’ve tweaked the parameters to make the aerosols work, what’s it done to the emergent ECS? After all, isn’t it (some of) the same parameters?

        My point isn’t the aerosols, it’s that once you’ve tweaked the parameters to make the aerosols work, what’s it done to the emergent ECS? After all, isn’t it (some of) the same parameters?

        Okay, it shouldn’t affect the radiative influence of CO2, the Clausius-Clapeyron relation (which determines water vapour feedback), or the lapse rate. However, it could – I guess – influence cloud feedbacks, but I’m not convinced that it would. My feeling would be that the aerosol effect (which is external) should be independent of all these other effects. Not a very good answer, but the best I can do.

        Okay, it shouldn’t affect the radiative influence of CO2, the Clausius-Clapeyron relation (which determines water vapour feedback), or the lapse rate.

        ATTP,
        Clausius-Clapeyron does not determine what the water vapour feedback will be. H2O has a source ( the oceans ) and a sink ( precipitation ). Between the two, atmospheric dynamics ( the unpredictable part of the problem ) determine where H2O goes and in what amount. If you take a given atmosphere ( pick a day ) and uniformly increase water vapour, a strange thing happens. The lower half of the troposphere does indeed experience radiative forcing ( net increase in radiance ) from the increased opacity. But the upper half of the troposphere actually experiences a net loss of radiance. Enhanced radiative warming of the lower troposphere and enhanced radiative cooling of the lower troposphere implies a greater transfer of energy through even a constant amount of convective exchange. To some extent, this makes the earth more effective at radiating to space. Convection and radiance are locked in a deadly embrace, the victor is not clear.

      • Sorry. Sentence should read:

        Enhanced radiative warming of the lower troposphere and enhanced radiative cooling of the upper troposphere implies a greater transfer of energy through even a constant amount of convective exchange.

        This is understandable by the upper troposphere becoming more humid and thus more emissive while the atmosphere above hasn’t enough water vapour to intercept the upper troposphere emissions in the appropriate bands.

      • Lucifer,

        Clausius-Clapeyron does not determine what the water vapour feedback will be.

        Yes, I know, I was just writing quickly. I was simply pointing out that the physical processes that lead to the water vapour feedback should be independent of whatever effect anthropogenic aerosols have. It’s not clear that any adjustment in the models to account for a change in the aerosol forcing would necessarily have any effect on the other factors influencing the ECS.

      • Slight changes possible to the water vapor, lapse rate and cloud feedbacks; these are all coupled.

        It’s not clear that any adjustment in the models to account for a change in the aerosol forcing would necessarily have any effect on the other factors influencing the ECS.

        But they should.

        Without any convection, models indicate a much greater surface temperature ( the radiative only model ):

        Changes in energy transfer within the troposphere have a profound influence on the surface temperature.

        If models have these internals wrong ( which the unpredictability of fluid flow all but guarantees ), and the models now exhibit serious errors for:
        1.) aerosol impact
        2.) the ‘Hot Spot’ and
        3.) albedo

        They are guaranteed to be erroneous for surface temperature and emission to space.

      • Slight changes possible to the water vapor, lapse rate and cloud feedbacks; these are all coupled.

        Yes, I can see that they’re all coupled. I don’t see how this means that if there is some kind of error in the anthropogenic aerosol effect, that it implies that there is some kind of error in these feedbacks (other than Lucifer’s “it’s all wrong!!!!!” type of argument.) They’re not necessarily related.

      • The integral of all the feedbacks, after fixing incorrect aerosol forcing, may be surprising (all this is not linear), and it may be pretty much zero.

      • Lucifer: Changes in energy transfer within the troposphere have a profound influence on the surface temperature.

        What is the source for the diagram in that post?

      • “One of the arguments being made in Stevens’ paper is that the CMIP5 models underestimate the warming over the period 1920-1950 and that this is consistent with them having too large a negative aerosol forcing.”

        1920-1950 is a period that is particularly uncertain in the observational record, straddling, as it does, a World War and wholesale change in measurement practice at sea.

      • ==> “1920-1950 is a period that is particularly uncertain in the observational record, ”

        Yet another factor that leads to a bag o’ unintentional irony in the “skept-o-sphere.”

      • What is the source for the diagram in that post?

        Manabe & Strickler 1964

        Early work using the 1D models.

        The convection ( and lapse rate assumptions ) are of low fidelity,
        but the principles remain.

      • Joshua, “==> “1920-1950 is a period that is particularly uncertain in the observational record, ”

        Yet another factor that leads to a bag o’ unintentional irony in the “skept-o-sphere.””

        The shift from buckets to intake likely didn’t have that much impact on surface station measurement. BEST, GISS and Hadley seem to have a good deal of confidence in their land based records, which show about the same shift as SST. Those land based measurements, with estimated uncertainty, limit how much SST can be adjusted, so without quantifying “particularly uncertain”, Dr. Kennedy’s comment isn’t very constructive, is it?

      • Lucifer: Manabe & Strickler 1964

        Thank you. Do you have anything recent on the same topic?

      • Fair point, Joshua, about the models, but what he’s doing is comparing the models with measurements and finding discrepancy. He’s then providing one way of resolving the discrepancy, which leads to certain conclusions. Presumably a way can be found of testing those conclusions, until which time they are of course unreliable, as you point out. In the meantime the models as previously parameterised were clearly wrong and should therefore be discarded.

      • David Springer

        http://www.breitbart.com/london/2015/03/20/new-climate-paper-gives-global-warming-alarmists-one-helluva-beating/

        A new scientific paper has driven yet another nail into the coffin of Catastrophic Anthropogenic Global Warming theory. (H/T Bishop Hill)

        The paper – Rethinking the lower bound on aerosol radiative forcing by Bjorn Stevens of the Max Planck Institute for Meteorology in Hamburg, Germany, published in the American Meteorological Society journal – finds that the effects of aerosols on climate are much smaller than those in almost all the computer models used by the Intergovernmental Panel on Climate Change.

        Aerosols are the minute particles added to the atmosphere by burning fossil fuels (as well as by non-anthropogenic sources, like volcanoes). The reason they are important is that they are so often cited by alarmists to excuse the awkward fact that the world has stubbornly failed to warm at the disastrous rate they predicted it would.

        Apparently – or so the excuse goes – these aerosols are masking the true extent of runaway climate change by cancelling out the effects of man-made CO2.

        Here, for example, is a NASA expert in 2009:

        http://climate.nasa.gov/news/215/

        “Using climate models, we estimate that aerosols have masked about 50 percent of the warming that would otherwise have been caused by greenhouse gases trapping heat near the surface of the Earth”

        Here is a report on a study from another institution – NOAA – with a long track record of ramping up the alarmist cause.

        http://phys.org/news/2011-07-noaa-aerosols-inhibiting-global.html

        “A new study led by the U.S, National Oceanic and Atmospheric Administration (NOAA) shows that tiny particles that make their way all the way up into the stratosphere may be offsetting a global rise in temperatures due to carbon emissions.”

        Aerosols are often used to explain the lack of “global warming” in the cooling period between 1940 and 1970 (when the growth in industrialisation and all that extra man-made CO2 ought to have begun taking effect).

They have also been used in this 2011 paper – http://www.pnas.org/content/108/29/11790.abstract – whose co-authors include one Michael Mann, which gives you an idea of its quality and reliability – for the Proceedings of the National Academy of Sciences of the USA (PNAS). It claims that the reason there has been a “hiatus” in global warming since 1998 is because of the effect of aerosol emissions. This got one of the BBC’s resident alarmists, Richard Black, very excited. He wrote it up in an article entitled Global warming lull down to China’s coal growth. http://www.bbc.co.uk/news/science-environment-14002264 (Oddly he forgot to surround it with scare quotes, or finish it with a question mark.)

        The new Stevens paper has been described as a “game-changer” by one expert in the field, Nic Lewis.

        According to the IPCC’s models, the effect of aerosols on climate could be as much as 4.5 degrees C. But Stevens paper suggests that this is a considerable overestimate and that the reduction they effect on temperature cannot be more than 1.8 degrees C.

        This pretty much kills the alarmists’ “the aerosols ate my homework” excuse stone dead. If the cooling effects of aerosols turn out to be much smaller than the IPCC thinks, then what this means is that the rise in global temperatures attributable to man-made CO2 is also much smaller than the alarmists’ computer models acknowledge.

        As Andrew Montford comments here:

        http://bishophill.squarespace.com/blog/2015/3/19/climate-sensitivity-takes-another-tumble.html

        “Jim Hansen, Bob Ward, Kevin Trenberth, Michael Mann and Gavin Schmidt, your climate alarmism just took one helluva beating.”

    • stevefitzpatrick

      Stevens apparently submitted first to Nature in 2014…. and was rejected after review. See the references listed here: http://www.euclipse.eu/downloads/D4.5_Evaluation%20to%20what%20extend%20aerosol-cloud-climate%20effects%20depend%20on%20the%20representation%20of%20cloud%20processes.pdf

    • stevefitzpatrick

      He does indeed seem pretty secure in his job: http://www.mpimet.mpg.de/fileadmin/staff/stevensbjorn/Documents/bjorn_cv_letter_zentriert_140225.pdf
      Started as an electrical engineer….PhD adviser: William R. Cotton (sort-of-skeptic, and friend of Roger Pielke Sr). ‘splains everything.

      • I was wondering if his office was next door to Marotzke’s but I see they are in different sections, Stevens in atmosphere, Marotzke in oceans. Still I can feel the tension.

    • stevefitzpatrick

      Here is a very interesting presentation on aerosols and clouds where Stevens seems to suggest that aerosol influences on clouds, while very complex, may be overstated in GCM’s: https://www.asp.ucar.edu/thompson/2010/NCAR-ASP_TLS2_BjornStevens.pdf

  5. Good post.

My question is slightly off topic. “At 1.64°C, our estimate for ECS was below all those exhibited by CMIP5 global climate models, and at 1.33°C for TCR, nearly all.”

    After a doubling of CO2 concentration, how much time would be required for global mean surface temperature to increase by 1.30°C? And how long to increase by 1.60°C?

    • Danny Thomas

      Matthew,

      Even further off topic. Re:”http://newscenter.lbl.gov/2015/02/25/co2-greenhouse-effect-increase/” I believe you were one who also asked why this study had ended in 2010, but wasn’t published till 2015. The comments on it were closed at WUWT, but just got this reply from the article’s author today: “Here is the response from Dan Feldman, lead scientist on this research:

      Good question. We need a lot of different datasets in order to do this analysis. Some of those datasets currently do not extend past the beginning of 2011. Specifically the dataset with which we screen for clouds (called Active Remote Sensing of Cloud Locations). However, that dataset should be updated and available soon and extend to the present, so we can extend the analysis forward in time. “

  6. So if Nic’s latest estimates are the best indication, we can stop worrying (if we were) about dangerous warming, and can all go home and relax. Problem solved.

    Faustino

Aerosols don’t have anything to do with the ”global” temp! The confusion was created when there was lots of volcanic activity – they were expecting the temp to rise because volcanoes + hot vents produce lots of CO2 + WV + SO2, PLUS lots of heat from the molten lava, BUT warming wasn’t happening, because of the Earth’s Temperature Self Adjusting Mechanism (ETSAM) – so they confused themselves that ”probably it’s got to do something with aerosol gases” -> another confusion for the fundamentalists from both camps on the net was created…

Another inconvenient paper. Mean forcing in CMIP5 is about 2½ times Stevens’s best estimate. Now how are those CMIP5 models going to offset those greenhouse gas and other positive forcings as required without highly negative aerosol forcings?

  9. aneipris, I’m afraid you’re sort of missing the point…

    • I doubt anyone is “missing the point.” It’s just tiresome to have Josh muck up every post with this nonsense.

      • I was responding to stevefitzpatrick’s nonsense, circular-reasoning w/r/t Stevens’ job security.

  10. I can’t read the paper since it’s behind a pay-wall. So, I’m not sure what to say about it.

  11. Bjorn Stevens of the Max Planck Institute for Meteorology in Hamburg, who works on a model called ECHAM, emphasizes how crucial it is for climate simulations to get what is known about clouds right. He recently found that ECHAM was representing clouds in an unrealistically crude way: Instead of allowing cloudiness to vary smoothly from 0 (perfectly clear) to 1 (overcast), the value was forced to occupy one of the extremes. When Stevens and colleagues changed their computer code to allow fractional cloudiness, the model’s prediction for future temperature rise doubled. …

    • From that article (my emphasis):
      In January 2014, scientists analyzed how climate models simulate convection and found that many simulations get the process wrong. As a result, the team reported in Nature, these simulations produce too many low, sunlight-reflecting clouds. Models that get convection right predict, on average, substantially more warming over the next century. The study authors, who include Bony, concluded that doubling carbon dioxide should raise temperatures by 3 to 4.5 degrees, the upper half of the IPCC’s current range.

      But not all evidence points in that direction. Since 1998, Earth’s surface temperature has remained roughly constant, a substantial shift after three decades of rapid warming (SN: 10/5/13, p. 14). If the climate were really as sensitive to greenhouse gases as Bony and her colleagues think, warming should have continued apace, or even accelerated. Studies of past climate changes also hint that greenhouse gases may have less impact on global temperature than many models predict. Reconciling this evidence with scientists’ latest findings on clouds is one of the main challenges facing the field today.

      Aerosols could also play a joker in the climate game. In pre-industrial times, clouds nucleated around natural aerosols like salt from sea spray, volcanic sulfates and desert dust. These days, however, human-caused emissions from power plants, factory chimneys and wood stoves supplement the natural aerosol load. With more particles in the air, cloud droplets become smaller and more numerous, and therefore reflect more sunlight.

      • So we need more particulates? Burn more coal. :)

      • Watching the pause happen, and watching it unhappen.

        This is clearly not true:

        Since 1998, Earth’s surface temperature has remained roughly constant, a substantial shift after three decades of rapid warming (SN: 10/5/13, p. 14). If the climate were really as sensitive to greenhouse gases as Bony and her colleagues think, warming should have continued apace, or even accelerated. …

Warming to 2006 was, on a 15-year trend, 0.26 C per decade. Stout. The pause did not happen after 1998; it happened after 2006.
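For anyone unfamiliar with the phrasing, a minimal sketch of what a “15-year trend, in C per decade” means computationally: an ordinary least-squares slope over the window, scaled to decades. The series below is synthetic (an exact 0.026 C/yr ramp I made up for illustration), not observational data:

```python
# Hedged sketch: computing an N-year trend in C per decade as the
# ordinary least-squares slope of anomaly vs. year, scaled by 10.
# Data are synthetic (exact 0.026 C/yr ramp), for illustration only.
years = list(range(1992, 2007))                # 15-year window ending 2006
anoms = [0.026 * (y - 1992) for y in years]    # made-up anomalies, C

xbar = sum(years) / len(years)
ybar = sum(anoms) / len(anoms)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, anoms)) \
        / sum((x - xbar) ** 2 for x in years)  # OLS slope, C per year

trend_per_decade = 10 * slope                  # C per decade
print(f"{trend_per_decade:.2f} C/decade")      # 0.26 C/decade
```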

Even Marotzke and Forster saw that the volcanoes threw the models into a fit. In 1816, after Tambora erupted, they had the “year without a summer,” but the models were making it a decade of reaction. http://en.wikipedia.org/wiki/Year_Without_a_Summer

    • Danny Thomas

      JCH,

      Further down in the article:”But not all evidence points in that direction. Since 1998, Earth’s surface temperature has remained roughly constant, a substantial shift after three decades of rapid warming (SN: 10/5/13, p. 14). If the climate were really as sensitive to greenhouse gases as Bony and her colleagues think, warming should have continued apace, or even accelerated. Studies of past climate changes also hint that greenhouse gases may have less impact on global temperature than many models predict. Reconciling this evidence with scientists’ latest findings on clouds is one of the main challenges facing the field today.

      Aerosols could also play a joker in the climate game”

      The impression left with your offering is: “the model’s prediction for future temperature rise doubled. ”

My impression after reading the entire article is “uncertainty”.

      • My impression is observations have turned dramatically, and it looks like they will continue to do so. The pause has fooled a lot of smart people, but not Bjorn Stevens.

      • Danny Thomas

        JCH,
        “My impression is observations have turned dramatically, and it looks like they will continue to do so.” Regionally, in some cases I can agree. But some very large regions not so much. Climate changes, the problem is we don’t know (well) how to project it due to uncertainty.

  12. stevenreincarnated

So at what point does the change in forcing from aerosols become so minute that it isn’t noticeable at the regional level, much less the global level? That’s when I’ll figure they are getting close.

  13. Why does no one wish to consider the “top down” near IR radiative forcing? It’s all about water with just about a smidgeon of CO2 in a few bands. If you are going to evaluate transient sensitivity of any component in terms of net effect, surely near IR top down forcing must be part of the net as by any rational standard it accounts for half of the greenhouse effect.

    Alternately, you can divide your net effect by two and proceed with your analysis.

    The entire solar/earth spectrum makes a really ugly graphic as it is so asymmetrical in intensity v wavelength, but here it is anyway.

    Saturation matters.

    • The top chart says a lot about the dominance of water.

Yes, but they don’t allow water as a forcing, it is “feedback only”. They base this preposterous misconception on the rapid cycling of water through the atmosphere. Photons simply do not care whether the water molecule they excite has been in the atmosphere 3 nanoseconds or 30 years. It’s not like the molecule will fall out as rain before it re-radiates the photon. The vast majority of the bottom-up portion of the greenhouse effect takes place between the first few millimeters of the ocean surface and the first few meters of the atmosphere, where a cold plasma of energy amounting to one TSI is constantly cycling. Hard surfaces on land (ice, rock, soil, and human constructions) behave similarly but with vastly higher diurnal amplitude. Soft surfaces on land (plants) get complicated.

        Nic thinks, “However, I think the ratio of CO2 to total GHG forcing is more like 65% now.”

        Wow, I used to think the modelers could quickly fix the models. Now it appears they are so hidebound by Ptolemaic foolishness, it will take a new generation to make any progress.

      • Hopefully, Nic was just commenting within the context of the waterless GHG diagram. He isn’t exactly a stupid person.

At any rate, not much CO2 in the lower atmosphere has a chance to radiate due to very frequent collisions – same with water. So any IR absorbed by CO2 or water will be converted quickly to heat, and an increase in any GHG will tend to heat the atmosphere more than before.

        What happens next is probably the convection machine ramps up to a faster pace and dumps the extra heat to space faster than before.

I do agree that the residence time of water vapor in the atmosphere is meaningless WRT the “greenhouse effect.” Water vapor is around 40,000 ppm as compared to CO2 at 400 ppm. That means on average CO2 represents only about 1% of the composition of greenhouse gases in the atmosphere. Moving it up to 2% isn’t going to matter one whit.
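The ppm arithmetic above is easy to verify; here is a minimal sketch using the commenter’s round numbers. Caveat: a share of molecules is not a share of radiative forcing, which grows roughly logarithmically with concentration, so these percentages say nothing about warming by themselves.

```python
# Composition-share check, using the comment's round ppm numbers.
# Note: molecular share != forcing share (CO2 forcing is roughly
# logarithmic in concentration).
h2o_ppm = 40_000   # commenter's round number for water vapor
co2_ppm = 400      # commenter's round number for CO2

share_now = co2_ppm / (co2_ppm + h2o_ppm)              # fraction of GHG molecules
share_doubled = 2 * co2_ppm / (2 * co2_ppm + h2o_ppm)  # after doubling CO2
print(f"{share_now:.1%} -> {share_doubled:.1%}")       # 1.0% -> 2.0%
```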

      • Jim2
“At any rate, not much CO2 in the lower atmosphere has a chance to radiate due to very frequent collisions – same with water. So any IR absorbed by CO2 or water will be converted quickly to heat, and an increase in any GHG will tend to heat the atmosphere more than before.”

        Sounds dangerously adiabatic. With you that far. Not sure of any physical reason why adiabatic kinetic interactions decrease photon emission.

A CO2 or H2O molecule, once excited, can release energy by two primary paths: 1. emission of a photon; 2. collision with another molecule, in which the internal excitation is converted to translational kinetic energy that both collision partners carry away.

        In the lower atmosphere, collisions happen so frequently that the molecules don’t have time to emit a photon, i.e. radiate, because the average time for the molecule to emit is much longer than the time between collisions.
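The timescale argument can be made quantitative with a back-of-envelope sketch. The two numbers below are my order-of-magnitude assumptions (roughly 1e-10 s between collisions near sea level, and a radiative lifetime of order 1 s for the CO2 15 µm band), not values taken from the thread:

```python
# Order-of-magnitude estimate of how often an excited CO2 molecule
# radiates before a collision de-excites it. Both timescales below
# are assumed, illustrative values.
t_coll = 1e-10   # s, ~mean time between collisions in the lower atmosphere
t_rad = 1.0      # s, ~radiative lifetime of the CO2 15 um bending mode

# For two competing first-order decay channels, the branching ratio
# into radiation is rate_rad / (rate_rad + rate_coll).
p_radiate = (1 / t_rad) / (1 / t_rad + 1 / t_coll)
print(f"fraction that radiates before colliding ~ {p_radiate:.0e}")
```

With these assumptions, only about one excited molecule in ten billion radiates before thermalizing, which is the sense in which absorbed IR is “converted quickly to heat.”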

  14. Geoff Sherrington

    What is the most current, strong reason for continuing to look at sensitivity?
    Has there been an advance of knowledge that links CO2 and GST? Which is the dependent variable?
    Is not the lesson from the last 15 years that there is no apparent, strong link?
    What more evidence is needed to drop the concept that natural changes and man-made changes are in some significant form of coexistence?
    Geoff

    • Steven Mosher

      What is the most current, strong reason for continuing to look at sensitivity?
      1. It’s an important metric for the total system.
      2. It’s a challenge to pin down.
      3. puzzles are cool.

      Has there been an advance of knowledge that links CO2 and GST? Which is the dependent variable?
      1. false choice silly.
      Is not the lesson from the last 15 years that there is no apparent, strong link?
      1. nope.
      What more evidence is needed to drop the concept that natural changes and man-made changes are in some significant form of coexistence?
      1. an extension of the pause out beyond 25 years.

      making arguments in the form of questions is asking others to do the work for you. you can ask questions but only those you are dedicated to working on yourself.

      • Danny Thomas

        Steven,

May I question your reason for the statement “an extension of the pause out beyond 25 years” as to the significance of that time period? There have been modifications of the impression of its significance from 15, to 17, to 20 and now your offering of 25, and I’d appreciate your perspective. In part, the reason for asking is we often hear that “trends” are important, and yet the trend is that temps are not rising substantially for a relatively long (but maybe not climate-scale) time frame.

      • Steven Mosher

        “May I question your reason for this statement:”an extension of the pause out beyond 25 years.” as to the significance of that time period?
        1. see below

There have been modifications of the impression of its significance from 15, to 17, to 20 and now your offering of 25 and I’d appreciate your perspective.

        2. When Santer first proposed 17 years, I didn’t agree. And said more like 25 years or possibly more. I’m not responsible for other people’s ideas.

        ########################################
        In part, the reason for asking is we often hear that “trends” are important and yet the trend is that temps are not rising substantially for a relatively long (but maybe not climate scale) time frame.

        1. Trends are important.
        2. the trend in the estimate of GMST is just one trend.
        3. The trend in GMST is dependent on the time scale you look at.
4. A relevant time scale is hard to pin down and depends upon your assumptions about the existence of natural “quasi-periodic” cycles.

Over the next 5 years solar forcing should decrease by something on the order of 0.1 W/m2. It will be an interesting time, as we will be approaching the 25-year window.

        people keep on thinking in terms of some piece of data that will make AGW go away as a theory. That won’t happen. What happens is that theory gets modified to include the observation. There is no final nail in the coffin.

        In a sense people are confused by the notion of scientific test that Popper suggested. His view of things is highly idealized, in short a fairy tale.

        http://en.wikipedia.org/wiki/Duhem%E2%80%93Quine_thesis

      • Danny Thomas

        Steven,
Thank you for that. Re: “people keep on thinking in terms of some piece of data that will make AGW go away as a theory” – I personally have no expectation of this, but only that further discovery will lead to better understanding. Good old mother nature is gonna do what she’s gonna do in response to (or in spite of) our contribution, as she doesn’t care where warming comes from, just that it does, and she’s gonna address it at least in part (and may already have). But I’ve got lots of learning yet to do.

      • stevefitzpatrick

        Steve Mosher,
        “people keep on thinking in terms of some piece of data that will make AGW go away as a theory. That won’t happen. What happens is that theory gets modified to include the observation. There is no final nail in the coffin.”

        Of course there is no final nail in the coffin; GHG driven warming is fundamentally true, and rising GHG concentrations in the atmosphere must raise surface temperatures, all else being equal. But I think it is important to draw a distinction between the legitimate scientific arguments (of which there are many) and the consequent policy arguments. I suspect most who participate here are primarily motivated by concern for how the scientific issues impact policy choices. If analyses like Nic has been doing accurately reflect reality, and if the true aerosol influences are substantially lower than has been assumed (the Stevens paper, and arguments by some aerosol specialists) then justifying immediate reductions in fossil fuel use, independent of cost, as some argue for, becomes nearly impossible. So in a sense, the science could well generate nails which will close the coffins of certain policy responses to GHG driven warming, not AGW. If Nic’s TCR and ECS probability curves above reflect reality, then I doubt the voting public will accept draconian policy options.

  15. Nic,

    Do you think you could add the AR5 values (or an avg.) to Fig. 1 and 2 above
    for comparison, perhaps as a dotted line?

    • The AR5 ranges are given in the table. AR5 doesn’t give best estimates, but the CMIP5 models used for the RCP8.5 scenario simulations have a mean ECS of 3.4 C and a mean TCR of 1.9 C, way above the 95% upper uncertainty bounds of my revised-aerosol forcing estimates, of 2.2 C for ECS and 1.65 C for TCR.

  16. David Springer

    @Nic Lewis

    Just to clarify, climate sensitivity is the global average surface temperature response to a doubling of all greenhouse gases not just CO2, right?

So the effect of CO2 emissions from fossil fuel consumption is only about half of the calculated ECS/TCR response. In other words, if all CO2 emissions from fossil fuel consumption were halted, it would only reduce anthropogenic global warming by half.

    I think this is a key point that doesn’t get enough exposure. Political and ideological agendas advocate reducing fossil fuel consumption but in point of fact that’s only half the battle and there’s hardly any mention of the other half.

    • David,

      Climate sensitivity is defined as the GMST response to a doubling of CO2, but as you say the actual response is to all greenhouse gases (GHGs). (Ozone is usually separated out from other GHG as it is too short-lived to be well-mixed, and the sensitivity of GMST to ozone forcing may not be quite the same as to other GHGs.)

      However, I think the ratio of CO2 to total GHG forcing is more like 65% now.

      So, your point is fair but not as strong as your diagram suggests.

    • Steven Mosher

      actually
      “Just to clarify, climate sensitivity is the global average surface temperature response to a doubling of all greenhouse gases not just CO2, right?”

      Climate sensitivity is the response to ALL forcing.

It’s the change in temperature per change in forcing; it doesn’t matter where the extra watts came from.

So if everything else were held constant and the sun increased, you could get an estimate of sensitivity.

      • David Springer

        Wow. Mosher doesn’t know the definition of climate sensitivity. Nic Lewis does. So do I.

        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6.html

        “Climate sensitivity is a metric used to characterise the response of the global climate system to a given forcing. It is broadly defined as the equilibrium global mean surface temperature change following a doubling of atmospheric CO2 concentration”

        Read more and comment less, Mosher. Or maybe go get a science degree of some sort.

      • Springer, I think Mosher hasn’t figured out the two greenhouse effects yet. Because of that ALL forcings are not created equally.

      • Steven Mosher

        WRONG Springer

        Climate sensitivity is the response to ALL forcing
        Climate sensitivity to C02 doubling is different.

        “Climate sensitivity is the equilibrium temperature change in response to changes of the radiative forcing.[2] Therefore climate sensitivity depends on the initial climate state, but potentially can be accurately inferred from precise palaeoclimate data. Slow climate feedbacks, especially changes of ice sheet size and atmospheric CO2, amplify the total Earth system sensitivity by an amount that depends on the time scale considered.[3]

        Although climate sensitivity is usually used in the context of radiative forcing by carbon dioxide (CO2), it is thought of as a general property of the climate system: the change in surface air temperature (ΔTs) following a unit change in radiative forcing (RF), and thus is expressed in units of °C/(W/m2). For this to be useful, the measure must be independent of the nature of the forcing (e.g. from greenhouse gases or solar variation); to first order this is indeed found to be so[citation needed].

        The climate sensitivity specifically due to CO2 is often expressed as the temperature change in °C associated with a doubling of the concentration of carbon dioxide in Earth’s atmosphere.”

Elsewhere Nic has used the standard definition: sensitivity is the response to ALL forcing.

        from his paper
“Climate sensitivity is a metric that is used to summarize the global surface temperature response to an externally imposed radiative forcing.”

        BUT.

“The term ‘equilibrium climate sensitivity’ (ECS) refers to the equilibrium change in surface temperature to a doubling of atmospheric CO2 concentration.”

        Most importantly IN HIS OWN CODE he shows that you consider ALL FORCING

        see the table of forcings

        AR5_TabAII1.2ForcFin.tab

        in short.

        Climate Sensitivity is the response to ALL forcing
        ECS is the response to doubling c02 OR the response to 3.71 Watts

      • Steven Mosher

        So, read Nic’s paper springer.
        Look at equation 1.
        ECS is Watts per doubling for c02 * climate sensitivity
        his code shows he considers all forcings.
        his text DISTINGUISHES between

        A) climate sensitivity.
        B) ECS

        ECS = watts for doubling c02 * climate sensitivity

        different but related things.

      • Steven Mosher

When you get up to speed, let me know.

        http://wattsupwiththat.com/2013/05/21/model-climate-sensitivity-calculated-directly-from-model-results/

        http://wattsupwiththat.com/2013/05/21/model-climate-sensitivity-calculated-directly-from-model-results/#comment-1312826

ECS = f2x*dT/(dF-dQ)

where
f2x = 3.71 watts/m2 (the forcing from doubling CO2)
dT = change in temperature
dF = change in ALL FORCING
dQ = change in the rate of ocean heat uptake

see section 3.1 of Nic’s paper.

        or see this paper
        https://niclewis.files.wordpress.com/2013/09/lewis2013_objective-bayesian_jcli-d-12-00473-1.pdf

hmm. read the acknowledgments. I’m pretty familiar with Nic’s work.
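Putting the quoted energy-budget relation into runnable form, a minimal sketch; the input values other than f2x are illustrative round numbers of my choosing, not Lewis & Curry’s published estimates:

```python
# Energy-budget sensitivity estimates, in the form quoted above:
#   ECS = F_2x * dT / (dF - dQ),   TCR = F_2x * dT / dF
# Inputs besides F_2x are illustrative, not the paper's values.
F_2x = 3.71   # W/m2, forcing from a doubling of CO2
dT = 0.74     # K, change in global mean surface temperature (assumed)
dF = 1.98     # W/m2, change in total (ALL) forcing (assumed)
dQ = 0.31     # W/m2, change in rate of ocean heat uptake (assumed)

ECS = F_2x * dT / (dF - dQ)   # heat uptake subtracted: equilibrium estimate
TCR = F_2x * dT / dF          # heat uptake ignored: transient response
print(f"ECS ~ {ECS:.2f} C, TCR ~ {TCR:.2f} C")
```

Because dQ is positive while the system is still taking up heat, ECS always comes out above TCR in this framework.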

      • David Springer

        Which part of IPCC definition “It is broadly defined as the equilibrium global mean surface temperature change following a doubling of atmospheric CO2 concentration” did you not understand Mosher? For an English major you sure seem to have a hard time with the English language.

        In the context of Lewis’s post here he already agreed with me too. You think you know more than him AND the IPCC consensus?

Maybe try another 1000 screech words split over five comments in a row. What’s the theory there – if you can’t make a point with quality, make it with quantity?

        ROFLMAO

“Steven Mosher May 22, 2013 at 8:09 am
with the OHC component you are calculating TCR.

write this down”

        TCR, Transient Climate Response

        ECS Equilibrium Climate “Sensitivity”

        Looks like the definitions were modified to reduce confusion.

      • David Springer

        Mosher is wrong says IPCC, NOAA, MIT, Lewis and Curry, Real Climate, Penn State, and American Geophysical Union.

        http://www.gfdl.noaa.gov/transient-and-equilibrium-climate-sensitivity

        “climate sensitivity, traditionally defined as the average warming at the Earth’s surface due to a doubling of the carbon dioxide from pre-industrial levels.”

        http://newsoffice.mit.edu/2010/explained-climate-sensitivity

        “Specifically, the term is defined as how much the average global surface temperature will increase if there is a doubling of greenhouse gases (expressed as carbon dioxide equivalents) in the air,”

        https://judithcurry.com/2014/09/24/lewis-and-curry-climate-sensitivity-uncertainty/

        “The equilibrium climate sensitivity (ECS) is defined as the change in global mean surface temperature at equilibrium that is caused by a doubling of the atmospheric CO2 concentration”

        http://www.realclimate.org/index.php/archives/2013/01/on-sensitivity-part-i/

        “What is climate sensitivity? Nominally it the response of the climate to a doubling of CO2”

        https://www.e-education.psu.edu/meteo469/node/219

        “The concept of climate sensitivity described in this report, sometimes called the “Charney Sensitivity”, envisions the equilibrium sensitivity of Earth’s climate to CO2 forcing as the equilibrium response of the climate system to a doubling of CO2 concentrations”

        http://blogs.agu.org/wildwildscience/2013/03/20/how-much-will-the-planet-warm-if-we-double-co2/

        “it is usually defined as the amount of global surface warming that will occur when atmospheric CO2 concentrations double”

      • Can I try too?
        Climate Sensitivity is the response to an attack on one’s position on the Climate.
        TCR is a Transient attack Response, it didn’t really hurt.
ECS is Equilibrium Climate Sensitivity, each person attacks everyone else’s view with vigor.
        ECS seems to be all the go here.

      • “Climate Sensitivity” is a myth.

        Why? Many reasons, among them the fact that it involves predicting the response of “Global Average Temperature”, which also is a myth.

Another reason: to the extent it exists at all outside the imaginations of people who don’t understand how hyper-complex inter-coupled non-linear systems work, the assumption that it’s a single number that doesn’t change with boundary conditions is totally unwarranted.

  17. stevefitzpatrick

    Ken Rice,
Seems pretty clear to me that any narrowing of the uncertainty range for aerosol forcing is an unmitigated good, since it both acts as a constraint on GCMs (narrowing the range of plausible parameterizations) and narrows the range of plausible sensitivity values from energy-balance-based estimates. ARGO has set firm constraints on the plausible ocean heat uptake rate; similar measurement-based constraints on aerosols will make all efforts to estimate climate sensitivity (by any means) more accurate and, more importantly, more certain.

The most interesting thing to me is that climate adjusts to changes in ways that tend to limit the imbalance. If some portion of the imbalance is related to much longer-term responses, then the “sensitivity” to a forcing change is likely to change as the long-term response changes.

That is a rather wicked non-linear issue; beginning to figure out whether it can be sorted out requires some less creative paleo that focuses on oceans instead of the atmosphere. Some things can only be “solved” to a point, and if that point is ±0.5 C or greater you are stuck with a real-world uncertainty range that cannot be reduced no matter how creative the statistics are.

    As long as there is that dQ associated with the oceans involved, ocean paleo should be in the front row.

  19. There is a workshop on climate sensitivity next week, March 23–27, in Germany. Speakers include Nic Lewis, Lennart Bengtsson, James Annan and Bjorn Stevens.
    http://www.mpimet.mpg.de/en/science/the-atmosphere-in-the-earth-system/ringberg-workshop/ringberg-2014.html
    HT @ed_hawkins

    The title of Stevens’s talk is
    “Some (not yet entirely convincing) reasons why 2.0 < ECS < 3.5”
    which suggests he does not agree with Nic's argument.

    • Interesting link:

      Conveners:
      “Bjorn Stevens,
      Ayako Abe-Ouchi, Sandrine Bony, Gabi Hegerl, Gavin Schmidt, Steve Sherwood and Mark Webb”

      So not only is Stevens not afeared for his job security, he’s deeply embedded within the center of the “climate science community.”

      Notice also that L & C 2014 is a part of the recommended literature.

      • Steven Mosher

        haha..

        you know what I’ve told skeptics.

        There is a debate in science. Do what Nic Lewis has done and you will be invited. He doesn’t just question. He doesn’t just doubt. He actually DOES WORK.

        Nobody objects to a scientist who asks tough questions and then proposes a better/different answer.

      • Danny Thomas

        Joshua,

        Curious about your definition of “activist” and whether there are those on the AGW side who might fit the bill? It seems that a label is being applied (name + activist = skepticism), yet I see nothing w/r/t the quality or “merits of work”. If you choose to answer, can you offer criticism of the work? If not, no answer needed. Thanks,

      • Danny Thomas

        JIM 2,
        I agree and request my post of 12:29pm be deleted. I posted inappropriately and apologize to all.

      • Not even “doubling or tripling the amount of carbon dioxide” in the atmosphere will have much impact on our climate, according to professor Geoffrey G. Duffy: “water vapour and water condensed on particles as clouds dominate.” And cloud condensation cleans up the air, removing particles called aerosols from the atmosphere.

        In an annual global mean, about 80–90% of aerosol particles are removed from the atmosphere by in-cloud and below-cloud scavenging (wet deposition). The remaining particles are removed by various forms of dry deposition. (István Lagzi, et al., Atmospheric Chemistry, © 2013 Eötvös Loránd University)

      • That comment makes no sense in light of the elementary fact that a carbon dioxide molecule is not a “particle” in the sense used in that quote. Carbon dioxide and methane, like nitrogen molecules and argon, would be in the class of well-mixed gases, unlike water vapor which, as your source indicates, has a tendency to precipitate out if given the opportunity.

      • David Springer

        R. Penner

        Water vapor isn’t well mixed, but it is present everywhere in the troposphere in concentrations many times that of CO2. Water vapor accounts for about 50% of greenhouse warming and clouds (water droplets) for another 25%.

        In fact the story goes that so-called water vapor amplification even accounts for 66% of anthropogenic greenhouse warming. If not for water vapor amplification, which is mythical at this point, there would be little concern over manmade CO2, as a doubling barely causes 1C warming in a dry atmosphere, which is not cause for alarm. Only by water vapor amplification hypothetically turning 1C warming into 3C is there cause for alarm.

        Write that down.

      • David Springer

        R. Penner

        Required reading.

        http://www.drroyspencer.com/2014/09/water-vapor-feedback-and-the-global-warming-pause/

        Water Vapor Feedback and the Global Warming Pause
        September 10th, 2014 by Roy W. Spencer, Ph.D.
        Global warming is the predicted result of increasing atmospheric CO2 causing a very small (~1-2%) decrease in the rate at which the Earth cools to outer space through infrared radiation. And since the temperature change of anything is always the result of net gains and losses of energy, a decrease in energy lost leads to warming.

        The direct effect of that warming is only about 1 deg. C in the next 100 years, though (theoretically calculated, in response to an eventual doubling of CO2 late in this century). Climate models instead project 2 to 3 times as much warming as that, due to “positive feedbacks” in the climate system.

        But the Earth hasn’t warmed as much as expected by the global warming pundits and their positive feedbacks, especially in the tropics where deep moist convection dominates the atmosphere’s response to forcing.

        Why?

        We know that water vapor is the main atmospheric gas which reduces the Earth’s ability to radiatively cool in the infrared (IR). And, unlike CO2, water vapor varies tremendously due to a variety of processes.

        Increasing surface temperatures cause more evaporation which by itself increases the water vapor content of the atmosphere. Water vapor at low altitudes has indeed increased with warming, as I have shown here (over the oceans):

        So, the simple-minded assumption has been that warming caused by increasing CO2 would cause more water vapor, which will enhance the radiative warming. That’s called positive water vapor feedback, which roughly doubles the amount of warming from the CO2 increase alone in climate models.

        [Yes, I know that more water vapor evaporated from the surface cools the surface…that’s taken into account by the climate models, too.]

        But for many years I have advocated the view that water vapor feedback on the long time scales of climate change might not be positive. Clearly, something is causing the current “pause” in global warming. The three most likely causes of the pause (in my view, not prioritized) are: (1) increasing cloud reflection reducing the solar input, or (2) decreasing water vapor (and maybe cirrus clouds) in the upper troposphere increasing the infrared output, or (3) an increase in ocean mixing sequestering extra heat in the deep ocean. Or, some combination of the three. (I’m not a big fan of other theories, like more aerosol reflection of sunlight from dirty Chinese coal, or problems with the CO2 theory itself. Not that they are necessarily wrong.)

        Our 1997 BAMS paper (Spencer & Braswell, 1997) discussed the importance of middle and upper tropospheric vapor to the IR cooling rate of the Earth. I also blogged about water vapor feedback four years ago. Basically, the bottom line is that it’s the processes controlling upper tropospheric water vapor which have the biggest impact on the IR cooling rate of the Earth.

        As Spencer & Braswell (1997) showed, at the low relative humidities often seen in the upper troposphere (below, say, 30%) a tiny change in water vapor content has a huge effect on the infrared cooling rate of the Earth. So you can have large increases in lower tropospheric vapor, but a small decrease in upper tropospheric vapor can completely negate the resulting water vapor feedback.

        A recent paper which claims to have new satellite evidence of positive water vapor feedback uses highly uncertain infrared water vapor channel data (6.7 microns) which has unknown long-term instrument stability, and unknown diurnal drift effects (issues which we have spent 20 years on with the microwave temperature sounders), and unknown cloud contamination effects.

        The important thing to understand is this: the largest control of water vapor feedback is the efficiency of precipitation systems, which controls how much water vapor is detrained into the upper troposphere. This process is what controls the humidity of the atmosphere on a clear day…that clear air is being forced to sink by rising air in precipitation systems, and its humidity (and thus its influence on the IR cooling rate of the clear air to space) can also be traced back to microphysical processes in precipitation systems. Clear air might seem boring, but it has a huge influence on the Earth’s temperature, through its humidity controlling the rate at which the Earth cools to space.

        While climate models can be tuned to produce the average amount of water vapor in the upper troposphere reasonably realistically, we do not understand how precipitation efficiency changes with warming, and so the physics cannot currently be included in climate models for the purpose of predicting climate change.

        On the subject of this uncertainty, a 20-year-old paper by Renno, Emanuel, and Stone (1994) concluded:

        “The cumulus convection schemes currently in use in GCMs (general circulation models) bypass the microphysical processes by making arbitrary moistening assumptions. We suggest that they are inadequate for climate change studies.”

        That paper described from a theoretical point of view how high precipitation efficiency causes a cool and dry climate, while low precipitation efficiency causes a warm and moist climate.

        While I’m sure that convective parameterizations are better today than they were 20 years ago, they really can’t address something this complex. Even much more sophisticated cloud resolving models (CRMs) still make rather arbitrary assumptions regarding the conversion of cloud to precipitation. And that which isn’t converted to precipitation re-evaporates and then changes the humidity of clear air.

        It might well be that the limited radiosonde evidence we have of lower tropospheric moistening and upper tropospheric drying (e.g. Paltridge et al., 2009) is telling us that water vapor feedback is not positive, as is currently assumed in climate models. This is basically the reason why Miskolczi (2010) found a constant greenhouse effect…that the observed decrease in upper tropospheric humidity (which is controversial from an observational standpoint) just offset the warming caused by increasing CO2.

        None of the above regarding water vapor feedback is new, and even our 1997 paper examined issues Dick Lindzen was advocating at least a decade before us. I’m presenting it again to remind ourselves of how little we really know about climate change.

        And don’t even get me started on cloud feedback.

      • No mention of the absorption of solar near infrared by water vapour?

      • (in reply to David Springer)

      • David Springer said: ”Water vapor accounts for about 50% of greenhouse warming and clouds (water droplets) another 25%.”

        Springer, that couldn’t be further from the truth BUT that’s what the misleading propaganda would like you to believe! Because you are talking about the “climate” that comes from the IPCC… nothing to do with the “real” climate.

        For the “real” climate, compare the Sahara and Amazon basin climates at the same latitude – in the Sahara it is 10C to 45C in 12h – in Brazil it is 29C to 35C – if you calculate the temp for every minute in 24h, both places have the same temp BUT because propaganda only uses the “hottest minute in 24h”, the Sahara would be hotter, even though Brazil has the WV and clouds – in other words, you are double wrong! If you want to know what H2O does to the climate, study the two extremes, Brazil and the Sahara! Don’t let them pull you by the nose – climate is out there, in nature; what comes from the Met Office and IPCC has nothing to do with any climate. David, look in nature for the truth!!!

    • Paul Matthews: There is a workshop on climate sensitivity next week, March 23-27 in Germany.

      Thank you for the alert.

  20. Yet, if you take the CO2 and temperature change since 1950, you still get an effective transient sensitivity near 2 C per doubling. The aerosols have not decreased in this time, and the sun hasn’t increased, so it is all GHGs as far as the positive forcing goes.
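
    As a rough sketch of the arithmetic behind this claim (the ~0.75 C of warming and the 311 → 400 ppm CO2 rise since 1950 are assumed round numbers for illustration, not figures from any particular dataset):

```python
import math

# Effective transient sensitivity from observed warming and the CO2 rise,
# folding all non-CO2 forcings into the CO2 term (Jim D's framing).
# Assumed round numbers -- not taken from any specific dataset:
dT = 0.75          # deg C of warming since ~1950 (assumed)
co2_1950 = 311.0   # ppm, approximate 1950 concentration (assumed)
co2_now = 400.0    # ppm, approximate recent concentration (assumed)

doublings = math.log2(co2_now / co2_1950)  # fraction of a CO2 doubling
eff_sensitivity = dT / doublings           # deg C per doubling

print(round(doublings, 3))        # 0.363
print(round(eff_sensitivity, 1))  # 2.1
```

    Note this “effective” number attributes all positive forcing to CO2 and its co-emitted GHGs, which is exactly the framing disputed further down the thread; it is not the TCR as defined by the IPCC.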

    • you should add, “if 1950 to present is a relevant time frame.” You don’t know.

      • The good thing about this time-frame is that aerosols did not decrease, and may not have changed much in net effect, so it is a “cleaner” period to choose for positive forcing evaluations. For the pre-1950 period, aerosol forcings were changing fast and these changes were more uncertain. Post-1950, we have added 75% of the CO2 contributing to 2/3 of the total industrial period forcing. This is where we see the signal at its clearest.

      • JimD, ” Post-1950, we have added 75% of the CO2 contributing to 2/3 of the total industrial period forcing. This is where we see the signal at its clearest.”

        That signal is the clearest because that is the signal you are looking for. To get that signal tuned in just right you need to use “global” land and forget about amplifications that are inconsistent with CO2 equivalent forcing.

        The fact of the matter is that no one knows what “normal” aerosol forcing might be. If CO2 wasn’t the main theory, people would be looking at the precessional cycle and the SH imbalance versus the NH imbalance.

        You have a CO2 powered lamp post.

      • captd, that is the signal you and others appear to be trying to avoid by looking away. There it is for you to explain.

      • JimD, “captd, that is the signal you and others appear to be trying to avoid by looking away. There it is for you to explain.”

        That is pretty well explained by the zeroth law of thermo. With the higher latitude land troposphere having lower temperatures and lower specific heat capacity, (less water vapor and lower average pressure), you have an apples and oranges anomaly to energy comparison. If you are looking at this as an energy problem you would start with the tropical oceans. This is why “global” temperature anomaly really isn’t fit for purpose.

    • stevefitzpatrick

      You need to include all GHG forcing, not just CO2.

      • That is why I say effective. There are important proportionate forcing effects from all GHGs which can’t be ignored any more than aerosols can. In fact these far outweigh any aerosol effect, according to the Stevens paper. If we double CO2, we are also adding these other GHGs in the BAU scenario.

      • stevefitzpatrick

        Jim D,
        What you are saying seems in clear conflict with the best available data. AR5 diagram SPM.5 shows the best estimate for net human forcing (including aerosol effects) in 1950 as 0.57 watt/M^2, while in 2011, it is 2.29 watt/M^2. The difference (1.72 watt/M^2) can be looked at two ways to generate ECS and TCR estimates.

        If we want to estimate ECS, then we have to subtract the current rate of heat accumulation (near 0.6 watt/M^2, including ocean, ice melt, and land surface uptake), leaving 1.12 watts/M^2, if we assume zero heat uptake in 1950. Since the average surface temperature has increased somewhere near 0.7C since 1950 (http://woodfortrees.org/plot/gistemp/from:1930/mean:13/plot/hadcrut4gl/from:1930/mean:13/offset:0.03), an estimate of ECS is then: 0.7/1.12 = 0.625C/watt/M^2, or 2.32 per doubling of CO2. This estimate is based on the assumption that there was no heat uptake in the 1950 period, which seems unlikely considering that there was a continuous rise in sea level through the whole of the 20th century, indicating some combination of ocean heat uptake and melting of land supported ice, both of which would reduce the estimate of 2.32C per doubling if taken into account (that is, some of the 0.57 watt/M^2 in 1950 was probably being accumulated). So the true ECS value based on the IPCC AR5 best estimates of net forcing would be a bit lower than 2.32 C/doubling.

        If we want a first order estimate of TCR, then we just ignore the current heat uptake rate, and consider only change in forcing and change in temperature: 0.7/1.72 = 0.41 degree/watt/M^2, or 1.52C per doubling. That estimate of TCR is almost certainly a bit too high, because the increase in forcing from 1950 to 2011 (1.72 watts/M^2 over 61 years, or 0.0292 watt/M^2/year) is slower than a 1% increase per year in CO2 would give: ~0.052 watt/M^2/year. In other words, this first order estimate of TCR lies somewhere above the true TCR and somewhere below the equilibrium response.

        Any way you look at it, a TCR of 2C per doubling (over 70 years) is not supported by the best available data. If the aerosol influences are in fact lower than AR5’s best estimates (as in the Stevens paper), then both ECS and TCR estimates have to be lower than the above values.
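
        The two back-of-envelope estimates above can be reproduced in a few lines. This is only a sketch using the comment's approximate inputs (the 3.7 W/m^2 per doubling, 0.7C warming and 0.6 W/m^2 uptake are the commenter's round numbers, not authoritative data), so small rounding differences from the quoted 2.32 and 1.52 are expected:

```python
# Energy-budget ECS and TCR estimates using the comment's approximate inputs.
# All values below are the commenter's round numbers, not authoritative data.
F2X = 3.7          # W/m^2 forcing per CO2 doubling (standard value)
dF = 2.29 - 0.57   # W/m^2, AR5 net anthropogenic forcing change, 1950 -> 2011
dT = 0.7           # deg C surface warming over the same period (approx.)
dQ = 0.6           # W/m^2 current planetary heat uptake (approx.)

# ECS: subtract the heat still accumulating; assumes ~zero uptake in 1950.
ecs = F2X * dT / (dF - dQ)
# TCR (first order): ignore the heat uptake entirely.
tcr = F2X * dT / dF

print(round(ecs, 2))  # 2.31
print(round(tcr, 2))  # 1.51
```

        As the comment notes, any nonzero 1950 heat uptake would lower the ECS figure further, and the slower-than-1%-per-year forcing growth means the TCR figure is an upper-end first-order estimate.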

      • The temperature change expressed in terms of CO2 takes into account proportionate effects of other GHGs and aerosols from burning fossil fuels. It tells us that doubling CO2 gives us near 2 C. This kind of sensitivity is exactly the quantity the policymakers would need. How much warming for how much more emissions. This rate not only fits the last 60 years, but you can take a plot of log CO2 against temperature from preindustrial times and also fit 2 C per doubling to it as done by Lovejoy, for example. This kind of plot doesn’t pick beginning and end years like Lewis and Curry do, but uses the whole record.

      • Jim, if you truly want to use the whole record, why stop at 1860? There are good proxy data and Central England records going back a few hundred more years along a CO2 baseline where, of course, temperature is anything but baseline.

      • Danny Thomas

        Was wondering the same thing w/r/t ending in 2004 and not 2015.

      • Yes, the point of Lovejoy’s paper was that previous centennial scale global variations had a standard deviation near 0.2 C about the preindustrial value, while the current perturbation is 4 standard deviations. This is a post by Lovejoy with a slightly updated version of his graph.
        http://www.cambridgeblog.org/2014/04/is-global-warming-just-a-giant-natural-fluctuation/

      • Danny Thomas

        JIM D,
        From your offering: “This is justified by the tight relation between global economic activity and the emission of aerosols (particulate pollution) and Greenhouse gases. Most notably, this allows the new approach to implicitly include the cooling effects of aerosols that are still poorly quantified in GCMs.”
        Isn’t the topic of this Nic Lewis post an argument counter to this: “Indeed, by bypassing any use of Global Circulation Models (huge computer models), the new study was able to predict the effective sensitivity of the climate to a doubling of CO2 to be: 2.5 – 4.2 oC (with 95% confidence) which is significantly more precise than the IPCC’s GCM based climate sensitivity of 1.5 – 4.5 oC (“high confidence”) an estimate that – in spite of vast improvements in computers, algorithms and models – hasn’t changed since 1979.” (This from 2014)
        Also: “While students of statistics know that the statistical rejection of a hypothesis cannot be used to conclude the truth of any specific alternative, nevertheless – in many cases including this one – the rejection of one greatly enhances the credibility of the other.”

        Yet from today (above): “In this context, what is IMO a compelling new paper by Bjorn Stevens estimating aerosol forcing using multiple physically-based, observationally-constrained approaches is a game changer”
        Followed by this quote from Dr. Curry: “The integral of all the feedbacks, after fixing incorrect aerosol forcing, may be surprising (all this is not linear), and it may be pretty much zero.”

        Ah. Climate change conversation. Good thing everything is clear.

      • Danny Thomas, well, it is good that these skeptics of models now think that a paper aimed at helping models is a game changer. Note that Stevens did not say that this affects climate sensitivities or the physics that they should use in the models, only the kinds of forcing that they should be putting into the models. Most models did not warm enough from 1920-1950, so he found a way to improve that and the skeptics are applauding. Perhaps now by tuning the recent aerosols in a different way we can explain the pause in the same way as Stevens did here and make the models fit even better, yes?

      • Danny Thomas

        JIM D,
        I don’t perceive the “skeptics’” view as being that modeling is not an important tool. Instead, I perceive the issue as the ineffective nature of the GCMs specifically, in their current state, being used to proffer “draconian” measures to address specific emissions.
        Did I misunderstand Dr. Curry’s comment: “The integral of all the feedbacks, after fixing incorrect aerosol forcing, may be surprising (all this is not linear), and it may be pretty much zero.”? To me this indicates that if aerosol forcing indeed turns out to be “pretty much zero”, this will be a strong indicator that the 1920–50 warming (or the consequent cooling) to which you refer will more likely be considered “natural”, indicating less sensitivity to CO2. Then, by extension, under the same understanding, much of the “current” warming may also be natural, as aerosols were factored in, yes (or at least more than what is being suggested by the IPCC)? My impression is that “skeptics” are applauding that this development might lead to a more open-eyed consideration of alternative explanations. Did I miss something?
        And as far as “applauding”, it’s kind of interesting that this has been a relatively quiet discussion, indicating either that digestion of the ramifications is occurring, that this is something of a surprise, or something else I’ve not yet fathomed. It’s quite clearly counter to the offerings you’re putting forth w/r/t Lovejoy (I haven’t found that paper yet). Open to further education.

      • You have to remember that what Stevens has is a heuristic model relating aerosol forcing to SO2 emissions. It is not physically based and relies on a functional fit with empirical constants. You take it for what it is. We are left with 0.8 C of warming, half of which is explainable from CO2 with no feedback; the other half, which occurs coincidentally at the same time, most would regard as a feedback connected to the first half, along the lines expected. Others want to resist the most obvious explanation.

      • Danny Thomas

        JIM D,
        “We are left with 0.8 C of warming, half of which is explainable from CO2 with no feedback, and the other half that occurs coincidentally at the same time, most would regard as connected to the first half being a feedback along the lines expected. Others want to resist the most obvious explanation.”

        But up until this time it’s been “obviously CO2” (same time, same channel). Obviously aerosols at this sensitivity (now maybe not). Obviously solely man (or maybe nature in part). Obviously global warming (then CC). “You take it for what it is.” I take it as a step in the journey to discovery and, as stated previously, think we should be evaluating all things all the time, as there is (apparently) no single “smoking gun” factor but an interaction of many. Am I wrong?

      • The CO2 remains the dominant term in the forcing. IPCC estimates put it at about twice the aerosol negative forcing, and now Stevens would put it at more than three times the aerosol forcing, making it an even more dominant component. I am not sure why the skeptics are excited about this.
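
        A quick sketch of those ratios, using ~1.82 W/m^2 for AR5’s 1750–2011 CO2 forcing together with the aerosol best estimates cited in this thread (all three inputs are approximate figures, not exact values from either source):

```python
# Ratio of CO2 forcing to the aerosol forcing magnitude under the two estimates.
# Inputs are approximate figures as cited in this thread (assumed):
f_co2 = 1.82          # W/m^2, AR5 CO2 forcing, 1750 -> 2011
f_aer_ar5 = -0.9      # W/m^2, AR5 best-estimate aerosol forcing
f_aer_stevens = -0.5  # W/m^2, Stevens' inferred best estimate

print(round(f_co2 / abs(f_aer_ar5), 1))      # 2.0
print(round(f_co2 / abs(f_aer_stevens), 1))  # 3.6
```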

      • Danny Thomas

        JIM D,
        Again, feel free to correct me if I’m wrong, but if the aerosol effect is effectively zero, then “aerosol negative forcing (IPCCx2, Stevens x3)” times zero = what?

        “Did I misunderstand Dr. Curry’s comment: ‘The integral of all the feedbacks, after fixing incorrect aerosol forcing, may be surprising (all this is not linear), and it may be pretty much zero.’”

      • I don’t know where Judith gets zero from. It is not in Stevens’ range of possibility.

      • stevefitzpatrick

        That plot does yield a value of ~2C per doubling, but it is not close to a transient sensitivity as defined by the IPCC, because it covers far more than 70 years and far less than a total increase in forcing of 3.7 Watts/M^2. When you choose to ignore all the other forcings (land use, halocarbons, nitrous oxide, methane, ozone, aerosols), then you are defining a different sensitivity than everyone else uses…. we might call it JimD’s sensitivity… and then go on to suggest it is informative. It isn’t. Future changes in all those different forcings almost certainly will not track their past relative growth rates; halocarbon forcing grew rapidly over most of the 20th century, but is currently falling and is projected to continue falling over the next century. Methane grew rapidly for much of the 20th century, but is currently growing much more slowly.

        The IPCC and others have adopted a few standard ways to describe climate sensitivity, and those have little to do with JimD’s sensitivity. You are mostly adding noise by talking about climate sensitivity in terms that no one else uses. Worse, your personal sensitivity value does not even allow rational prediction of future warming, because it is not based on net forcing at all.

      • I would not claim this as my sensitivity. Shaun Lovejoy suggested it, but even previously we have seen Vaughan Pratt here, and Bengtsson and Schwartz, use these methods to obtain such combination sensitivities, where CO2 is the dominant component, perhaps accounting for 80% of the full forcing change in the period. Given the fit of Lovejoy’s line to 150 years of data, I would say it is better to use this for projecting the future than selected subsets of this trend, just as a central estimate with a plus or minus range.

  21. stevefitzpatrick

    Nick Lewis,

    Thanks for an interesting post, and for providing the link to the Stevens paper.

    I have one small suggestion: the differences between TCR and ECS would be clearer if the two probability distributions used the same x-axis scale width (eg 0C to 4C). Perhaps better would be a combined graphic showing both, using the same color codes for each curve, but heavy lines for ECS and thin lines for TCR.

  22. This data does not support aerosol forcing as a major contribution to the climate warming pause. Just another lame excuse.

    • stevefitzpatrick

      Salvatore,
      That graphic is only stratospheric aerosols, which are almost 100% from volcanic eruptions. The relevant NASA graphic for man made aerosols is at: http://data.giss.nasa.gov/modelforce

      Which is not to say I agree with the NASA aerosol estimates (they are far higher than even the IPCC AR5 best estimates, and IMO, grossly overstated), but you ought not be looking at stratospheric aerosols when talking about human aerosol influences.

  23. In fact the data shows that from 1990 to present, aerosol optical depth has been on a steady decline, having no impact on global temperatures.

    • stevefitzpatrick

      Jim D,
      The next time you write something like “effective transient sensitivity near 2 C per doubling” I will understand that it has nothing whatever to do with the TCR or ECS values that are commonly used in climate science, (and what was discussed in Nic’s post!) and not waste time responding.

  24. Some history/sociology of climate science for context. In the 1990 IPCC FAR, it was concluded that clouds were one of the main sources of uncertainty, along with oceans and polar ice sheets.

    In 2003, the U.S. Climate Change Science Program highlighted aerosols as a primary challenge.

    Internally to the climate field, this was sort of a palace coup by the aerosol folks (largely led by Jim Hansen who was pushing for a satellite to measure aerosols).

    The funding then went more to those working on cloud microphysics and aerosol/cloud interaction, rather than cloud dynamics. See my previous post on the current status of cloud dynamics (it’s snoozing) http://www.mpimet.mpg.de/en/science/the-atmosphere-in-the-earth-system/ringberg-workshop/ringberg-2014.html

    Bjorn Stevens has his heritage mostly in cloud dynamics, but also in cloud microphysics, so he bridges both communities.

    We have made enormous progress in cloud-aerosol interactions and aerosol indirect process (for a summary see my new book with Khvorostyanov) https://judithcurry.com/2014/09/04/thermodynamics-kinetics-and-microphysics-of-clouds/

    Now it’s time to work on the much more difficult problem of cloud dynamics (this is at heart what Graeme Stephens’ recent paper is about, also).

    • stevefitzpatrick

      Judith,
      That shift in priorities seems consistent with an increasing focus on man-made (rather than natural) influences on climate starting around 2000. In hindsight, one could argue that it was putting the cart before the horse, since unperturbed sensitivity is so critical an issue. Of course the US$500+ million failure of the Glory launch makes the change in focus toward man-made aerosol effects even less productive than it might have been. At some point the model parameterizations of clouds will improve, but it will be a decade (or three!) late.

    • Danny Thomas

      Dr. Curry,

      I’d like to put in a request for a follow up to this offering after Ringberg?

  25. Good point Steve, and thanks for the charts.

  26. The point was made that the original paper was rejected by Nature. https://judithcurry.com/2015/03/19/implications-of-lower-aerosol-forcing-for-climate-sensitivity/#comment-685166 Over on climateaudit Nic said that “Some of the modelling groups will no doubt be loathe to accept Stevens’ findings.” For what reason? What are the main arguments that can be expected as to (a) why the paper is weak or unconvincing, or (b) why Nic’s additional analysis is not convincing?

    • Steven Mosher

      “Over on climateaudit Nic said that “Some of the modelling groups will no doubt be loathe to accept Stevens’ findings.” For what reason? What are the main arguments that can be expected as to (a) why the paper is weak or unconvincing, or (b) why Nic’s additional analysis is not convincing?”

      Go to climate audit. ask the person making the argument. Don’t make other people do the work for defending or explaining Nic’s speculation.
      It’s unreasonable, unfair, and generally annoying.

  27. Nic’s post and B. Stevens’ new paper provide an additional example of ‘consensus’ bias/momentum in AR5 WG1 chapter 7 (clouds and aerosols). Figure 7.19 and Table 7.4 both show that the CMIP5 models examined have net aerosol forcing more negative than −1 W/m2, while all the at least partly observational model/satellite methods are less negative than −1 W/m2. In fact, the two papers deriving estimates just using CERES/MODIS satellites got −0.67 and −0.45, closely bracketing Stevens’ new inferred best aerosol estimate of −0.5.
    Rather than question the CMIP5 models, despite the by then evident pause/hiatus that AR5 obscured elsewhere (essay Hiding the Hiatus), the lead authors went with an expert judgement that included the model range of values. In other words, even though the chapter clearly shows a model/observational aerosol discrepancy, nothing was made of it.
    Nor was any mention made of the Chylek et al. 2007 J. Geophys. Res. paper, which showed that if the observed aerosol forcing trend 2000–2006 was combined with the observed GHG forcing trend (mainly CO2), then climate sensitivity would be halved. Which is about what Nic calculated in Lewis and Curry, and recalculates more exactly here.
    Both the aerosol information and the sensitivity consequences were in plain view to AR5, but ignored.

    • stevefitzpatrick

      Rud,
      It is the history of the Millikan oil drop experiment writ very large. When a consensus is formed (Nobel Prize winner Millikan was right about the charge on the electron), it is very difficult to change that consensus, as Thomas Kuhn noted back in the 1960s. Eventually, the sensitivity range will change; and even AR5 shows a small movement downward. It just isn’t going to happen too quickly. Einstein went to his grave rejecting quantum mechanics; if Einstein could be so very resistant to change, can we expect better of aging climate scientists? I think not. Only gradually accumulating evidence, like what this thread has been covering, along with a host of retirements, will change the paradigm.

  28. Pingback: Aerosol forcing | …and Then There's Physics

  29. Climate science: bringing greater certainty to uncertainty till even greater uncertainty is quantified with even greater certainty.

    Stop now. Just stop it.

  30. David Springer

    Over at Climate Audit, Nic Lewis reports on the publication of a very important paper in Journal of Climate.
    http://bishophill.squarespace.com/blog/2015/3/19/climate-sensitivity-takes-another-tumble.html

    Bjorn Stevens has created a new estimate of the cooling effects of pollution (“aerosols”) on the climate. Readers will no doubt recall that, to the extent that aerosol cooling is small, the warming effect of carbon dioxide must also be small, so that the two cancel out to match the observed temperature record. Only if aerosol cooling is large can the effect of carbon dioxide be large.

    Stevens’ results suggest that the aerosol effect is even lower than the IPCC’s best estimates in AR5, which were themselves much lower than the numbers that were coming out of the climate models. He also suggests that the number is less uncertain than previously thought. This is therefore pretty important stuff.

    Stevens chose not to calculate the effect on climate sensitivity but, being a helpful chap, Nic Lewis has done so for us, plugging the new numbers into the equations he recently used to calculate a decidedly low estimate of climate sensitivity and transient climate response based on the AR5 estimates. The effects, particularly on the upper bounds, are startling:

    Compared with using the AR5 aerosol forcing estimates, the preferred ECS best estimate using an 1859–1882 base period reduces by 0.2°C to 1.45°C, with the TCR best estimate falling by 0.1°C to 1.21°C. More importantly, the upper 83% ECS bound comes down to 1.8°C and the 95% bound reduces dramatically – from 4.05°C to 2.2°C, below the ECS of all CMIP5 climate models except GISS-E2-R and inmcm4. Similarly, the upper 83% TCR bound falls to 1.45°C and the 95% bound is cut from 2.5°C to 1.65°C. Only a handful of CMIP5 models have TCRs below 1.65°C.

    Remember folks, the IPCC’s official upper bound is 4.5°C, but Stevens’ results bring the 83% upper bound down to 1.8°C and the 95% bound down to 2.2°C.

    Jim Hansen, Bob Ward, Kevin Trenberth, Michael Mann and Gavin Schmidt, your climate alarmism just took one helluva beating.
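    The arithmetic behind these shifts can be illustrated with a minimal energy-budget sketch. The figures below are rounded, assumed values for illustration only, not the actual Lewis & Curry inputs; the point is simply that making the aerosol forcing less negative raises the total forcing change ΔF, which sits in the denominator of both estimates.

```python
# Minimal energy-budget sketch of ECS and TCR (Gregory/Lewis & Curry style).
# All numbers are rounded, assumed values for illustration only.
F_2XCO2 = 3.71          # W/m2, forcing from a doubling of CO2
DT = 0.80               # K, warming between base and final periods (assumed)
DQ = 0.36               # W/m2, change in ocean heat uptake (assumed)
DF_NON_AEROSOL = 3.0    # W/m2, change in all non-aerosol forcings (assumed)

def ecs(aerosol_forcing):
    """Effective climate sensitivity given a total aerosol forcing (W/m2)."""
    dF = DF_NON_AEROSOL + aerosol_forcing
    return F_2XCO2 * DT / (dF - DQ)

def tcr(aerosol_forcing):
    """Transient climate response: same ratio, but without ocean heat uptake."""
    dF = DF_NON_AEROSOL + aerosol_forcing
    return F_2XCO2 * DT / dF

for f_aer in (-0.9, -0.5):  # AR5 best estimate vs. a Stevens-style estimate
    print(f"aerosol {f_aer:+.1f} W/m2 -> ECS {ecs(f_aer):.2f} K, "
          f"TCR {tcr(f_aer):.2f} K")
```

    With these assumed inputs, moving the aerosol forcing from −0.9 to −0.5 W/m2 lowers the ECS estimate by roughly 0.3 K and the TCR estimate by roughly 0.2 K, mirroring the direction (though not the exact magnitude) of the changes quoted above.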

  31. I just thought of another implication of lower aerosol forcing. Refer back to the new Mannian method of redefining the AMO and PDO:
    https://judithcurry.com/2015/03/05/2-new-papers-on-the-pause/
    https://judithcurry.com/2014/05/19/critique-of-manns-new-paper-characterizing-the-amo/
    https://judithcurry.com/2014/09/28/two-contrasting-views-of-multidecadal-climate-variability-in-the-20th-century/

    Mann’s method relies on using external forcing and climate model simulations to deduce the multidecadal internal variability. If the external forcing is way off (e.g. aerosols), then there will be corresponding errors in the deduced internal variability. Using lower aerosol forcing would bring Mann’s AMO back to be closer in line with the canonical AMO (e.g. what was used in the stadium wave).

    • I agree. I have thought from the start that the Mannian AMO approach in Steinman, Mann and Miller (2015) would fall apart if models with realistically-low total aerosol forcing were used. I was going to check this, but Mann hadn’t posted his data at that point.
      The 2014 Mann AMO paper is all smoke-and-mirrors, as I showed in my post at Climate Audit soon after it came out.

  32. Pingback: Sunday Jog Through The Climate Blogosphere | The Lukewarmer's Way

  33. I see that ATTP made several comments at Bishop Hill, pointing out that the results here are hard to square with the general results from long-term paleo studies (which seem to support an ECS of around 3). I believe James Annan made the same point in his debate with Nic Lewis. Anyhow, I’d like to see this addressed; I haven’t seen a clear post on the subject, neither here nor at Climate Audit. (I did once see Dr. Curry call those results “dubious”, but she didn’t explain why.) I understand that neither site specializes in long-term paleo, but I at least would love to see a clear presentation: if you think those results are dubious enough to conclude that Nic Lewis-type calculations are right, please explain why.

    • There are some good comments at aTTP’s blog. I think Annan is around 2.5.

    • MikeR,
      My point was intended to be a little subtler. If you have multiple methods for determining the same thing and one (or more) starts to diverge from the others, it becomes important to understand why. It seems clear that if one uses Bjorn Stevens’s results to update Nic Lewis’s method, it is now quite different to what other methods suggest. Understanding this discrepancy is important.

  34. While reading the actual Stevens paper, I noticed the following, which stimulated some thoughts (below):

    To the extent that changing patterns of emissions are important for the global forcing, it would be more appropriate to express Faer as a function of the source strength of the different patterns of emissions, something that comprehensive models are designed to do. Two of the three models (GFDL-AM3 and GISS-E2-R) analyzed by Shindell et al. (2013) for the period between 1980 and the present day indeed show that, starting in the 1990s, a multiple (rather than single) pattern based approach might be necessary to encapsulate the global forcing, as the rise of SO2 emissions in South- and East-Asia give rise to a forcing from aerosol cloud interactions that more than offsets the reduction in forcing caused by declining North American and European emissions. The response of these two models explains the scatter in the comprehensive modelling estimates at high sulfate burdens in Fig. 2 and is the basis of the claim by Shindell et al. (2013) that, despite a reduction in Qa, Faer becomes more negative over the past thirty years. However, the signal underlying this claim is very small compared to the uncertainties in the modelling, and is not robust – an equal number of studies show no change in forcing between 1980 and 2000, e.g., the blue points in Fig. 2, which are taken from Carslaw et al., as well as results from the CSIRO model, which was the third one analyzed by Shindell et al. A more recent study even shows that there is a strong decrease in the magnitude of Faer over the same period (Kuhn et al. 2014).

    The reference to shifts in opposing directions of aerosol production in South-East Asia and Europe/North America reminded me of a subject that, IMO, has received far too little attention: the role of the Himalayas/Tibetan Plateau complex in driving the evolution of the climate.

    For instance, increasing aerosol pollution appears to have strong interactions with convective activity associated with the summer mid/upper-level anticyclone over this area [Fadnavis et al. (2013)]. This means that a “rise of SO2 [and especially insoluble] emissions in South- and East-Asia” should not be seen as somehow “balancing” “declining North American and European emissions”.

    Even the assumption that their direct radiative effects will somehow “balance” is highly questionable, considering the different trajectories and differential access to the stratosphere via the Tropical Tropopause Layer (TTL) and the Tropical Easterly Jet (TEJ) [Fueglistaler et al. (2004)].

    Given (or, perhaps, despite) how little is known regarding cloud dynamics at all, much less the effect of aerosols of various types on the dynamics of convective systems, it seems very implausible that simple changes in load would have balancing effects, given the potentially very different convective systems they interact with. From Chen et al. (2012):

    The results show that (1) the dominant origin of the moisture supplied to the TP [Tibet Plateau] is a narrow tropical–subtropical band in the extended Arabian Sea covering a long distance from the Indian subcontinent to the Southern Hemisphere. […] (3) In contrast to the moisture origin confined in the low level, the origin and fate of whole column air mass over the TP is largely controlled by a strong high-level Asian anticyclone. The results show that the TP is a crossroad of air mass where air enters mainly from the northwest and northeast and continues in two separate streams: one goes southwestwards over the Indian Ocean and the other southeastwards through western North Pacific.

    This is important for several reasons: it’s quite possible that emissions from India have a different effect from those from China, given their much higher chance of influencing convection in this critical area. (Also, depending on the role of “whole column air mass” in carrying aerosols, there might be substantial differences among the effects of aerosols released in Southern, Northern, and Northwestern China.)

    In addition, any aerosols drawn into the convective system have a much greater chance of influencing the transport of water vapor into the Stratosphere [Fueglistaler et al. (2004)]:

    Our analysis emphasizes the importance of particular pathways for tropical TST, with the western Pacific being the dominant source of stratospheric air in general and being the place, in particular, where ~70% of tropical TST [troposphere-to-stratosphere transport] assumes its final water mixing ratio.

    Overall, the enormous number of unknowns, both known unknowns and unknown unknowns, involved in the effects of aerosol loads from these sources render the results of any modelling highly questionable.

    After all, “Global Average Temperature” is a very uninformative metric, not really related to the actual effects experienced by anybody. There could be any number of different result states, with different impacts on “humanity”, all with the same “Global Average Temperature”.

    References:

    Fueglistaler, S., H. Wernli, and T. Peter (2004): Tropical troposphere-to-stratosphere transport inferred from trajectory calculations. Journal of Geophysical Research: Atmospheres, 109(D3).

    Fadnavis, S., K. Semeniuk, L. Pozzoli, M. G. Schultz, S. D. Ghude, S. Das, and R. Kakatkar (2013): Transport of aerosols into the UTLS and their impact on the Asian monsoon region as seen in a global model simulation. Atmospheric Chemistry and Physics, 13(17), 8771–8786.

    Chen, B., X.-D. Xu, S. Yang, and W. Zhang (2012): On the origin and destination of atmospheric moisture and air mass over the Tibetan Plateau. Theoretical and Applied Climatology, 110(3), 423–435.

  35. Pingback: Weekly Climate and Energy News Roundup #173 | Watts Up With That?

  36. Dr. Strangelove

    “Stevens derives a lower limit for total aerosol forcing, from 1750 to recent years, of −1.0 W/m2.”

    Nic
    I doubt it. Mt. Pinatubo eruption emitted 20 million tons of SO2 and cooled global temperature by 0.4 C. The aerosol forcing of that one-time emission is more negative than -1.0 W/m^2. Anthropogenic SO2 emission is around 60 million tons EVERY year.

    “Temperatures have already started to drop, both at ground level and in the lower atmosphere, says James K. Angell of NOAA in Silver Spring, Md. Angell told Science News his analyses of weather balloon data show that the first half of 1992 was 0.4 C cooler, overall, than the first half of 1991. He notes that the volcano’s effect may be greater than suggested by these observed temperature shifts, since this year’s El Nino warming would normally raise average temperatures by 0.2 C”

    http://www.thefreelibrary.com/Mt.+Pinatubo's+cloud+shades+global+climate.-a012467057

      • The Mt. Pinatubo eruption injected aerosols into the stratosphere, where they remain for on the order of a year before falling to the surface. By contrast, anthropogenic aerosol emissions stay in the troposphere, where their lifetime is only a few days. So their cooling effect, for an equal amount of emissions, is a small fraction of the cooling from volcanic aerosols.
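        The residence-time point can be made quantitative with a rough steady-state burden comparison. The lifetimes below are assumed round numbers, and the sketch ignores indirect (cloud) effects and SO2-to-sulfate conversion; it only shows why a large annual emission rate need not imply a large standing aerosol load.

```python
# Rough steady-state burden comparison: burden ~ emission rate x residence time.
# Lifetimes are assumed round numbers for illustration.
TROPO_LIFETIME_YR = 5.0 / 365.0    # ~5 days for tropospheric aerosol (assumed)
STRATO_LIFETIME_YR = 1.0           # ~1 year for stratospheric aerosol (assumed)

anthropogenic_burden = 60.0 * TROPO_LIFETIME_YR   # 60 Mt SO2 emitted per year
pinatubo_burden = 20.0 * STRATO_LIFETIME_YR       # 20 Mt injected in one event

print(f"steady-state anthropogenic burden: {anthropogenic_burden:.2f} Mt")
print(f"first-year Pinatubo burden:        {pinatubo_burden:.2f} Mt")
```

        On these assumptions the standing anthropogenic burden is under 1 Mt, more than an order of magnitude below the first-year Pinatubo burden, despite the threefold larger annual emission.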

  37. Pingback: Battig: Climate Sensitivity – the Victimization Game | Jack's Newswatch

  38. Pingback: Climate sensitivity: Ringberg edition | Climate Etc.

  39. Pingback: Climate sensitivity: Ringberg edition | Enjeux énergies et environnement

  40. Pingback: Science Under Siege: Max Planck Institute Study Shows Climate Models Severely Overstate Warming