Earth’s Energy Imbalance

by Judith Curry

Jim Hansen has just posted his latest draft paper, entitled “Earth’s Energy Imbalance and Implications.”    This is quite a meaty paper, and will be of particular interest to those of you wondering “where’s the missing heat?”

Earth’s Energy Imbalance and Implications

James Hansen, Makiko Sato, Pushker Kharecha, Karina von Schuckmann

Abstract. Improving observations of ocean temperature confirm that Earth is absorbing more energy from the sun than it is radiating to space as heat, even during the recent solar minimum. This energy imbalance provides fundamental verification of the dominant role of the human-made greenhouse effect in driving global climate change. Observed surface temperature change and ocean heat gain constrain the net climate forcing and ocean mixing rates. We conclude that most climate models mix heat too efficiently into the deep ocean and as a result underestimate the negative forcing by human-made aerosols. Aerosol climate forcing today is inferred to be ‒1.6 ± 0.3 W/m2, implying substantial aerosol indirect climate forcing via cloud changes. Continued failure to quantify the specific origins of this large forcing is untenable, as knowledge of changing aerosol effects is needed to understand future climate change. A recent decrease in ocean heat uptake was caused by a delayed rebound effect from Mount Pinatubo aerosols and a deep prolonged solar minimum. Observed sea level rise during the Argo float era can readily be accounted for by thermal expansion of the ocean and ice melt, but the ascendency of ice melt leads us to anticipate a near-term acceleration in the rate of sea level rise.

The paper is lengthy, 47 pages of single-spaced text, with 22 figures.  The comprehensiveness of the paper is evidenced by the section headings:

1. Climate forcings
2. Climate sensitivity and feedbacks
2.1  Fast feedback climate sensitivity
2.2 Charney climate sensitivity and aerosols
2.3  Slow climate feedbacks
2.4  Climate sensitivity including slow feedbacks
3.  Climate response function
4.  Green’s function
5.  Alternative response functions
6.  Generality of slow response
7.  Implication of excessive ocean mixing
8.  Ambiguity between aerosols and ocean mixing
9.  Observed planetary energy imbalance
9.1  Non-ocean terms in planetary imbalance
9.2  Ocean terms in planetary imbalance
9.3  Summary of contributions to planetary imbalance
10.  Modeled versus observed energy imbalance
11.  Is there closure with observed sea level change?
12.  Why did planetary energy budget decline the past decade?
12.1  Greenhouse gas climate forcing
12.2  Solar irradiance forcing
12.3  Stratospheric aerosol forcing
12.4  Simulated surface temperature and energy imbalance
13.  Discussion
13.1  Human-made climate forcing versus solar variability
13.2  Climate response function
13.3  Aerosol climate forcing
13.4  Implications for climate stabilization
13.5  Implications for sea level
13.6  Implications for observations

JC comments:  I haven’t had time to digest all of this, but if there is a more comprehensive analysis of the Earth’s energy budget, I don’t know of it.

Moderation note:  this is a technical thread, comments will be moderated for relevance.

306 responses to “Earth’s Energy Imbalance”

  1. Richard Wakefield

    “…the ascendency of ice melt leads us to anticipate a near-term acceleration in the rate of sea level rise.”

    Which isn’t happening.

    http://wattsupwiththat.com/2011/04/17/doing-it-yourself-the-latest-global-sea-level-data-from-jason-shows-a-sharp-downtick-and-downtrend/

    What does this mean for his entire premise?

    And what about Schwartz’s paper on the degree of sensitivity and lack of understanding of aerosols contribution?

    http://www.ecd.bnl.gov/pubs/BNL-90903-2010-JA.pdf

    Oh, and there is the problem with the “warming”. Summer TMax isn’t increasing.

  2. One must comprehend before getting comprehensive.
    ==============

  3. This sort of thing:

    The correct answer defines the terms of humanity’s ‘Faustian aerosol bargain’ (Hansen and Lacis, 1990). Global warming has been limited, as aerosol cooling partially offsets GHG warming. But aerosols remain airborne only several days, so they must be pumped into the air
    faster and faster to keep pace with increasing long-lived GHGs. However, concern about health effects of particulate air pollution is likely to lead to eventual reduction of human-made aerosols. Thereupon the Faustian payment will come due.

    If Sophie’s +2 W/m2 is close to the truth (aerosol forcing -1 W/m2), even a major effort to clean up aerosols, say reduction by half, increases the net human-made forcing only 25 percent. But Connor’s aerosol forcing (-2 W/m2) means that reduction of aerosols by half would double the net climate forcing. Given global climate effects already being observed (IPCC, 2007), a doubling of the climate forcing suggests that humanity may face a grievous Faustian payment.

    Which is fairly early in the paper, and doesn’t make him look very scientific. It’s possible to present the scientific information and argument without the political editorializing, to say nothing of the fact that Hansen is such a lame political writer.

    He’s not helping himself with that cutesy stuff.

    • Agreed. Everything out of that man’s (literal and figurative) mouth is a foregone conclusion flailing about for a plausible justification.
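The arithmetic in the “Faustian bargain” passage quoted above can be checked in a few lines. A minimal sketch, assuming the implied greenhouse-gas forcing is +3 W/m2 in both cases (so that net = GHG + aerosol); the function name is illustrative, not from the paper:

```python
def net_forcing_after_cleanup(ghg, aerosol, cleanup_fraction=0.5):
    """Net climate forcing (W/m2) after reducing the (negative)
    aerosol forcing by the given fraction."""
    return ghg + aerosol * (1.0 - cleanup_fraction)

GHG = 3.0  # W/m2, implied by net = GHG + aerosol in both cases

# "Sophie" case: aerosol -1 W/m2, net +2 W/m2 today.
# Halving aerosols raises the net forcing to 2.5 W/m2, a 25% increase.
sophie_before = GHG - 1.0
sophie_after = net_forcing_after_cleanup(GHG, -1.0)
print(sophie_after / sophie_before - 1.0)  # 0.25

# "Connor" case: aerosol -2 W/m2, net +1 W/m2 today.
# Halving aerosols raises the net forcing to 2.0 W/m2, a doubling.
connor_before = GHG - 2.0
connor_after = net_forcing_after_cleanup(GHG, -2.0)
print(connor_after / connor_before)  # 2.0
```

The asymmetry is the whole point of the passage: the larger the (uncertain) negative aerosol forcing today, the larger the jump in net forcing when aerosols are cleaned up.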

  4. Much to digest, but I am getting the strong impression it will be low in nutrition.

  5. David L. Hagen

    From a very quick glance at the paper, in Figure 14b Hansen et al. show about 0.5 W/m2 warming for the ocean. In Figure 15 they show a positive heat uptake of about 0.8 W/m2 declining to about 0.6 W/m2.

    I find this curious, considering that Lucia Liljegren at the Blackboard now finds statistically strong evidence of cooling temperatures over the decade since 2001, in contrast to the IPCC’s global model warming predictions. See:

    http://rankexploits.com/musings/2011/hadley-march-anomaly-0-318c-up/

    It is worth noting that the projected trend is well outside the uncertainty intervals estimated using ARIMA; . . . Currently, these results indicate a fairly strong rejection of the hypothesis that the multi-model mean and HadCrut agree. This does not reject the notion that some individual models might be correct, but it strongly suggests that the mean over all models is high. That means: At least some models are biased high relative to HadCrut for the current period.

    Similarly, Hansen et al. dismiss Svensmark et al. (2009) evidence for galactic cosmic rays impacting clouds, citing Calogovic et al. (2010) and Kulmala et al. (2010). They do not address the preliminary 2010 experimental data coming out of the CLOUD experiment. See:

    Results from the CERN pilot CLOUD experiment, J. Duplissy et al. Atmos. Chem. Phys., 10, 1635–1647, 2010 http://www.atmos-chem-phys.net/10/1635/2010/ http://centaur.reading.ac.uk/7222/1/266_acp-10-1635-2010.pdf

    The experimentally-measured formation rates and H2SO4 concentrations are comparable to those found in the atmosphere, supporting the idea that sulphuric acid is involved in the nucleation of atmospheric aerosols. . . .Overall, the exploratory measurements provide suggestive evidence for ion-induced nucleation or ion-ion recombination as sources of aerosol particles.

    Similarly, Laken et al. find:

    a statistically robust relationship is identified between short-term GCR flux changes and the most rapid mid-latitude (60–30 N/S) cloud decreases operating over daily timescales; this signal is verified in surface level air temperature (SLAT) reanalysis data. . . . The influence of GCRs is clearly distinguishable from changes in solar irradiance and the interplanetary magnetic field.

    Cosmic rays linked to rapid mid-latitude cloud changes
    B. A. Laken, D. R. Kniveton, and M. R. Frogley
    Atmos. Chem. Phys., 10, 10941–10948, 2010 http://www.atmos-chem-phys.net/10/10941/2010/ doi:10.5194/acp-10-10941-2010

    I understand them to find that a decrease in the Galactic Cosmic Ray flux of 0.79% of the 11-year solar cycle caused a significant 1.9% decrease in cloud cover, giving a 0.05 K change in temperature over a period of four days.

    Let the games continue and may the best physics win.
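The sensitivities implied by those Laken et al. figures can be put as simple ratios. A sketch only: the inputs are the numbers quoted above, and the ratios are illustrative back-of-envelope values, not fitted quantities from the paper:

```python
# Figures as read from Laken et al. in the comment above.
gcr_change_pct = 0.79    # % decrease in galactic cosmic ray flux
cloud_change_pct = 1.9   # % decrease in mid-latitude cloud cover
temp_change_k = 0.05     # K temperature change over ~4 days

# Implied cloud response per 1% change in GCR flux (~2.4).
cloud_per_gcr = cloud_change_pct / gcr_change_pct

# Implied temperature response per 1% change in cloud cover (~0.026 K).
kelvin_per_cloud = temp_change_k / cloud_change_pct

print(round(cloud_per_gcr, 2), round(kelvin_per_cloud, 3))
```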

    • When one is obsessed with CO2 and apocalypse, as Hansen most assuredly is, every question is answered the same: CO2 and ‘worse than predicted’.
      Pesky things like no data to support the answer are barely distractions.

    • Hansen cites Kulmala et al. 2010 as if it’s evidence against Svensmark’s hypothesis, but it is not (even if it is misleadingly titled “Atmospheric data over a solar cycle: no connection between galactic cosmic rays and new particle formation”).

      In the Kulmala study, the measurements of nucleation events over the solar cycle were taken with instruments 2-8 m off the ground. Only high-energy CRs can penetrate to have an effect at such a low altitude, and CRs of those energies are hardly affected by the 11-yr solar cycle. Therefore, one wouldn’t expect to see a solar cycle influence on ion-nucleation at that height.

    • A couple more papers that Hansen has passed over w.r.t. his flimsy rejection of Svensmark’s hypothesis:

      Concerning CR activity at different altitudes, latitudes and energies: “Cosmic ray induced ionization in the atmosphere: Spatial and temporal changes” (Usoskin, Gladysheva and Kovaltsov 2004, DOI:10.1029/2004GL019507)

      From the abstract :

      We find that the time evolution of the low cloud amount can be decomposed into a long-term trend and inter-annual variations, the latter depicting a clear 11-year cycle. We also find that the relative inter-annual variability in low cloud amount increases polewards and exhibits a highly significant one-to-one relation with inter-annual variations in the ionization over the latitude range 20–55 deg S and 10–70 deg N. This latitudinal dependence gives strong support for the hypothesis that the cosmic ray induced ionization modulates cloud properties.

      “Empirical evidence for a nonlinear effect of galactic cosmic rays on clouds”
      Harrison and Stephenson 2006
      doi: 10.1098/rspa.2005.1628 Proc. R. Soc. A 8 April 2006 vol. 462 no. 2068 1221-1233

      Another study which provides support for Svensmark’s CR-cloud hypothesis, independent of the satellite cloud observations, is Harrison and Stephenson 2006. They use daily instrumental insolation data gathered in the UK, and neutron monitor counts from Climax in Colorado. They find a significant correlation (at the 5% level) for 9 of the 10 sites examined, several of them significant at < 0.1%. The sites with higher precipitation have lower diffuse fraction changes, and the site that just failed the significance test had the highest rainfall of the set. They speculate (with some physical reasoning) that high precipitation has an effect.

      From the abstract :

      Across the UK, on days of high cosmic ray flux (above 3600×10^2 neutron counts/h, which occur 87% of the time on average) compared with low cosmic ray flux, (i) the chance of an overcast day increases by (19±4) %, and (ii) the diffuse fraction increases by (2±0.3) %. During sudden transient reductions in cosmic rays (e.g. Forbush events), simultaneous decreases occur in the diffuse fraction.”

      At Cambridge and Jersey, where the correlation was significant at <0.1% level, the % change in the diffuse fraction for overcast days was 5% and 4.7% respectively. The text notes that "At Reading [another location in the UK, not one of the sites] , the measured sensitivity of daily average temperatures to DF for overcast days is -0.2K per 0.01 change in DF (for 1997–2004)".


  6. Interesting paper, one of Hansen’s better ones for being a little more modest and less strident than usual. But he still cannot do basic arithmetic, repeating yet again a claim he has made in virtually every paper he has written over the last 15 years and more: “Human-made CO2 emissions are increasing just above the range of IPCC scenarios (Rahmstorf et al., 2007), but the CO2 increase appearing in the atmosphere, the ‘airborne fraction’ of emissions, has continued to average only about 55 percent (Supplementary Material, Hansen et al., 2008), despite concerns that the terrestrial and oceanic sinks for CO2 are becoming less efficient (IPCC, 2007).” As I have shown (E&E 2009), along with Knorr (GRL 2009), the Airborne Fraction has demonstrably averaged only 44-46% since 1850 (Knorr) or since 1959 (Curtin, using the raw CDIAC data in Le Quere et al http://www.globalcarobonproject.org). Why does Hansen routinely describe the non-airborne fraction as the airborne?
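The airborne fraction at issue is just the ratio of the CO2 increment appearing in the atmosphere to total emissions. A minimal sketch with made-up illustrative numbers, not actual CDIAC data:

```python
def airborne_fraction(emissions_gtc, atmospheric_increase_gtc):
    """Fraction of emitted carbon that remains in the atmosphere;
    the complement is taken up by land and ocean sinks."""
    return atmospheric_increase_gtc / emissions_gtc

# Illustrative numbers only: 10 GtC emitted in a year, of which
# 4.5 GtC shows up as an atmospheric CO2 increase -> AF of 45%,
# in the 44-46% range claimed in the comment.
print(airborne_fraction(10.0, 4.5))  # 0.45

# The non-airborne fraction (taken up by sinks) is the complement,
# here 55% -- which is the confusion the comment alleges.
print(1.0 - airborne_fraction(10.0, 4.5))  # 0.55
```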

    That is perhaps a minor point, but if Hansen cannot get that right, what can he be trusted with? Certainly not anything to do with atmospheric water vapor. The only passing mentions are to the alleged increase thereof, supposedly due to rising temperature as a positive feedback; then he redeems himself by correctly noting that “The latent energy associated with increasing atmospheric water vapor in a warmer atmosphere is an order of magnitude too small to provide an explanation for the high estimates of atmospheric heat gain”. But that still does not excuse the complete absence of any mention of non-anthropogenic water vapour arising from the much more significant natural primary evaporation.

    Svante Arrhenius (1896) devoted almost as much space to natural “aqueous vapour” as to CO2, and his Table 3 brings the two together. But the IPCC AR4 WG1, along with Hansen, has no mention of it. In fact WG1’s Chapter 9 by Hegerl and Zwiers et al., which claims to quantify and differentiate between natural and human causes, never once mentions water vapor at all (unlike J. A. Curry (any relation?!) in her and P. J. Webster’s book Thermodynamics of Atmospheres and Oceans, 1999). Truly, AR4 along with Hansen et al. owes everything to Bernard Madoff and does no more science than he did investment.

    • I am a little skeptical when someone whose (one) climate publication is in Energy and Environment calls AR4 (WGI, nonetheless) and James Hansen the equivalent of Bernie Madoff. I looked at the E&E paper in question and I find its statistical analysis of food production to be rather unconvincing. But it does show the airborne fraction between 1959 and 2006 averaging 42%.

      What is the full citation of the Hansen 1988 paper to which the 55% figure is attributed, and is there any other explanation for the disagreement besides “Hansen made a big error”?

  7. This energy imbalance provides fundamental verification of the dominant role of the human-made greenhouse effect in driving global climate change.

    Is it just me? That seems to be a big leap. There is an energy imbalance, therefore it is caused by human-made greenhouse gases.
    The connection escapes me.

    • My original reply evaporated into the ether, so I’ll try again.

      It’s not just you: he outright dismisses any possibility whatsoever of any unknown or misunderstood natural factors playing a role.

      I’m not trying to suggest that this is definitely the case, but he certainly cannot justify the “fundamental verification” statement.

      On a wider note, has this paper been published yet?
      I’m an industry scientist, but my research is bringing me closer and closer to academic-based publishing every day, and I must say that these sorts of ‘excessive’ statements litter the academic publications I’ve encountered (so much so that I plan to write a few papers demolishing some highly held academic ones, in non-climatic fields mind you, for poor methodology and improper conclusions).

      Do any other industry scientists/engineers feel this way, or is it just my cGMP background getting the better of me again?

      • It isn’t just you. Industry scientists publishing information on topics that will be used as the basis for business decisions are operating under an expectation of high truth content. If you are wrong very often, people lose money and you lose your job.

        Government/academic publishing operates under an entirely different set of incentives. Information used to attract grants and garner political power doesn’t need to be correct. It merely needs to fit the desired paradigm and be presented with assertion of certainty. Being wrong is rarely noticed, being punished for acting fraudulently is rare as long as the paradigm that you are supporting is dominant, and refusing to make conclusions beyond the scope of the data is what gets you fired.

    • It seems to me that aerosols are just a huge fudge factor for these simulations. Instead of concluding that the CO2 forcing may have been over-estimated (sacrilege!), he assumes the aerosols have been significantly under-estimated.

      Therefore, the CO2 forcing continues to be a huge threat to humanity (even worse than we thought with the new aerosol forcing!).

      Let me summarize:

      Air…pollution…is…saving…us…from…climate…disaster.

      • I’ll take that a step further. Feedback is just a big fudge factor if they can’t actually calculate it from first principles or measure it. It’s always nice to have a big knob to twist, and feedback is the biggest knob in all of climate science.

      • ChE,

        I read that “nudging” (AKA fudging) models is now back in vogue. It was the subject of a talk at the European Geosciences Union conference I believe. It actually is valid for models if the “nudge” factor is added to the nonlinear feedbacks (natural forcing). Some seem to forget that “less than 10%” part of the puzzle.

      • Far better to blame it on aerosols than, perish the thought, cloud albedo.

      • Yes of course. A high negative effect from aerosols is always touted as the reason temperatures failed to rise between WW2 and 1975. So naturally he uses the same reason to explain the lack of any temperature increase in the past decade as well.

        The only alternative is to admit the models overestimate overall CO2 sensitivity substantially. He will never, ever even mention that possibility.

      • There’s a fair amount of data on aerosol cooling during the 1950-1980s interval, including several studies by Martin Wild et al; see, for example, Global Dimming and Brightening. These include measurements demonstrating reduced transmittance of solar irradiance to the ground under both clear-sky and all-sky conditions. This captures some of the direct aerosol cooling of the era, but may not fully quantify the indirect contribution of aerosols through increased cloud formation or persistence. It does appear that at least some of the models underestimated these effects, although exact quantitation remains difficult.

      • Fred,

        Looking at Fig. 22 it seems that Hansen has neglected some of the detail of dimming and brightening over the 20th century as suggested by scientists such as the ones you mention. I see just a fairly straight line in Fig. 22c, no sign of the reversal to brightening in 1980 or dimming again post-2000 that seems to help to explain the shifts in trends around these periods. In fact I would suggest that Hansen seems to be giving more weight to volcanic forcing (and its lag) to explain these details of the mid to late 20th century temperature.

    • “Is it just me? That seems to be a big leap. There is an energy imbalance, therefore it is caused by human-made greenhouse gasses.
      The connection escapes me.”

      This approach is common in many disciplines. When presented with something they do not know, “the experts”, rather than admit they do not know, blame “human activity” for creating the problem.

      The solution that follows typically involves “sacrifice” on someone’s part. In Hansen’s situation, the “sacrifice” we need to make is more taxes. That will solve the problem.

      By making the cost of fossil fuel high enough, we will stop CO2 pollution. As a bonus we will probably get rid of a lot of people, which is a good thing, as the world has too many people.

    • TimTheToolMan

      The only thing that is fundamental about this is their assumption that the warming is anthropogenic in nature.

      AGW : It’s warming, therefore it’s man-made.
      Skeptic : But it’s warmed in the past and not been “man-made”.
      AGW : This time the warming is man-made.

      It’s frustrating that AGWers can’t or won’t grasp the uncertainty inherent in their assumption.

      • Tim

        Yeah.

        The IPCC logic actually goes as follows.

        1. Our models cannot explain the early 20th century warming.

        2. We know that human CO2 caused the (statistically indistinguishable) late 20th century warming.

        3. How do we know this?

        4. Because our models cannot explain it any other way.

        Max

  8. The final paragraph is unintentionally funny:

    No practical way to determine the aerosol direct and indirect climate forcings has been proposed other than simultaneous measurement of the reflected solar and emitted thermal radiation fields as described above. The two instruments must be looking at the same area at essentially the same time. Such a mission concept has been well-defined (Hansen et al., 1992) and if carried out by the private sector without a requirement for undue government review panels it could be achieved within a cost of about $100M.

    IOW, keep NASA out of it.

  9. The temperature of the earth has been and still is stable within plus or minus one degree for ten thousand years. A very, very few times it was plus or minus two degrees. There is no energy imbalance. We are currently inside the plus or minus one, close to one, and likely to go down toward the normal or below again, as we have done about half the time in the past ten thousand years. That is what a normal is: you spend about half the time above it and half the time below it. People, analyze the data! It is stable.

  10. David L. Hagen

    Uncertainties: Hansen et al. note:

    We have inferred indirectly, from the planet’s energy imbalance and global temperature change, that aerosols are probably causing a forcing of about ‒1.6 W/m2 in 2010. Our estimated uncertainty, necessarily partly subjective, is ± 0.3 W/m2, thus a range of aerosol forcing from ‒1.3 to ‒1.9 W/m2.

    I have yet to see a climate science paper that actually delves into the full uncertainty analysis and separately reports the statistical Type A and bias Type B as defined by international protocol. See:
    Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, Barry N. Taylor and Chris E. Kuyatt, NIST Technical Note 1297 1994 Edition

    http://www.nist.gov/pml/pubs/tn1297/index.cfm

    • David,
      If your point about basic bias analysis is true, this is significant. If you are incorrect, it will be interesting to see how those who claim to be dealing with bias have actually done so.

  11. A paper by Hansen – I wouldn’t expect anything different than to say that the Earth is warming, and CO2 is the primary cause.
    But the oceans are NOT warming (as recorded by the Argo data), and the oceans are the greatest store of heat on the planet. In addition, below is a link to the abstract of a paper that shows downwelling spectral radiation has decreased over the past 14 years.

    http://journals.ametsoc.org/doi/abs/10.1175/2011JCLI4210.1

    (the entire article is not available unless one is a journal subscriber.)
    So how do both of these affect Hansen’s “energy imbalance”, along with ‘more CO2 causes warming’?

  12. There is a misleading discussion about Lagrange points and a 6.5 W/m2 energy imbalance. The incoming solar irradiance value was adjusted down to concur with SORCE data for incoming solar irradiance. The absolute value of outgoing fluxes is uncertain. The changes over time, the flux anomalies, are known with much greater accuracy.

    ‘We compare the stability of the CERES Terra SW TOA flux record from CERES Terra SSF1deg-lite-Ed2.5 with SeaWIFS Photosynthetically Active Radiation (PAR) using the approach described in Loeb et al. (2007). Briefly, deseasonalized CERES SW TOA flux anomalies are plotted against deseasonalized anomalies in SeaWIFS PAR. The SeaWIFS PAR anomalies are then multiplied by the slope of the regression line (-6.09 W m-2 per E m-2 day-1) in order to place the two records on the same radiometric scale.

    Results, plotted in Fig. 2, show agreement in monthly anomalies to 0.26 Wm-2, and agreement in the overall slope to < 0.3Wm-2 per decade at the 95% confidence level.

    After late 2008, SeaWIFS spacecraft anomalies resulted in significantly reduced sampling, resulting in a far noisier comparison. Stability in CERES Terra LW TOA flux is compared with that from the AIRS OLR data product in Fig. 3. Monthly anomalies are consistent to 0.16 Wm-2, and agreement in the slope is better than 0.3 Wm-2 per decade at the 95% confidence level.’

    http://ceres.larc.nasa.gov/documents/DQ_summaries/CERES_SSF1deg-lite_Ed2.5_DQS_v2.pdf
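The cross-calibration procedure described in the quoted passage (deseasonalize both monthly records, regress one against the other, then rescale by the slope) can be sketched as follows. The data here are synthetic stand-ins, with only the -6.09 slope taken from the quote:

```python
import math

def deseasonalize(series, period=12):
    """Subtract the mean annual cycle from a monthly series."""
    clim = [sum(series[m::period]) / len(series[m::period])
            for m in range(period)]
    return [x - clim[i % period] for i, x in enumerate(series)]

def ols_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return num / sum((a - mx) ** 2 for a in x)

# Synthetic monthly records: a seasonal cycle plus a small trend; the
# "CERES" series is constructed as exactly -6.09 times the "SeaWIFS" one.
seawifs = [math.sin(2 * math.pi * i / 12) + 0.01 * i for i in range(48)]
ceres = [-6.09 * v for v in seawifs]

slope = ols_slope(deseasonalize(seawifs), deseasonalize(ceres))
# Multiply the PAR anomalies by the slope to put them on the CERES scale.
rescaled = [v * slope for v in deseasonalize(seawifs)]
print(round(slope, 2))  # -6.09
```

With real records the two series would not be exact multiples, and the residual scatter of the rescaled anomalies is what the 0.26 W/m2 agreement figure quantifies.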

    Hansen makes some clunky and inaccurate arguments about CERES because the flux anomalies show that all the minor warming in the CERES period to 2009 occurred as a result of cloud changes associated with ENSO.

    So let's see what happens to temperature as the cool phase of the Interdecadal Pacific Oscillation intensifies.

    The emphasis on sulphides seems a little misplaced as well. There are 2 aspects. Black carbon when mixed with sulphides intensifies warming as the light is bounced around and absorbed by black carbon. The other problem is to disentangle significant natural sources of sulphides from phytoplankton in oceans. This should change decadally as well, given the nutrient dynamics from upwelling.

  13. Nicola Scafetta

    Just a note about the 60-year cycle in the temperature that GISS does not get!

    Hansen’s interpretation is based on his GISS GCM. Does this model get the data?

    From Hansen’s paper note this interesting passage (page 35)

    “Note that, unlike Fig. 19b, real-world climate and planetary energy imbalance include unpredictable chaotic interannual and interdecadal variability. A climate model with realistic interannual variability (but muted El Nino variability) yields unforced interannual variability of global mean energy balance of 0.2-0.3 W/m2 (Fig. 1, Hansen et al., 2005). We eliminate this ‘noise’ in our calculations, ….”

    Is this a legitimate operation? The impression is that the “interannual and interdecadal variability” that the GISS model does not get is interpreted as “noise” and is “eliminated” from the discussion.

    Now, note just a few details in the figures about the natural “variability”, which is interpreted as noise that “needs” to be ignored because the model does not get it.

    1) Fig 8b:

    period 1880-1910- Temperature (black) shows a cooling, the models (green & red) show a warming. The volcano cooling is far too deep relative to what the temperature shows!

    period 1910-1940- Temperature shows a significant warming. The models (green & red) also show a warming, but the rate is half that of the temperature.

    period 1940-1970- Temperature shows a cooling. The models (green & red) show a warming instead, until a volcanic eruption interrupts it in 1960.

    period 1970-2003- The volcano spikes of the models are at least 50% deeper than what is observed in the temperature.

    2) Figure 11a,b,c show the same discrepancies as above between the model and the data. The overestimation of the volcano effect during 1880-1910 is particularly evident, as it is everywhere else. The failure to reproduce correctly the warming trend from 1910 to 1940 is also evident.

    3) Figure 22a, shows more or less the same problems between models and data.

    Moreover, note that Hansen is using his GISS temperature record; the discrepancy with the CRU temperature record would be even larger. One difference is that CRU shows a slight cooling since 2002, while GISS-temp still shows warming.

    4) Figure 16. The straight line in the figure is misleading. It is evident that since 2002 the data about sea level change have been showing a deceleration. This is due to the 60-year cycle (turning down) that has been observed in sea level change data since 1700 (Jevrejeva et al., 2008. Recent global sea level acceleration started over 200 years ago? Geophys. Res. Lett. 35, L08715.)

    Thus, is it legitimate to call “noise” what the model does not get?

    • When you are in love with your model and know that it is the “truth”, facts are merely insignificant distractions. It isn’t the “truth” that is inconvenient. Just the facts.

    • Excellent points. It’s not sufficient to characterize real variation in the data as “natural variation” or statistical noise. The hockey stick of Mann and co does the same thing: characterizes all variation 1100-1850 as noise, instead of seeking to understand the processes that cause that variation.

    • Regarding Nicola’s point about volcanic cooling evidently being over-estimated in the models, does this not strongly imply that the aerosol negative forcing in the models has actually been overestimated, contradicting Hansen’s assertion about the underestimated impact of aerosols?

      • That sounds like a very good point to me! After all they are both negative forcings caused by atmospheric particulates.

  14. intrepid_wanders

    I am sorry, I laughed at every point made after the “Faustian Bargain”. It must be quite frustrating, writing a paper that (based on the thesis) says that if we were to stop burning “dirty coal” instantly, global temperatures will sky-rocket for hundreds of years (based on the “residence of CO2”). Coal must be the ultimate balance of Yin and Yang. Still no water vapor thesis or expanded forcing of the sun (other than the insulting x10 intercept adjustment). This is destined for AR5; it will fit in well with the WWF Biodiversity Report and be just as informative.

  15. Thanks for posting this draft. It gives a great summary of the current state of knowledge in very easily understood terms, and makes a very credible effort at how the imbalance or ‘missing energy’ can tell us something more about the climate system components. This paper is written at a level that most should be able to follow quite well, and the arguments are very clearly put. I certainly recommend reading its major parts, as I have done, and I want to read the rest later.
    He notes that the imbalance is expected in a forced, slowly responding climate system, and that its value may be telling us that climate models have too slow a response. He says this could be due to excessively efficient deep-ocean mixing, with the models achieving the correct surface temperature change only by having underestimated the aerosol effect by maybe 40%, though still within the current range of uncertainty for that effect.
    It really explains how two uncertainties, aerosols and ocean circulation, can offset each other in various ways, and uses the estimated imbalance as an extra constraint in addition to the GHG forcing and surface temperature changes that are better known.
    I’ll be interested to see how the science community responds to this.

    • Richard Wakefield

      Here’s where that ‘missing energy’ can go. Increase surface air temperature a bit and it rises, and moves as a frontal system, colliding with a cold frontal system, creating a low-pressure cell, with winds, which interact with the ground as friction. That energy is dissipated as wind friction.

      Has anyone done a study on how much energy is lost due to wind friction?

      • That becomes heat again. It doesn’t vanish.

      • Richard Wakefield

        No, it can also turn into kinetic energy, no heat at all.

      • Friction loss == heat.

      • Richard Wakefield

        So there is no (heat) energy transfer into kinetic energy, loss of heat, in a hurricane/typhoon system? How much heat energy is required to make huge sea swells? No heat transfer into kinetic energy in tornados?

        What have you got against heat anyway?

      • Mr. Second Law doesn’t like heat at all. He considers it second class energy.

      • Merely a thought: The mechanism you describe could also cause more precipitation, which in winter would be snow. Snow cover causes higher albedo, which would cool.

        So while it won’t get rid of heat it still could influence climate. We’re not talking about a thermodynamically closed system.

      • Go study thermodynamics please!

    • It really explains how large uncertainties have created a very unreliable model that has zero value for future planning. 40% error margins! I support the effort to understand the climate, but the rhetoric that has been generated based on these model predictions is totally unfounded.

      I did not read the paper yet, but I have to wonder how seriously he considered that the CO2 forcing was too large, a reasonable conclusion given the data over the last two decades. Hansen’s admitted (I respect him for this) and demonstrated activism has to be considered a real source of bias.

      If Scafetta’s predictions based on planetary forces are accurate, Hansen is going to have a very miserable next 30 years as he explains why warming is missing.

  16. Based on the comments so far, it appears this paper is a target rich environment.

  17. JC,

    I am very interested in this paper, but don’t have time to read it myself (well, at least not read and understand it). I was wondering if, when you are done comprehending the paper, you would post an update with your thoughts. That way I get to at least hear people’s opinions of the paper. Thanks.

    -Adam

    • Bill Collinge

      Adam – I second that. I can read this over the coming days but would be much guided by JC’s guidance. -Bill

    • Yes, looks like this thread needs some focus here, I will write more on this later today.

      • steven mosher

        It’s a weird mix of solid science, inaptly chosen metaphors, and thinly veiled speculation. Personally, I’d like to see a focus on the sensitivity stuff from Hansen’s paleo perspective. It’s the most important topic, and I think some math details need to be filled in for old slow guys like me, especially since it’s independent of the models.

        Hansen’s metaphors are bothersome but I don’t think we should focus on them.

      • Bill Collinge

        encore – it must be in the paleo analysis that it becomes clear to Hansen that the anthropogenic aerosol (negative) forcing may be underestimated, as opposed to, say, an otherwise-forced increase in cloud cover or even a cloud-mediated negative feedback slowing of the CO2 warming?

      • steven mosher

        I think hansen’s paleo work is important for several reasons.

        1. A host of contrarians seem to accept the paleo record and what it teaches us about the extremes of earth’s climate. So we are using evidence they accept: example: bob carter.

        2. There is skepticism about models. Hansen’s argument is that the BEST evidence for sensitivity bounds is the paleo, not models.

        3. the math is tractable for most engineering types.

        4. It could change my mind about being a lukewarmer. And I think it’s a good thing to look at the best evidence against one’s cherished positions.

        So, I think a more detailed, more accessible and more comprehensive presentation would be a good thing.

        It’s way better than hockey stick stuff and way better than model descriptions… if Hansen is right.

      • David L. Hagen

        It appears to me that Hansen highlights aerosol cooling and dismisses cloud variation. My impression from Roy Spencer and others is that cloud variation is highly uncertain and could dominate heating/cooling.

        Clouds are foundational to the causation/consequence (chicken/egg) question, i.e.,
        are clouds driven by CO2, which drives ocean temperatures, which modulate clouds;
        OR
        does solar modulation of galactic cosmic rays drive clouds, which drive ocean temperature, which drives CO2?

        Hansen’s dismissing of the primary competing theory / evidence without serious examination does not give me much confidence in his conclusions!

      • Clouds are part of the feedback, not the forcing, so they can’t be compared with CO2, solar, aerosol, and volcanic forcing terms.

      • David L. Hagen

        Jim D
        Your statement summarizes the “consensus view” of the chicken/egg causation/consequence debate.

        If solar/planetary activity modulate galactic cosmic rays which modulate clouds, then that combination modulates solar insolation absorbed/reflected, and thus is an intermediate “forcing” similar to aerosols.

      • In that case the GCRs would be the forcing not the clouds themselves.

      • In the paleo, he seems to be taking the temperature difference between Glacial Maximum and interglacial, deducting a value for the ice albedo feedback, and saying the rest is due to CO2 (which typically increases about 100 ppm over the warming phase). Seems plausible as a rough estimate, but:
        -Are the estimates for ice albedo feedback reasonable (noting that the smaller an ice feedback he uses, the bigger the CO2 sensitivity he gets)?
        -That warming takes 8-10 thousand years, which surely raises some issues about relating it to a centennial time scale.
        Some people say paleo provides strong evidence of CAGW; I would have thought it was strong evidence against CAGW, so I would be interested in your views on that.
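        The arithmetic behind this kind of rough paleo estimate can be sketched in a few lines. All numbers below are illustrative stand-ins (the ~5 °C glacial-interglacial difference, the 3.5 W/m2 ice-albedo forcing, the ~180 to ~280 ppm CO2 rise), not figures taken from Hansen’s paper:

```python
import math

# Illustrative stand-in values, not Hansen's exact figures.
dT_glacial = 5.0                      # deg C, assumed glacial-interglacial warming
f_albedo = 3.5                        # W/m^2, assumed ice-albedo forcing
f_co2 = 5.35 * math.log(280 / 180)    # W/m^2 from the ~180 -> ~280 ppm CO2 rise

sensitivity = dT_glacial / (f_albedo + f_co2)   # deg C per W/m^2
per_doubling = sensitivity * 3.7                # ~3.7 W/m^2 per CO2 doubling
print(round(per_doubling, 1))                   # -> 3.2 deg C per doubling
```

        The first bullet’s worry is visible here: shrink f_albedo and per_doubling grows.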

  18. Call me cynical, but the role of aerosols in climate science seems to be that of the remainder in any equation; here it gets re-estimated upwards to account for a lack of energy in the system. The worst thing to happen to climate scientists will be when the actual role of aerosols is properly estimated and it can no longer be revised for each new climate state.

    The certainty of the abstract’s language looks worrying.

    • HR, you are right, but the tactic is not going to fly much longer. Papers by Petr Chylek and Stephen Schwartz have shown the aerosols do not have as much cooling power as originally thought. Hansen appears to be grasping at straws here. It looks to be an act of desperation, not science.

      • Interesting, because I think Schwartz and Hansen are saying similar things here.

      • It should be emphasized that one should not take any comfort with the fact that the aerosols may be negating much of the greenhouse gas forcing–in fact just the opposite. Because the atmospheric residence time of tropospheric aerosols is short (about a week) compared to the decades-to-centuries lifetimes of the greenhouse gases, then to whatever extent greenhouse gas forcing is being offset by aerosol forcing, it is last week’s aerosols that are offsetting forcing by decades worth of greenhouse gases. Because the greenhouse gases are long-lived in the atmosphere, their atmospheric loadings tend to approximate the integral of emissions. Because the aerosols are short-lived, their loading tend to be proportional to the emissions themselves. There is only one function that is proportional to its own integral, the exponential function. So only if society is to make a commitment to continued exponential growth of emissions can such an offset be maintained indefinitely. And of course exponential growth cannot be maintained forever. So if the cooling influence of aerosols is in fact offsetting much of the warming influence of anthropogenic greenhouse gases, then when society is unable to maintain this exponential growth, the climate could be in for a real and long-lasting shock.

        I said it months ago. Sell natural variability; buy sulphate aerosols.

      • Sorry, the bold is a quote from Schwartz’s website.

      • I’m not sure how increasing the influence of aerosols in a brightening world is supposed to explain a lack of warming. Shouldn’t we expect exactly the opposite effect?
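      Schwartz’s point upthread, that only an exponential is proportional to its own integral, is easy to check numerically. A minimal sketch with made-up growth rates (the 5%/yr figure is arbitrary, chosen only for illustration):

```python
import numpy as np

t = np.arange(1, 101, dtype=float)   # years, arbitrary horizon

# Short-lived aerosol loading tracks current emissions e(t);
# long-lived GHG loading tracks the integral (cumulative sum) of e(t).
e_exp = np.exp(0.05 * t)             # exponential emissions growth (5%/yr, arbitrary)
ratio_exp = e_exp / np.cumsum(e_exp) # aerosol/GHG loading ratio

e_lin = t                            # linear emissions growth for comparison
ratio_lin = e_lin / np.cumsum(e_lin)

# Exponential growth keeps the offset ratio nearly constant over the
# second half of the run; linear growth lets it decay away.
print(ratio_exp[-1] / ratio_exp[50], ratio_lin[-1] / ratio_lin[50])  # ~0.93 vs ~0.51
```

      So the aerosol offset can only keep pace with the GHG forcing while emissions grow exponentially, which is Schwartz’s point.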

  19. “I think we have been too readily explaining the slow changes over past decade as a result of variability–that explanation is wearing thin. I would just suggest, as a backup to your prediction, that you also do some checking on the sulfate issue, just so you might have a quantified explanation in case the prediction is wrong. Otherwise, the Skeptics will be all over us–the world is really cooling, the models are no good, etc. And all this just as the US is about ready to get serious on the issue. …We all, and you all in particular, need to be prepared.”

    http://bit.ly/eIf8M5

  20. I have one observation regarding Earth’s energy balance.

    However tiny or negligible the geothermal flux is (~0.1 W/m2, but maybe underestimated), it is positive – heat is being transferred from the earth’s interior to the surface continuously.

    Now, if we define our (physical) system as:

    Atmosphere + oceans + upper ~10 m of earth crust (including oceanic), the system boundaries are:

    Inner boundary – earth crust, ~10 m under the surface (and the oceanic floor),
    Outer boundary – TOA (top of the atmosphere).

    Assuming there are no changes in the internal energy of the system (dU=0, annually or decadally averaged), the heat entering the system at the inner boundary (geothermal flux) must leave the system at the outer boundary (TOA). Averaged net heat flux at TOA must be negative (the system is losing energy at TOA).

    Again, the system (atmosphere, oceans and upper 10 m of crust) receives heat from the earth’s interior and dissipates it over TOA, despite insolation! We are being heated from the bottom and are losing heat at the top.

    So, neglecting the geothermal flux for being tiny does not seem right: however tiny, it is positive, opposite in sign to the net heat flux at TOA, which is negative (heat loss), again assuming dU=0.

    • Eksperimentalfysiker

      It may well be that the geothermal flux is tiny – on average. It is more interesting what the fluctuations look like on annual, decadal,…,millennial timescales. Does anyone have any insight to offer?

    • Agreed, Edim. But I’ve been told more than once that it’s insignificant and therefore not accounted for by the models or in any other way. It’s one of the many ways that climate science fails to meet engineering standards.

      • There might be some fundamental misunderstanding. The geothermal flux IS insignificant compared to insolation or earth thermal radiation, the two components of the net flux at TOA.

        However, it is VERY significant compared to the net flux at TOA, which in average is equal in magnitude to it (assuming dU=0).

        In fact, the point remains the same even when internal energy U is somewhat variable (~climate change).

        dU = Qgth + Qtoa-net
        Qtoa-net = Qsol - Qearth = dU - Qgth
        Qgth > 0; Qsol, Qearth > 0 (as magnitudes), so Qtoa-net < 0 whenever Qearth > Qsol

        Internal energy of the system can go up or down slightly (or more when we have natural changes like glacials/interglacials), my point is that there is continuous geothermal flux from the earth interior, which must mean that heat is being added to the system (positive heat flux at the inner boundary, however tiny).

        Therefore, at the outer boundary (TOA), to keep the internal energy somewhat constant (not changing too much and not increasing/decreasing continuously), the net flux must be negative (averaged) – the added heat must be lost at TOA.

        So, the net flux at TOA is negative (earth radiation is greater than insolation) and very tiny – “almost” equal to the geothermal flux – the difference is dU.
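      Edim’s bookkeeping can be written out explicitly. A trivial sketch, using his assumed ~0.1 W/m2 geothermal value:

```python
# Signs: positive flux = heat added to the system
# (atmosphere + oceans + top ~10 m of crust).
Q_gth = 0.1            # W/m^2, geothermal flux at the inner boundary (assumed)
dU = 0.0               # steady state: internal energy unchanged on average

# dU = Q_gth + Q_toa_net  =>  Q_toa_net = dU - Q_gth
Q_toa_net = dU - Q_gth

# With dU = 0, the TOA net flux is negative (outgoing) and equal in
# magnitude to the geothermal flux, however small that is.
print(Q_toa_net)       # -> -0.1
```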

      • Edim –
        The geothermal flux IS insignificant compared to insolation or earth thermal radiation

        Agreed. But as you say –

        However tiny or negligible geothermal flux is (~0,1 W/m2, but maybe underestimated), it is positive – the heat is being transfered from earth interior to the surface continuously.

        - it is constant, it may be underestimated, it’s definitely part of the heat transport process to get to TOA – and IIRC Trenberth’s “missing heat” was not that much greater in comparison to the entire energy budget. If one ignores several of those “insignificant” sources, the sum of the parts “could” become significant. Not saying that’s true or even probable, only that it’s not impossible and therefore should not be completely ignored.

      • Jim,

        We agree on that.

        I have another point. All the arguments I’ve seen so far go like:

        “geothermal is insignificant, it is less than 0.1 W/m2 and averaged insolation is ~250 W/m2…”

        I think it’s apples to oranges. On one side we have net flux and on the other side only one component of the net flux. It’s not even wrong.

        My point is not about the magnitude of the geothermal flux (does not matter how tiny), but about its direction – it’s OUTWARDS.

        The consequence is that direction of the net flux at TOA must be outwards too and its magnitude must be ~Qgth, otherwise internal energy changes too much.

      • Thank you – you’re the first person I’ve found who finds “anything” interesting about the subject.

        :-) :-)

      • Jim,

        Thank you. There are so many interesting things about this and other subjects regarding science of climate changes!

        A great pity established science is so cowardly and corrupted.

  21. TC to JAC 19 April:

    In reply to Labmunkey on Earth’s Energy Imbalance
    April 19, 2011 at 3:31 am
    who said Hansen “outright dismisses any possibility whatsoever of any unknown or misunderstood natural factors playing a role.”

    The biggest apparent “unknown” natural factor is basic atmospheric water vapor (not that from so-called feedback from claimed rising temperature), which has disappeared down both the IPCC’s and Hansen’s borehole (he fails to mention it).

    Thus although water vapour as such is recognised to be a very powerful radiative forcing agent (twice as much so, pro rata, as CO2 according to Dessler 2010), it is not even mentioned in any of the many lists of radiative forcings in AR4, on the grounds that the only relevant WV is the feedback from rising temperature!

    If serious, Hansen would recognise that atmospheric water vapour existed naturally before any rising temperatures due to human causes, and that the increase in GMT of 0.75 °C since 1900 is not enough to increase water vapour at all, even though it could very slightly increase the vapour-holding capacity of the atmosphere.

    So why do Hegerl and Zwiers (AR4 WG1, Chapter 9) and now the Hansen et al. analyses of human and natural influences on temperature ignore by far the biggest, natural levels of water vapour?

    Even anthropogenic increases in atmospheric H2O are ten times larger than those of [CO2], and for a serious account of the interactions between natural atmospheric water vapour and CO2, go back to Arrhenius 1896 (e.g. Table III), a paper that was actually first published in Edinburgh, now home to Gabi Hegerl, who thinks that if Scotland warmed by 2 °C it would be even more uninhabitable than it is now (for me) with its mean annual temperature of less than 10 °C!

    The absurdity of the IPCC-Hegerl-Zwiers view becomes evident when one recalls that they consider only the increased WV attributable to the radiative forcing of 2.6 W/sq. meter that has yielded the 0.7 °C since 1900 to be relevant, while the evaporated WV due to the sun’s constant incoming RF of 238 W/sq. meter is not!

    As I noted here earlier, at least Hansen shows a glimmer of grasping this although still ignoring natural WV (which is not a constant by the way).

    • TRC,
      The AGW community has finessed things in such a way as to claim that H2O is no longer a forcing.
      They now claim it is a feedback.
      Amazing.

      • Marlowe Johnson

        Ok, I’ll bite, hunter. When/why was water ever considered to be a forcing agent rather than a feedback? How do you define these terms, and how do your definitions differ from convention?

  22. The mantra of flux balance seems to be a fundamental axiom of climate science. To this old guard member, the core problem involves free energy dissipation and, as has been understood for generations in the physical sciences, dissipation is described as a bilinear product of a generalized extensive flux and a generalized intensive potential gradient – not the product of an average flux and an average gradient. For a steady-state system, according to thermodynamics, entropy is ever increasing, free energy ever decreasing, while internal energy remains constant. The distinction, U-F=TS, suggests that climate science might benefit from a refresher course in thermodynamics.

    • This is a classic in climatology. They have the same lapse when calculating the global average of CO2 flux across the ocean surface. They took multi-year wind averages from Naval data, constructed average maps of pCO2 from 35 years of scattered oceanographic expeditions, and presented their product (local CO2 flux is a product of local pCO2 and a stagnant-film term which depends non-linearly on local wind speed). Everyone in science knows that the average <f·g> is not equal to <f>·<g> if the functions fluctuate wildly. And they do.
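      The averaging lapse described above is easy to demonstrate. A toy calculation, where the quadratic wind dependence and both distributions are illustrative assumptions rather than the actual gas-exchange parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy model: transfer coefficient ~ u^2, so local flux ~ u^2 * dpCO2.
u = rng.gamma(shape=2.0, scale=4.0, size=n)   # fluctuating wind speed (m/s)
dpco2 = rng.normal(50.0, 30.0, size=n)        # fluctuating pCO2 difference (uatm)

flux_true = np.mean(u**2 * dpco2)             # <f*g>: average of the local fluxes
flux_avg = np.mean(u)**2 * np.mean(dpco2)     # f(<u>)*<g>: flux from averaged inputs

print(flux_true / flux_avg)                   # ~1.5: averaging first understates it
```

      With a convex (here quadratic) wind dependence, averaging the wind before computing the flux systematically underestimates the true mean flux.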

  23. “Earth’s Energy Imbalance and Implications” by James Hansen, Makiko Sato, Pushker Karecha, and Karina von Schuckmann, 47 pages of single-spaced typing with 22 figures, is fundamental GISS by GISS. The length of the paper and the number of section headings suggest it is comprehensive, but the attempt to guide the reader into the realm of GISS-sanctioned science is transparent. The “implications” addressed are a traditional AGW rehash and nothing new. No doubt many will spend a great deal of time devouring every bite of this paper; I forecast they will find little of true value to justify their time. The significance of “Earth’s Energy Imbalance and Implications” by James Hansen, et al., is that it “implies” that ocean temperature observations are “improving”.

  24. It seems to be a strange piece of scientific research – “Now We Are Going To Tell You Everything We Know About Climate(change)” kind of thing?!

  25. Judith,

    This paper is more the desperation of a scientist trying to hold NASA funding together with rubber-band science.
    I would sure like to know where the water mass came from when he states that the oceans were “about” 75 meters higher when all the ice melted.
    Just calculating the overall ocean area in square km against all available ice material doesn’t even come close to that number.
    Ice core records from one region do not make global sense when slapped onto the entire planet.

  26. Hansen states: “Groundwater mining, reservoir filling, and other terrestrial processes also affect sea level. However, Milly et al. (2010) estimate that groundwater mining has added about 0.25 mm/year to sea level, while water storage has decreased sea level a similar amount, with at most a small net effect from such terrestrial processes. Thus ice melt and thermal expansion of sea water are the two significant factors that must account for sea level change.”

    I’d be interested in seeing the Milly reference if anyone has a free link to it. The moths in my wallet don’t wish to be disturbed. The results are not similar to this study

    http://www.agu.org/pubs/crossref/2010/2010GL044571.shtml

    which indicates a much higher contribution of ground water to sea levels (0.8 mm/year vs 0.25 mm/year).

  27. IN PUBLIC:

    Continued failure to quantify the specific origins of this large forcing is untenable, as knowledge of changing aerosol effects is needed to understand future climate change.

    IN PRIVATE:

    I think we have been too readily explaining the slow changes over past decade as a result of variability–that explanation is wearing thin. I would just suggest, as a backup to your prediction, that you also do some checking on the sulfate issue, just so you might have a quantified explanation in case the prediction is wrong. Otherwise, the Skeptics will be all over us–the world is really cooling, the models are no good, etc. And all this just as the US is about ready to get serious on the issue. …We all, and you all in particular, need to be prepared.

    http://bit.ly/eIf8M5

  28. I just saw a talk on ocean levels measured via satellites by Lee Fu from JPL a few weeks ago. What I learned makes me very skeptical of the claims summarized in the above abstract concerning acceleration of sea level rise.

  29. Hansen et al wrote, “A recent decrease in ocean heat uptake was caused by a delayed rebound effect from Mount Pinatubo aerosols…”

    A “delayed rebound effect from Mount Pinatubo aerosols”? First, it’s difficult to find the impact of Mount Pinatubo on the Ocean Heat Content data for many of the ocean basins. And for those that do respond, like the South Atlantic and the South Indian Ocean, the resulting dip in OHC is quickly followed by a rebound. For those who wish to check, I included Aerosol Optical Depth data in the graphs of the following OHC post:

    http://bobtisdale.wordpress.com/2009/09/05/enso-dominates-nodc-ocean-heat-content-0-700-meters-data/

    Second, during the Argo era for OHC data (since 2003), the decrease in OHC occurred primarily in two basins, the South Pacific and the North Atlantic.

    And looking at a map of the change in OHC since 2003, much of those decreases occurred in the low to mid latitudes of the South Pacific and North Atlantic.

    Those illustrations are from the following post:

    http://bobtisdale.wordpress.com/2011/03/25/argo-era-nodc-ocean-heat-content-data-0-700-meters-through-december-2010/

    I did a quick scan of the paper for discussions of ENSO. Hansen et al (2011) appear to treat ENSO as noise and fail to account for the distribution of warm and cool waters within the Pacific and Indian oceans that occur as a result of ENSO.

    I also find no mention of the impacts of changes in Sea Level Pressure, which contributed significantly to the rise in the OHC of the North Atlantic and North Pacific since 1955. In addition to the first post linked above, refer to:

    http://bobtisdale.wordpress.com/2009/10/04/north-atlantic-ocean-heat-content-0-700-meters-is-governed-by-natural-variables/

    And to:

    http://bobtisdale.wordpress.com/2009/12/30/north-pacific-ocean-heat-content-shift-in-the-late-1980s/

    Last, looking at their Figure 22, bottom cell left, Hansen et al continue to use outdated total solar irradiance data. It appears to be one of the early Lean versions that has been scaled incorrectly, with some new TSI data tacked onto the end.

    • It’s difficult to see where he gets the solar variation data for Fig. 22; it’s not referenced. But I’d agree that it looks like an earlier, more variable estimate. I suspect the importance of using that data is the variability in the first half of the 20th century: it helps to explain early 20th-century temperature variation while not contributing to late 20th-century warming. I’m sure if you used some of the more recent, less variable TSI reconstructions, the early 20th century would become more problematic under Hansen’s overall approach.

  30. Bob,

    “Last, looking at their Figure 22, bottom cell left, Hansen et al continue to use outdated total solar irradiance data. It appears to be one of the early Lean versions that has been scaled incorrectly, with some new TSI data tacked onto the end.”

    I am amazed at how selective use of outdated data can be so effective for so many. Why make an effort to solve the puzzle when you can just pick a proxy to fit your current rationalization?

  31. Speaking of “pick a proxy”, whatever happened to the Paleoclimate Reconstruction (PR) Challenge? You know, the one where they were going to invite “real” statisticians?

  32. Re Hansen, Earth’s Energy Imbalance, 4/18/11

    Hansen, et al.’s abstract opens with two provocative sentences unsupported in the body of their paper. The improving observations are merely possibilities with several major provisos. See ¶8, p. 25. They say, “Measurements of Earth’s energy imbalance will be invaluable for policy and scientific uses, if the observational system is maintained and enhanced.” Bold added, ¶13, p. 41. This translates into a frank statement that the energy imbalance conjecture should not be used for policy without better data.

    The authors establish a criterion for the accuracy in estimating Earth’s energy imbalance, and then argue that the accuracy cannot be achieved with present instruments. ¶13.6.1, p. 44.

    The authors urge that an energy imbalance causes the planet to warm until planetary energy balance is restored. P. 1. They say, “The temporary imbalance between the energy absorbed from the sun and heat emission to space, causes the planet to warm until planetary energy balance is restored.” P. 1. Why do they claim the imbalance is temporary? The passive voice, “is restored,” hides whatever mechanism might be restorative. Being spokesmen for the AGW movement, they later conclude that CO2 must be reduced 30 ppm, to a level [of] approximately 360 ppm, to restore balance. ¶13.4, p. 43.

    Stability is observable in the real climate. No better evidence exists than the paleo reconstructions from Vostok ice cores. The climate has a history of stabilizing in a warm state about 2ºC to 4ºC above the present, and again in a cold state about 9ºC to 10ºC below the present. A key question climatologists need to answer is why that is so. Hansen, et al., do not, and instead rely on their message that only man can fix what man clearly did not cause.

    Hansen et al. say, “[A]s Earth becomes warmer the atmosphere holds more water vapor. Water vapor is an amplifying fast feedback, because water vapor is a powerful greenhouse gas.” ¶2, p. 5. The first sentence is true, given by thermodynamic principles embodied in the Clausius-Clapeyron relationship, plus reasonable assumptions about relative humidity. The second sentence is true to the extent that water vapor is only a greenhouse gas, but false because an even more powerful feedback is cloud cover that increases with water vapor concentration.
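    The Clausius-Clapeyron half of this can be checked numerically with the Magnus approximation for saturation vapor pressure (an empirical fit; coefficient values vary slightly between sources):

```python
import math

def e_sat(t_c):
    # Magnus approximation for saturation vapor pressure (hPa)
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

# Vapor-holding capacity rises ~6-7% per deg C near surface temperatures,
# so a 0.75 deg C warming raises it by roughly 5%.
increase = e_sat(15.75) / e_sat(15.0) - 1.0
print(round(100 * increase, 1))   # -> 4.9 (percent)
```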

    The discussions in the paper about the Svensmark and aerosol effects are insufficient and misleading. In the formation of clouds, the global average atmosphere has a surplus of either CCNs or water vapor. Assuming they are in exact balance has no physical basis for an event otherwise of probability zero. Good evidence exists that cloud formation is water vapor limited. Clouds form with regularity depending on temperature and humidity, not on CCN count. Furthermore, if the atmosphere were CCN limited, there ought to be instances where the atmosphere acts like a cloud chamber, which likely has never been observed, and cloud seeding would have met with more success. As a result of the water vapor limitation, the Svensmark and aerosol effects would at most be second order, and water vapor concentration would have the first order effect on cloud extent. Clouds burn off rapidly to amplify solar radiation, and build slowly in response primarily to sea surface temperature. These are dominating positive and negative feedbacks missing from the IPCC and Hansen, et al. model.

    Hansen, et al. say, Feedbacks do not come into play coincident with a forcing. Instead they occur in response to global temperature change. ¶2, p.4. The first sentence is true. The second sentence is a legally correct half-truth, but scientifically misleading, if not wrong. The most powerful feedback in climate is total cloud albedo, which is negative with respect to surface temperature but positive with respect to solar activity. It is the only known cause for observed solar amplification. The authors report that observed global temperature change in recent decades reveals a response in phase with solar irradiance change, with amplification up to a factor of two greater than expected from the direct solar forcing. Stott, et al. (2003) also reported a previously unknown amplification effect between 1.34 and 4.21. IPCC dismissed Stott, et al. on other grounds. AR4, ¶2.7.1, p. 188. The amplification is shown in the paper SGW in my journal.

    Hansen et al. say only that “we bear in mind that there remains a possibility that moderate amplification of the direct solar forcing exists.” Then they drop the subject.

    A cause is lacking for the climate to seek the energy balance widely conjectured in the AGW model. The climate has been warming for the last 20 kyrs, a prolonged state of imbalance. Climatologists need to postulate a cause and effect mechanism for their reliance on Earth seeking to rectify an imbalance. The missing cloud albedo does that. System feedbacks react to changes in solar activity or surface temperature, but not to an imbalance. The system has no way to detect an energy imbalance.

    The paper might have been a validation of Hansen’s famous prediction that Earth was approaching a tipping point. Earth has been at T-minus 10 years and holding for a quarter century. He might have argued that the observed imbalance was the threshold of his tipping point. But that might have upset his conjecture about the climate seeking some kind of equilibrium. The latest paper never mentions tipping points. It is a blatant pitch for another grant of about $100M, and perhaps a precursor for AR5. ¶13.6.2, p. 47.

    Earth’s climate has no preferred warm state. It can be put into balance using the Kiehl & Trenberth model at any temperature, from 0K to about 293K by simply allowing water vapor to vary with temperature. The climate is thus free to wander, making it sensitive to variations in solar radiation. The global surface temperature over the entire instrument record is predictable from the best available model for solar radiation with an accuracy comparable to IPCC’s smoothed model for that temperature. The prediction includes substantial lags, implying long term global energy imbalances as the solar emissions wax and wane on climate time scales.

  33. …it is now widely agreed that the strong global warming trend of recent decades is caused predominantly by human-made changes of atmospheric composition…

    Wrong!

    The strong global warming trend of recent decades from 1970 to 2000 is nearly identical to the warming that started a century ago from 1910 to 1940 as shown in the following graph:

    http://bit.ly/eUXTX2

    In addition to the strong global warming trend of recent decades not being unprecedented, there has been little warming since 2000, as shown in the following graph:

    http://bit.ly/h86k1W

    Instead of questioning AGW, they are now blaming the lack of warming in the last decade on aerosols and global warming in the pipeline.

    This is extremely bizarre.

    • Nicola Scafetta

      “Instead of questioning AGW, they are now blaming the lack of warming in the last decade on aerosols and global warming in the pipeline. This is extremely bizarre.”

      well said, Girma!

    • Hansen has had a strong theme of deeply religious imagery and motives for years.
      He has been ‘saving the Earth’. He is worried about the sins of today being visited on his grandchildren. He speaks in terms of evil and good. The best that can be said for this essay is that it is his Waterloo, the point from which those who recognize the simple truth that the world is not suffering a great calamity start to push back on this ridiculous and dangerous social movement.
      Another way to look at this essay of his is that he is floundering around looking to distract from the failure of his prophecies of the 1980′s. The prophecies have failed, so he has to keep his shills looking ahead, never actually reviewing what he said in the past.
      Is it not telling that he did not write an essay reviewing how correct he was over the past ~23 years?
      Instead he has to talk about delayed feedbacks and deals with the devil. The worst thing for the AGW promoters is to actually hold them accountable for past predictions.

      • “Another way to look at this essay of his is that he is floundering around looking to distract from the failure of his prophecies of the 1980′s…. Instead he has to talk about delayed feedbacks and deals with the devil.”

        I think there’s a grain of truth to that, Hunter, but it’s exaggerated. The most salient contribution of the 1980′s was a model projecting global temperature over the subsequent years. The projection overstated the warming, but if one averages the slope over the entire interval rather than a selected decade, the overstatement is not very large. More to the point, perhaps, and relevant to Hansen’s apocalyptic perspectives, the overpredicted warming of his model was based on a high climate sensitivity estimate of 4.2 C/CO2 doubling. The current modal estimate is 3C, and if his model had used parameters consistent with the 3C estimate, the match with observations would have been excellent. At the time of his model projections, many others saw 3C as more realistic, and so his inclination to predict something more severe than mainstream estimates was in evidence at that time.
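
        A back-of-envelope check of Fred’s rescaling argument (my own illustration; the 0.28 C/decade figure is an assumed stand-in for a 1988-style projected trend, not a number from the paper): if projected warming scales roughly linearly with equilibrium sensitivity, the 1988 projection shrinks by the ratio of the two sensitivities.

        ```python
        # Rescaling a projected warming trend by the ratio of climate
        # sensitivities (linear-scaling assumption for illustration only).
        sensitivity_1988 = 4.2   # C per CO2 doubling, used in Hansen's 1988 model
        sensitivity_modal = 3.0  # C per doubling, the current modal estimate

        projected_trend = 0.28   # C/decade, assumed illustrative 1988-style trend
        rescaled_trend = projected_trend * (sensitivity_modal / sensitivity_1988)
        print(round(rescaled_trend, 2))  # 0.2 C/decade
        ```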

        The feedback discussion in the paper is cogent and incontrovertible in principle, leaving the magnitudes as items of uncertainty. Unfortunately, I believe he sometimes discussed the difference between forcings and feedbacks in a confusing manner. In essence, long term feedbacks (not really “delayed” but rather slow-paced) exert effects on climate that are the equivalent of forcings when one is evaluating shorter term response. The best example is rising CO2 as a long term feedback to warming of the oceans – its effect on climate over a period of decades can be used as an estimate of the effects of CO2 forcing from anthropogenic emissions.

      • Fred,
        At least you are looking at this reasonably, and I apologize for coming on so strong.
        The 3° per doubling appears far too high as well.
        Since CO2 seems to follow heating- which means it leads cooling- I still think the climatocracy is far from finding the door to open and enter into the land of usefulness.
        Hansen, except for his surprisingly reasonable take on nuke power, has been a one trick pony on climate for a long, long time.

      • Fred, if Hansen states the long term climate sensitivity is 3C and only 60% of that is realised after 100 years the question remains: how much of the recent warming is a response to previous forcing and should not be included in the transient response to recent forcings? It is a reasonable question and one that should have been answered before scary stories of the future were told, yet I can’t find any attribution of recent warming to previous forcing either due to a long ocean lag time or large long term climate sensitivities. It seems to all get lumped in with the transient response. Certainly there must be some. We had warming in the late 19th century and mid 20th century. What happened to the long term climate sensitivity from those time periods?

      • Steven – If you look at the slope of the response curves in Hansen’s paper, or in similar articles, you’ll notice that the response to the 60 percent mark is fairly steep, but the slope levels off considerably thereafter. The remaining 40 percent consumes many centuries, or even millennia, although it is asymptotic, so that there is no precise definition of where equilibrium occurs. The shape of these curves is fairly similar for higher and lower climate sensitivity values, and it is mainly the timing of the first 60 percent that differs – the long tail is very much stretched out in either case.

        What this means is that we are no longer on a very steep part of the response to the increased solar forcing from early in the twentieth century that followed the lesser solar activity of preceding centuries, and that the declines in solar irradiance of recent decades, although modest, are on a steeper part of their own curve that is operating in the negative direction. The net effect of these opposing tendencies is probably small, whereas CO2 forcing that has persisted for more than 100 years is manifesting both earlier and more recent effects in the same direction. This does leave a bit of uncertainty, but it would be hard to attribute more than a small fraction of the combined effects of CO2 and solar irradiance to the latter. The contributions of aerosols, other solar phenomena, and internal climate dynamics are separate matters that have been discussed elsewhere, and their main relevance is the principle that high sensitivity to any of these moieties implies high sensitivity to anthropogenic and solar irradiance forcing as well.

        I’m not sure exactly why Hansen has chosen to emphasize the 60 percent point as a marker. However, time constants for response to a perturbation are sometimes expressed in terms of the “e-folding time” needed to reach 1/e or about 63 percent of the distance to equilibrium, which is useful, particularly for functions with an exponential decay toward equilibration, since an actual equilibration time is not calculable.
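
        For readers unfamiliar with the e-folding convention Fred mentions, a minimal sketch (a pure single-exponential approach to equilibrium, which the discussion below argues is too simple for the real ocean):

        ```python
        import math

        # Pure exponential approach to equilibrium: T(t) = T_eq * (1 - exp(-t/tau)).
        # One e-folding time tau covers 1 - 1/e ~ 63% of the distance to
        # equilibrium, which is why the ~60% mark roughly coincides with one
        # time constant.
        def fraction_equilibrated(t, tau):
            return 1.0 - math.exp(-t / tau)

        print(round(fraction_equilibrated(1.0, 1.0), 3))  # 0.632
        # Time to reach exactly 60%: t = -tau * ln(0.4), about 0.92 tau
        print(round(-math.log(0.4), 2))
        ```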

      • If only 60% is being realized in 100 years, then I think the better interpretation is that the real sensitivity is about 1.8°, not 3.
        100 years is simply hand waving and hype.

      • Alexander Harvey

        Fred:

        Unless I am mistaken, this paper goes beyond e-folding time and rests its arguments squarely in the evaluation of the systems characteristic response function.

        This may not seem much but it is a critical difference in approach and puts the use of simplified models on a much sounder footing.

        All of section “13.2. Climate response function” is key to the approach and the recommendation in section 3:

        “It would be useful if response functions as in Fig. 7 were computed for all climate models to aid climate analysis and intercomparisons. Also, as shown in the next section, the response function can be used for a large range of climate studies.”

        shows their thinking, as does their readiness to move away from AOGCM (ModelE) ocean characteristics if they are not borne out by real world data.

        In terms of papers that highlight the simplified model approach, this could well be a landmark (although it is not the first) and if that be the case a lot of e-folding type arguments will look very last century, which I think is a good thing.

        Alex

      • Thanks – That’s an interesting perspective, Alex, and it certainly seems like the response function will prove to be a useful analytical tool, and perhaps more relevant to actual climate response than simplistic assumptions about exponential decay. Does it explain Hansen’s choice of 60 percent distance to equilibrium as a benchmark? Probably not. As an arbitrary marker, 60 percent is useful because it will generally involve time intervals of interest to us (decades), and represents a point on the curve that conveniently separates steeper and shallower portions.

        In my naive and literal-minded way, I’m always interested in the physical meaning of a mathematically described function. Do you have any sense of how that applies to the response function Hansen utilizes and how much that would differ from exponential? It’s easy to visualize exponential decay as a response proportional to the existing quantity of whatever is decaying regardless of the starting value. I’m curious as to how the climate might respond differently from that behavior depending on where it was in the past.

      • Alexander Harvey

        Fred:

        Somewhere on this site I have written oodles of stuff on response functions and LTI (linear time invariant) systems.

        Some of it is just my wittering on but there is good stuff on forming flux series / response function convolutions and how one can determine the system statistics from the response function.

        The most important thing is that it is analytic, tractable and completely general within the limits set by being LTI.

        I will see what I can find.

        I pressed for luck, reloaded page, did it again but it is still not going to work.

        Alex

      • Alex – thanks. I’m particularly intrigued by the concept that the past history of the temperature response to an energy imbalance affects its response to the remaining imbalance at time t, and the extent to which this dependence on the history reflects the influence of feedbacks initiated at an earlier time.

      • I bet this won’t indent either, but here goes.
        Only the response function of a well mixed slab ocean of finite depth would be exponential. The real ocean has different effective depths depending on what time-scale is being considered. For example, the ocean easily changes several degrees in the course of a year, but its annual mean changes far less over decades. The effective depths taking part are different, and as the depth increases, its time-scale increases. This is why the ocean response is not exponential with a single time scale, but actually has a varying time-scale, lengthening with the forcing time scale.

      • Jim D- That’s a good point. I think Susan Solomon et al pointed that out in their 2010 PNAS paper – short-lived forcings (e.g., volcanic eruptions) equilibrate almost exclusively with the upper ocean mixed layer, and a time constant derived from that layer alone suffices to describe the response, whereas persistent forcings from GHGs with a long lifetime, such as CO2, require the additional response of the deep ocean for accurate quantitation. At best, I suppose one could try to decompose the response into a fast and slow one, as Isaac Held has done, even though the dichotomy is slightly artificial – The Recalcitrant Component of Global Warming.

      • Alexander Harvey

        Fred:

        I haven’t been able to find what I was looking for but on reflection there is so much in the paper that I think it might be a distraction right now.

        But I have put a peg in the ground on the value (and limitations) of LTI systems and I can return to it anon. The formalism of feedbacks under LTI is also generalised (of which the more recognised equations are a special case) as they take into account the lags between the temperature signal and the generation of the feedback flux.

        The more I read of this paper the more I suspect that it is going to be divisive. It is another Hansen masterstroke, the sort of stuff he does so well, and needs his clout to get it published and heeded. I really do think it is radical. On the face of it, it seems to be a tidying up exercise, bringing a whole lot of threads together under one heavyweight umbrella, but I find the criticism the ocean models get quite something.

        It is my prejudice that certain aspects of the standard model have left it floundering around and hostage to fortune for some while. I think he has tried to drag it all back into shape. The new shape leans heavily on a thermally lighter ocean and hence on more sulphate forcings. That alone may be enough to upset a lot of people on all “sides” as at a climate blogopolitical level he has both given and taken from both “sides”.

        Given what I have read so far I am all for Hansen on this, and FWIW I had already bought into the Faustian bargain as a caution. For me that has always followed from my prejudice for a light ocean.

        If he be proved right on the sulphate issue that is not without its consequences. It could imply that we are already in an aerosol geoengineering scenario out of which we might not be able to ease our way.

        I have been fairly consistent on this point, in that I have long said that we should first attempt to remove the sulphates lest we find that we cannot when coal is finally tackled. Sulphates are the big unknown and Hansen has put that point smack in the middle of the debate where I think it belongs.

        Of course, if it isn’t the oceans and it isn’t the sulphates then the sensitivity is lower. Fix the sulphates and we might know that answer too.

        If we share Connor’s dilemma, we are “very rude word”. All it would take would be our shutting down emissions for a week or two and the temperatures would start going upstairs like a bat out of hell and then suddenly a thermally light ocean doesn’t look so friendly.

        Alex

      • Alexander Harvey

        Fred, I have tried to post to your 9:33PM but something in my response doesn’t cut it.

        It was long and I can’t see what causes it to be spiked so I give up.

        Alex

      • Alex (12:23 AM) – Thanks for trying. It may be temporarily caught in the spam filter. I hope it turns up, because your comments are always worth reading.

      • Alexander Harvey

        Fred:

        I have no idea where this post will end up but it is in response to part of 8:52 PM.

        Long tail responses tend to have a type of asymptotic approach where the point of intercept between the current course and the asymptote gets further away in proportion to the distance already travelled.

        To be clearer, in the exponential approach to the x-axis the point of intercept is always the same distance in front of the current position, e.g. if it starts at x=0 and the tangent points at x=1, then when it gets to the position above x=1 it points to x=2, above x=2 it points to x=3, etc.

        In a long tailed case each time it advances by 1 along the x-axis the intercept advances by more than 1.

        It is that sort of property that guarantees that it doesn’t have an e-folding time in the strict sense, but it can be thought of as having a characteristic time that scales with the period of investigation. Look at a short term phenomenon and you measure a fast response time; look at a long term phenomenon and you measure a slow response time.

        If you have that type of response function you find that experimenters who are using e-folding models to extract time constants get all sorts of different results depending on the time scale of the experiment. That should sound familiar; you see it all the time, and it makes me so cross.

        Alex
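
        Alex’s complaint can be made concrete with a toy two-timescale response (my own illustration; the 5-year and 200-year time constants are arbitrary, not taken from the paper). Fitting a single e-folding time to such a curve yields a “time constant” that grows with the observation window:

        ```python
        import math

        # Toy two-box climate response: a fast mixed-layer term plus a slow
        # deep-ocean term (illustrative timescales only).
        def response(t):
            return 1.0 - 0.6 * math.exp(-t / 5.0) - 0.4 * math.exp(-t / 200.0)

        def fitted_efolding_time(window):
            """Naive single-exponential 'measurement': the time at which the
            response reaches 1 - 1/e of its value at the end of the
            observation window."""
            target = (1.0 - 1.0 / math.e) * response(window)
            t = 0.0
            while response(t) < target:
                t += 0.01
            return t

        # The apparent time constant lengthens as the experiment lengthens.
        for window in (10.0, 50.0, 500.0):
            print(window, round(fitted_efolding_time(window), 1))
        ```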

      • Alexander Harvey

        Fred for some reason my response to you did not link correctly please see below.

        Alex

      • Alexander Harvey

        Fred:

        That still didn’t work and I don’t know why, please see above.

        Alex

      • I always try to remember to click on the “Reply” button above one more time just before clicking on “Post Comment”. It’s superstitious, but it seems to work.

      • JCH – Now you tell me!

      • This time, JCH, I clicked twice on reply. Let’s see what happens.

      • Fred, I don’t doubt for a minute that you are right and the slow response portion of the climate sensitivity is unimportant. Not even important enough to include in the calculations. I was just pointing that out in my own odd sort of way.

      • Fred Moolten, 4/19/11, 7:39 pm, Energy imbalance

        By “the response curves in Hansen’s paper, or in similar articles” do you mean his Figure 7, p. 18? This curve is explicitly a percentage, and it behaves as you describe, rising rapidly (in about 200 years) to 60% and then slowly over the remaining 40%. And as you say, it was also explicit in being taken from another article. However, for Figure 7, the forcing was an instant doubling of CO2, and not, as you suggest, part of the increased solar forcing from early in the twentieth century.

        The cause of these slope changes is IPCC’s physically unrealizable model for the uptake of CO2. It is the basis for proclaiming CO2 to be long-lived in the atmosphere (a necessary assumption to make MLO data global, and to make the MLO bulge attributable to humans). IPCC says,

        Carbon dioxide cycles between the atmosphere, oceans and land biosphere. Its removal from the atmosphere involves a range of processes with different time scales. About 50% of a CO2 increase will be removed from the atmosphere within 30 years, and a further 30% will be removed within a few centuries. The remaining 20% may stay in the atmosphere for many thousands of years. AR4, Executive Summary, The Carbon Cycle and Climate, p. 501.

        You should find the three-branch, 50%–30%–20% response IPCC describes here as consistent with Figure 7 as your two-branch, 60%–40% approximation.

        IPCC provides an algebraic formula for this response function in terms of the decay in time of a pulse of CO2, the forcing being the complement of the decay. This decay is in the uptake of CO2 from the atmosphere. AR4, Table 2.14, fn. 1, p. 213. The formula is

        a_0 + sum[a_j*exp(-t/tau_j); j=1,3]

        where the (a, tau) coefficient pairs are {(0.217, [∞]), (0.259, 172.9 yrs), (0.338, 18.51 yrs), (0.186, 1.186 yrs)}. The constant term, a_0, is consistent with Figure 7, tending to confirm the equation represents the chart.

        This recipe IPCC attributes to the revised version of the Bern Carbon cycle model used in Chapter 10 of this report (Bern2.5CC; Joos et al 2001). It is also attributed to Prentice et al., 2001; Archer, 2005; AR4 ¶7.3.4.2; and AR4, ¶10.4. See also AR4, ¶7.3.1.2, p. 514. The model is four-branch, 21.7%–25.9%–33.8%–18.6%.
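
        The quoted decay formula is easy to evaluate directly. A minimal sketch using the (a, tau) coefficients given above (note that the coefficients sum to one, and that roughly half of a pulse remains airborne at 30 years, matching the IPCC wording quoted earlier):

        ```python
        import math

        # Bern2.5CC impulse-response fit, as quoted from AR4 Table 2.14, fn. 1:
        # fraction of a CO2 pulse still airborne after t years.
        A0 = 0.217                                                # non-decaying term
        TERMS = [(0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]  # (a_j, tau_j)

        def airborne_fraction(t):
            return A0 + sum(a * math.exp(-t / tau) for a, tau in TERMS)

        print(round(airborne_fraction(0), 3))     # 1.0: coefficients sum to one
        print(round(airborne_fraction(30), 2))    # ~0.5: half removed within 30 years
        print(round(airborne_fraction(1000), 2))  # long tail approaches a_0 = 0.217
        ```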

        The equation represents a physical model in which each of four processes has its own reservoir of size a_j. Whether the concept for these reservoirs comprises separate pools or pipelines, it does not exist in the real ocean. The most rapid uptake, the solubility pump, will exhaust the atmospheric CO2 with its characteristic time constant of 1.186 years. That is because it has no way to know that it should turn off when it has absorbed 18.6% of the atmospheric CO2.

        IPCC’s equation for the residence time of CO2 in the atmosphere is correct. It is high school physics. IPCC does not use the equation in the main body of its reports, but it appears in the TAR and AR4 Glossaries. The residence time is the average lifetime of a molecule, and is equal to the e-folding time, tau in the equation above. Using various estimates of reservoir sizes from AR4 and TAR, the mean residence time is between 1.5 and 3.5 years, and for the equation, it is 1.186 years. It is not decades to centuries, nor is it 5 to 35 millennia.
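
        The residence-time arithmetic being invoked is just reservoir size divided by gross flux. A sketch with round figures loosely based on AR4’s carbon-cycle diagram (the 760 GtC and 210 GtC/yr values are my assumptions for illustration, not the commenter’s exact inputs):

        ```python
        # Mean residence time of a CO2 molecule = atmospheric reservoir /
        # gross outflux ("high school physics", as the comment puts it).
        atmosphere_gtc = 760.0       # atmospheric carbon, GtC (illustrative)
        gross_uptake_gtc_yr = 210.0  # gross land + ocean uptake, GtC/yr (illustrative)

        residence_time_yr = atmosphere_gtc / gross_uptake_gtc_yr
        print(round(residence_time_yr, 1))  # a few years, not decades to centuries
        ```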

        The equation is supposed to represent the various sequestration processes involving the three ocean pumps, the solubility pump, the organic carbon pump, and the CaCO3 counter pump. This is cartooned in AR4, Figure 7.10, p. 530 with a couple of arrows backwards, chemical reactions occurring on un-ionized molecules, and solution substituting for dissolution or solubility, but otherwise, good enough. IPCC paces the processes by applying the stoichiometric carbonate equations (AR4, Eqs. 7.1, 7.2, p. 529), but implicitly by referencing Zeebe, et al., applying the reaction coefficients valid only at thermodynamic equilibrium. The equilibrium condition is represented by the Bjerrum Plot, shown in Zeebe but not by IPCC. However, the surface layer fails all three equilibrium tests, chemical, thermal, and mechanical, for thermodynamic equilibrium. Consequently, the surface layer is the buffer holding excess CO2, not the atmosphere. This buffer allows Henry’s Law to operate unimpeded, and the Revelle Factor is invalid. Furthermore, adding CO2 to the atmosphere does not cause acidification of the ocean.

        By these machinations, IPCC makes anthropogenic CO2 accumulate in the atmosphere, while it has natural CO2 fluxes proceeding at 15 times the rate of ACO2. Because ACO2 and natural CO2 are merely different mixes of 12CO2:13CO2:14CO2, absorption rates for each of the isotopic types should provide that natural CO2 is relatively unabated while ACO2 accumulates. The set of equations in terms of individual rates for the three isotopes has no solution.

        The conjecture of acidification, the global representation of MLO data, and Hansen’s response curve of Figure 7 are all invalid, compounding several errors from the physics.

        IPCC’s model for solar effects, the hydrological cycle, and the carbon cycle does not emulate what is observed from the real world. That the GCMs have no predictive power cannot be surprising.

      • SOMETHING OLD, SOMETHING NEW, SOMETHING BORROWED, SOMETHING BLUE
        FM YOU HAVE NO CLUE

      • I presume you refer to Hansen’s famed 1988 paper? Well, if he had calculated a sensitivity of 3.2C instead of 4.2C the match with observations would still have been quite poor. Describing it as “excellent” is ridiculous.

        The yawning gap between prediction and reality over the past decade, would just be slightly narrower.

      • Fred Moolten, 4/18/11, 5:41 pm, Energy imbalance

        You wrote, The current modal estimate is 3C, and if his model had used parameters consistent with the 3C estimate, the match with observations would have been excellent. At the time of his model projections, many others saw 3C as more realistic, and so his inclination to predict something more severe than mainstream estimates was in evidence at that time.

        The feedback discussion in the paper is cogent and incontrovertible in principle, leaving the magnitudes as items of uncertainty.

        The number 3C is a system gain, the ratio of the temperature rise to a doubling of CO2. As I showed on 4/20/11 at 12:33 pm, Hansen got the denominator wrong. This is his Figure 7, and relies on the physically unrealizable Bern equation, among other bits of false physics. The numerator is the temperature rise from GCMs, and the problem here is that those models have cloud cover parameterized and constant. This means that the modeled total cloud albedo does not change with surface temperature or solar activity. This kills the most powerful feedback in climate, negative with respect to temperature and positive with respect to solar radiation.

        The AGW model is open loop with respect to the dominating parameter of total cloud albedo (I have to use the phrase total cloud albedo because IPCC misappropriated the term cloud albedo to stand for specific cloud albedo, reflectivity per unit area.) The attempt to fit an open-loop model to closed loop measurements is doomed to be a perpetual tail chase. The radiative forcing paradigm is an attempt to simplify thermodynamics by eliminating flow variables, but this also renders RF models unable to assess closed loop gain.

        Science does not mandate any type of model. If RF or any other paradigm makes non-trivial predictions that can be validated, the model is a success. However, when IPCC and Hansen assume a variable to be constant, they have made a mistake likely to ruin their model.

        Climate sensitivity should be much less than IPCC and Hansen estimate. Far from being short term and irrelevant (as you urged on 4/20/11 at 7:51 pm), Lindzen, et al. (2009) results from ERBE (not ENSO) are validating for the lesser sensitivity prediction, and tend to invalidate the IPCC/Hansen model.

        Your conclusion that The feedback discussion in the [subject] paper is cogent and incontrovertible in principle is false on both counts. That paper and IPCC err on the origin and implementation of feedback.

        Moreover, any claim that a model, even a law, is incontrovertible raises an alarm to a scientist.

      • “The prophecies have failed, so he has to keep his schills looking ahead, never actually reviewing what he said in the past.”
        Yes, that’s the way science works. Or would you expect Hansen to present a settled climate science in 1980? And BTW it’s not true, Hansen did actually review his early papers, I’ve read some time ago.
        Is it possible, that the problem you have with Hansen is quite simple – you don’t like him?

      • Craig Loehle

        “hold them accountable for poast predictions.” Actually, I think their predictions are “toast” not “poast”.

    • It’s impossible for them to question anything about AGW. They have to add epicycles to keep the theory. It is easier for a camel to go through the eye of a needle than for a bureaucrat scientist to leave the herd. From the history of science we know it never happens.

  34. This is a fascinating paper, which seems written more as a manifesto than a scientific treatise. I doubt that it will appear in a high impact journal in anything like its present form.

    Hansen is brilliant, insightful, and extraordinarily well-informed. He is also opinionated. Here, he takes on some formidable adversaries, including the IPCC (which, unlike Hansen, perceives a need for models as aids to good climate sensitivity estimates from paleoclimatologic data), and Kevin Trenberth (who has remained uncertain whether climate budget closure is achievable from known OHC data – Hansen does, however, invoke very recent data from Von Schuckmann and Le Traon not yet addressed by Trenberth to support his closure argument). To some extent, the disagreements reflect different perspectives on uncertainty rather than irresolvably different certainties. In this regard, Trenberth and the IPCC take a more conservative approach to conclusions drawn from indirect rather than direct observational data. A prime example of Hansen’s indirect reasoning is his attribution of recent declines in planetary imbalance in part to the subsidence of the “rebound” imbalance from the 1991 Pinatubo eruption.

    There are too many indirect calculations in the paper to convince me of the quantitative validity of estimates for current energy imbalance or of long term climate sensitivity significantly exceeding the 3 C/CO2 doubling based on fast feedbacks. On the other hand, the conclusion that the current imbalance is positive (i.e., a warming influence), and that long term feedbacks based on ice sheet albedo and GHG changes exceed the short term values responsible for current 3 C climate sensitivity estimates seems well grounded in observational data and is probably largely correct. Whether or not that matters much for intervals as short as the next 89 years to 2100 is conjectural – it probably makes little difference.

    As of this time – about 2:40 PM EDT on April 19 – the comments so far don’t appear to me to refute many of Hansen’s principal conclusions in the article, and its weaknesses lie more in the tendency to place too heavy a weight of conjecture on too small a base of certainty than in contradictions with established fact. Some of the comments may have misunderstood or overlooked items in the article or its references that addressed the points raised. As just two examples, Hansen’s “airborne fraction” of 55 percent refers only to the fraction of fossil fuel CO2 emissions remaining airborne – neglecting land use contributions – and concerns about runaway climates were explicitly addressed as currently unfounded. The latter appears to be something of a retrenchment by Hansen of earlier more apocalyptic assertions. He may be mellowing.

    • “On the other hand, the conclusion that the current imbalance is positive (i.e., a warming influence), and that long term feedbacks based on ice sheet albedo and GHG changes exceed the short term values responsible for current 3 C climate sensitivity estimates seems well grounded in observational data and is probably largely correct.”

      Fred, can you separate the forcing and the feedback since the LIA for me so I may know how much of recent warming is forcing with the expected feedback later and how much is just feedback with no future consequences? Pointing me to a paper that does this would be satisfactory.

    • “Hansen is brilliant, insightful, and extraordinarily well-informed. ”

      or he simply re-writes the science with each new climate state he faces while always maintaining a catastrophic CO2 scenario. I’m trying to work out how much of this is new or at least a departure from the ‘consensus’. There seems to be a lot.

  35. The Hansen et al. paper has been pretty effectively deconstructed by several posters here.

    Here’s my summary in a nutshell: lots of words, lots of theoretical deliberations, lots of model stuff, lots of paleo-climate interpretations, lots of guesses piled on top of assumptions, but nothing really new.

    Quickly going through the abstract

    Improving observations of ocean temperature confirm that Earth is absorbing more energy from the sun than it is radiating to space as heat, even during the recent solar minimum.

    What about the recently observed cooling of the upper ocean since the ARGO floats were deployed in 2003?

    This energy imbalance provides fundamental verification of the dominant role of the human-made greenhouse effect in driving global climate change.

    This conclusion is not substantiated by the observed facts: neither the atmosphere nor the upper ocean has warmed since 2001 or 2003, respectively.

    Observed surface temperature change and ocean heat gain constrain the net climate forcing and ocean mixing rates. We conclude that most climate models mix heat too efficiently into the deep ocean and as a result underestimate the negative forcing by human-made aerosols.

    First conclusion may be correct, but the second one is simply an unsubstantiated supposition.

    Aerosol climate forcing today is inferred to be -1.6 ± 0.3 W/m2, implying substantial aerosol indirect climate forcing via cloud changes.

    This is no more than a guess, as the authors point out later.

    Continued failure to quantify the specific origins of this large forcing is untenable, as knowledge of changing aerosol effects is needed to understand future climate change.

    This is more or less an admission that the impact of anthropogenic aerosols is unknown.

    A recent decrease in ocean heat uptake was caused by a delayed rebound effect from Mount Pinatubo aerosols and a deep prolonged solar minimum.

    Another guess. Sort of goes in the direction of “our predictions of upper ocean warming were correct, except for…” [see Nassim Taleb’s The Black Swan]

    Observed sea level rise during the Argo float era can readily be accounted for by thermal expansion of the ocean and ice melt, but the ascendency of ice melt leads us to anticipate a near-term acceleration in the rate of sea level rise.

The “observed sea level rise” can be fully attributed to changing the method and scope of measuring sea level. The old tide gauge method, which was used for decades, shows no late-20th-century acceleration of sea level rise to the rates reported by IPCC, but a slight deceleration instead (and around half the rate measured by satellite altimetry).

    The “ascendancy of ice melt” is a bogus assumption, based on GRACE; continuous long-term satellite altimetry measurements over a period of more than 10 years show no mass loss of either the Greenland or Antarctic ice sheets, but rather a slight mass gain of both.

    Humanity is potentially vulnerable to global temperature change, as discussed in the Intergovernmental Panel on Climate Change (IPCC, 2001, 2007) reports and by innumerable authors.

    Here the “advocate Hansen” shows his stripes. The purported human vulnerability to global temperature change is pure fantasy, regardless of who “discussed” it. The GMTA record shows multi-decadal swings of several tenths of a degree and an overall warming since 1850 of around 0.7C. We appear to have entered a cycle of no warming or slight cooling since 2001, and it is unknown whether or not this is the start of a prolonged cycle of slight cooling as has been observed twice in the long-term record.

    Although climate change is driven by many climate forcing agents and the climate system also exhibits unforced (chaotic) variability, it is now widely agreed that the strong global warming trend of recent decades is caused predominantly by human-made changes of atmospheric composition (IPCC, 2007).

Agreed? By whom? There are several solar studies which tell us that roughly half of the observed 20th century warming can be attributed to the unusually high level of solar activity (the highest in several thousand years), with most of this occurring in the first half of the century. Certainly the authors of these studies do not “agree”. In addition, the most recent decade shows no warming at all, despite a record increase in CO2. This has been attributed to “natural variability”.

    The basic physics underlying this global warming, the greenhouse effect, is simple. An increase of gases such as CO2 makes the atmosphere more opaque at infrared wavelengths. This added opacity causes the planet’s heat radiation to space to arise from higher, colder levels in the atmosphere, thus reducing emission of heat energy to space. The temporary imbalance between the energy absorbed from the sun and heat emission to space, causes the planet to warm until planetary energy balance is restored.

    The GH theory is well known as are the model-based assumptions on climate sensitivity. But what is not well known is how our planet’s climate acts and reacts in the real world.

    The planetary energy imbalance caused by a change of atmospheric composition defines a climate forcing. Climate sensitivity, the eventual global temperature change per unit forcing, is known with good accuracy from Earth’s paleoclimate history.

“Good accuracy”? Hmmm… Paleo-climate data can be interpreted to give essentially any desired result. Real-time physical observations (Spencer, Lindzen), rather than simply model simulations or questionable paleo-climate interpretations, have shown that the net overall feedback with warming is negative, IOW that our planet has a natural thermostat mechanism, which leads to a low overall climate sensitivity. This appears to be largely attributable to the impact of clouds. These are poorly modeled in the IPCC models, as recent cloud super-parameterization studies have shown (these also show the net negative cloud feedback confirmed by the Spencer observations). [Hansen does not even reference either of the two recent papers mentioned above. Why?]

    However, two fundamental uncertainties limit our ability to predict global temperature change on decadal time scales.

    First, although climate forcing by human-made greenhouse gases (GHGs) is known accurately, climate forcing caused by changing human-made aerosols is practically unmeasured.

GHG forcings are not known accurately if one includes feedbacks. IPCC has conceded that “clouds remain the largest source of uncertainty” (Spencer’s observations have cleared up some of this uncertainty, but Hansen ignores these). In addition, the Earthshine project has shown that clouds have acted as a separate climate forcing by changing our planet’s albedo and reflecting incoming radiation (warming from 1985 to 2000 and cooling after 2000). Hansen also ignores these observations.

The authors concede that “climate forcing caused by changing human-made aerosols is practically unmeasured”. Amen.

    Aerosols are fine particles suspended in the air, such as dust, sulfates, and black soot (Ramanathan et al., 2001). Aerosol climate forcing is complex, because aerosols both reflect solar radiation to space (a cooling effect) and absorb solar radiation (a warming effect). In addition, atmospheric aerosols can alter cloud cover and cloud properties. Therefore, precise composition-specific measurements of aerosols and their effects on clouds are needed to assess the aerosol role in climate change.

No doubt about it: how aerosols act in real life is unknown. The IPCC model simulations on this are simply based on assumptions.

    Second, the rate at which Earth’s surface temperature approaches a new equilibrium in response to a climate forcing depends on how efficiently heat perturbations are mixed into the deeper ocean. Ocean mixing is complex and not necessarily simulated well by climate models. Empirical data on ocean heat uptake are improving rapidly, but still suffer limitations.

ARGO has given the first real improvement over earlier measurements (the old expendable XBT devices were known to introduce a warming bias, as team leader Josh Willis conceded). The ARGO measurements have shown no warming since they were commissioned.

    We summarize current understanding of this basic physics of global warming and note observations needed to narrow uncertainties.

    Appropriate measurements can quantify the major factors driving climate change, reveal how much additional global warming is already in the pipeline, and help define the reduction of climate forcing needed to stabilize climate.

“Warming in the pipeline” is an unsubstantiated postulation based on circular logic: an assumed climate sensitivity, plus actual GMTA measurements that show less warming than should have occurred if the assumed sensitivity were correct, with the difference simply relegated to “the pipeline” (rather than adjusting the assumed climate sensitivity).

    “Stabilizing climate” is a ridiculous goal. Our planet’s climate has never been “stable” and there is nothing we can do to perceptibly change our planet’s climate. The authors are apparently dreaming of a “silver bullet” to “fix” something that they assume is “broken”.

    This study is simply intended to give some credibility to the postulation that our planet’s climate is highly sensitive and that the observed warming (from CO2) would actually have been much higher if it were not for human aerosols and the warming still “hidden in the pipeline”.

    Sorry. No sale.

    Max

  36. Judith Curry

I think you can see from the many posts here that a (pardon the expression) “consensus” (with the exception of a few Hansen fans or groupies) is building on the (in)significance of the Hansen et al. essay, despite its length.

    My question: Why would Hansen write such a paper with nothing new in it?

    Is this simply part of the “advocate Hansen” (as opposed to the “scientist Hansen”) at work, trying to keep the CAGW premise alive when it appears to be crumbling left and right?

    I can’t for the life of me figure out why Hansen would write a paper with no new scientific findings, which simply rehashes the old postulations. Whom is he hoping to impress or influence with this paper?

Certainly no one who has been following all this.

    It’s a puzzle to me.

    Max

    • My thinking is that this paper was not written for a science audience at all.
      It is written for an audience that wants ‘sciencey’ things they can say to keep their AGW faith alive.
      His target is the believer community, opinion leaders and politicians who may be hearing too much from them denialist scum and the Koch brothers.

  37. As soon as climate scientists start talking about energy imbalance and radiative forcing, their arguments descend into the realm of physics fantasy.

    Energy is conserved absolutely. But energy does not have to be conserved as heat. There is no such thing as conservation of thermal radiation.

    No amount of calculation will solve this problem. Climate science as a discipline needs to properly understand basic physics and thermodynamics and seriously consider Gerlich and Tscheuschner. Next prove that trace gases can actually experimentally modify atmospheric energy transfer in a measurable manner.

    • The Earth is not an isolated system, and therefore its energy is not conserved globally. Energy comes in and goes out, some temporarily stored, some temporarily released. There is not much to seriously consider in G&T, other than their analytical solution of formal averages for a non-conductive, non-rotating ball in space.

      • Energy is conserved in the universe and its behavior is well described in standard laws of physics. Heat energy is well described by the laws of thermodynamics. Transfer of energy between bodies of the universe is described by these laws, which have yet to be proven to be wrong. Other equations for more complex effects may be mathematically derived from the basic laws. “Greenhouse” theory is not one of them.

  38. @ judith curry

    Thanks for the link to the Hansen et al. paper.
    It helped me to clarify the various meanings of climate sensitivity and I learned about non-GCM approaches to evaluate it.

The style is – hm – different and interesting. Hansen is a good narrator, but did he write it for scientists or for well-informed laymen like me? I would assert that a climate scientist should already know most of it.

  39. Craig Loehle

It is disturbing to me that the entire argument turns on a quantity (sulfates) which is unmeasured and which works in an unverified and vague manner, like pixie dust. The supposed history of sulfates follows no logic of general pollution emissions.

  40. I’m sorry Judith. I really should read this paper. But due to Hansen’s track record, that would be like devoting a lot of time to reading a financial report from Bernie Madoff.

  41. ‘The difficulty with the satellite approach becomes clear by considering first the suggestion of measuring Earth’s reflected sunlight and emitted heat from a satellite at the Lagrange L1 point, which is the location between the sun and Earth at which the gravitational pulls from these bodies are equal and opposite.’

    Strawman – no one does this.

    ‘The precision achieved by the most advanced generation of radiation budget satellites is indicated by the planetary energy imbalance measured by the ongoing CERES (Clouds and the Earth’s Radiant Energy System) instrument (Loeb et al., 2009), which finds a measured 5-year-mean imbalance of 6.5 W/m2 (Loeb et al., 2009). Because this result is implausible, instrumentation calibration factors were introduced to reduce the imbalance to the imbalance suggested by climate models, 0.85 W/m2 (Loeb et al., 2009).’

Although absolute values remain problematical, outgoing radiative flux is reported as anomalies that are known with much greater accuracy. Here is the CERES data from KT at Pielke Sr.’s site – http://pielkeclimatesci.wordpress.com/2010/04/27/april-26-2010-reply-by-kevin-trenberth/

    You can see a net increase with SW and LW contributions. The only way Hansen can be right is if you believe that cloud cover doesn’t change.

  42. Alexander Harvey

    Well I say! Bloody Marvelous! :)

    Nice one Jim!

I haven’t read it all by any means, but it looks fine, hits sweet spots. It also has a big knob to turn and has in effect reneged on previous attempts to model the response function.

    It does look a bit like it has thrown Trenberth under a bus, surprise, surprise.

    To be honest, if it still looks good after a read or two, I am not sure that I could hope for more.

Special attention should be paid to their response function, as it is critically important for determining the statistical properties of the system, and particularly the form of the natural variability.

Their handling of natural variability may be a weak point.

I will have to check, but at first glance the response function’s equivalent filter colour is not red but near pink; that being the case, a stochastic version based on their response function may give more long-term (multidecadal) variance than we are used to seeing.

    There seems to be an error here:

    “The likely answer becomes apparent upon realization that the surface temperature change depends upon three factors: (1) the net climate forcing, (2) the equilibrium climate sensitivity, and (3) the climate response function, which depends on the rate at which heat is transported into the deeper ocean (beneath the mixed layer).”

The response function embodies the climate sensitivity, so it is just (1) and (3) above, as indicated by their equation:

T(t) = ∫ R(t) [dF/dt] dt

which, incidentally, is I think not formally correct and might be better stated as a convolution anyway.
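For the record, the convolution form can be sketched numerically. In the following toy illustration, the response function shape, the forcing ramp, and the sensitivity are all invented for illustration and are not the values used in Hansen et al.:

```python
import numpy as np

# Toy sketch: surface temperature response as a discrete convolution of
# a climate response function R with yearly forcing increments dF.
# R(t) is an assumed toy function (fraction of equilibrium response
# reached t years after a unit forcing step), not the one in the paper.

years = np.arange(100)
R = 1.0 - np.exp(-years / 30.0)      # toy response: ~63% after 30 years
F = np.linspace(0.0, 2.0, 100)       # toy forcing ramp, W/m^2
dF = np.diff(F, prepend=0.0)         # yearly forcing increments

sensitivity = 0.75                   # assumed, deg C per (W/m^2)

# T(t) = sensitivity * sum_{t' <= t} R(t - t') * dF(t')  -- a convolution
T = sensitivity * np.convolve(dF, R)[:len(years)]
print(T[-1])
```

Because R is non-decreasing and the forcing increments are non-negative, T rises monotonically toward, but never reaches, the equilibrium value sensitivity × F.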

Around the time that RealClimate did their annual model comparison update, I did enquire as to how ModelE seemed to have lost about half its oceanic heat uptake since last year. I didn’t get a reply then; perhaps I have now?

    Anyway,

    Bravo Jim!

    Alex

    • Alex – I don’t see where he has thrown Kevin Trenberth under a bus. Could you please elaborate? GISS, in my reading, long ago indicated no missing heat by their estimation.

      One way I keep track of this stuff is to borrow Roger Pielke’s tracking:

3. IF the diagnosed radiative forcing of +0.16 Watts per meter squared in the upper ocean plus the 0.095 (revised from 0.07) Watts per meter squared below that level (assuming the rates did not change in the later part of the current decade) are robust in the final analysis, the total of 0.255 (revised from 0.23) Watts per meter squared is significantly below the 0.6 Watts per meter squared predicted by Jim Hansen from the GISS model for the time period 1993 to 2003 (see).

      • Alexander Harvey

JCH,

I don’t know about a long time ago; I noticed that ModelE seemed to have got a lot less heat-thirsty a couple of months back, but I think at the beginning of last year it was still needing about 1 W/m^2 of OHC uptake.

        Put it this way, the 0.9W/m^2 (~2004) wasn’t due to Trenberth or the satellite data but came out of the models and I think modelE in particular dating back to a 2003/2004 Hansen paper.

Fig 19, comparing Trenberth to this Hansen result which has no missing heat, does not leave Trenberth sitting pretty. What Hansen seems to have done is push the heating back to a peak around 2001, which keeps his 2003/4 paper more or less in play whilst still kicking the missing heat anomaly into touch.

        Alex

      • Alexander

        the 0.9W/m^2 (~2004) wasn’t due to Trenberth or the satellite data but came out of the models and I think modelE in particular dating back to a 2003/2004 Hansen paper.

        I think you’ll see that it came from Hansen et al. 2005 (the “hidden in the pipeline” paper he co-authored with Josh Willis and others).

        http://www.sciencemag.org/content/308/5727/1431.full.pdf

        Our climate model, driven mainly by increasing human-made greenhouse gases and aerosols, among other forcings, calculates that Earth is now absorbing 0.85 ± 0.15 watts per square meter more energy from the Sun than it is emitting to space.

There he calculates a figure of 0.82 W/m^2 (using circular logic and his models), which he then rounds up to 0.85 W/m^2, and which K+T round up again to 0.9 W/m^2 and use as a “plug number” for their energy balance “cartoon”.

        Max

      • Alexander Harvey

        JCH,

perhaps more significant is this:

        “GISS modelE-R, for example, achieves only 60 percent response in 100 years. At least several other climate models used in IPCC (2001, 2007) studies have comparably slow response. Diagnostic studies of the GISS ocean model show that it mixes too efficiently, which would cause its response function to be too slow. Therefore we tested alternative response functions that achieve 75 percent and 90 percent of their response after 100 years. In each case we let current human-made aerosol forcing have the magnitude that yields closest agreement with observed global warming over the past century.”

        which simply says:

maybe ModelE gets the oceans wrong and the AOGCMs get it wrong too, so let us pick an ocean that agrees with the data.

        Alex

      • Alex

        Yeah. Or maybe: our models tell us that we should have seen warming of 1.2C. We have only observed warming of 0.65C. But since our model assumptions cannot possibly be wrong, the observations must be. So let’s assume that the difference is “hidden in the pipeline”. That should do the trick.

        Max

  43. How much of this paper is based on factual climate data?

    How much is based on guess-work/opinion?

    Even the semi-evidence-based paleoclimate data is a subjective analysis of the evidence.

  44. However, two fundamental uncertainties limit our ability to predict global temperature change on decadal time scales.

    I disagree.

    Based on historical global mean temperature anomaly (GMTA) patterns, it is possible to predict the GMTA on decadal time scales within the year-to-year natural variability.

    In the following graph, all the GMTA for 130 years lie in the yellow shaded region within the year-to-year variability of 0.28 deg C.

    http://bit.ly/cO94in

    Study of the GMTA shows that the year-to-year natural maximum variability is about 0.29 deg C. For example, from 1956 to 1957, the GMTA increased by 0.28 deg C from –0.35 to –0.07 deg C. In contrast, from 1963 to 1964, the GMTA decreased by 0.30 deg C from 0.00 to –0.30 deg C.

    http://bit.ly/f2Ujfn
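For what it’s worth, year-to-year deltas like those quoted above can be pulled out of an annual anomaly series mechanically. A minimal Python sketch, using only the two pairs of values quoted in the comment rather than the full GMTA record:

```python
# Hypothetical sketch: find year-to-year swings in an annual anomaly
# series. The values below are just the two pairs quoted above; the
# real GMTA record would be loaded from, e.g., a HadCRUT file.

anomalies = {
    1956: -0.35, 1957: -0.07,   # the +0.28 C jump quoted above
    1963:  0.00, 1964: -0.30,   # the -0.30 C drop quoted above
}

years = sorted(anomalies)
deltas = {
    (y0, y1): round(anomalies[y1] - anomalies[y0], 2)
    for y0, y1 in zip(years, years[1:])
    if y1 - y0 == 1                      # only consecutive years
}
print(deltas)   # {(1956, 1957): 0.28, (1963, 1964): -0.3}
```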

  46. Something is wrong with the posting here.

  47. steven mosher

It would be interesting to see what the models with the “right” sensitivity projected. That is, if the paleo record is the best constraint, then one should throw out models that have higher or lower sensitivities. hmm

    • Steve – If you visit AR4 WG1, chapters 8 and 9 (particularly 9), you’ll find that both paleo data and more recent evidence leave the range of climate sensitivity rather broad – typically within the oft-quoted 2 – 4.5 C/CO2 doubling. Hansen’s current mid-range value of about 3 C for fast feedbacks is consistent with this evidence. Interestingly, his 1988 model that somewhat overpredicted warming in the ensuing decades (slope 0.27 C/decade vs 0.18 C/decade for HadCRUT data) was based on a sensitivity of 4.2 C, whereas the same model would have matched the observational warming record very well if the parameters used involved a sensitivity of 3 C.

      The paleo data on the Last Glacial Maximum cited in chapter 9 were translated into climate sensitivity ranges through modeling, because gaps in the observational data and differences between that climate (with its much more extensive ice sheets) and today made it difficult to draw exact analogies simply by calculation.
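Trend comparisons like the 0.27 vs 0.18 C/decade figures above come from ordinary least-squares fits to annual anomalies. A minimal sketch with made-up data (a synthetic ramp, not Hansen’s model output or the HadCRUT series):

```python
import numpy as np

# Hypothetical sketch: fit a linear trend (deg C per decade) to an
# annual anomaly series. The series below is synthetic -- an exact
# 0.02 C/yr ramp -- purely to show the mechanics of the fit.

years = np.arange(1988, 2011)
anomalies = 0.02 * (years - years[0])    # synthetic: 0.2 C/decade

slope_per_year = np.polyfit(years, anomalies, 1)[0]
print(round(slope_per_year * 10, 2))     # trend in C/decade -> prints 0.2
```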

      • Fred Moolten: the same model would have matched the observational warming record very well if the parameters used involved a sensitivity of 3 C.

        Yet according to the climate scientists at RealClimate, “Model development actually does not use the trend data in tuning”. [from the RealClimate FAQ].

        Or as Gavin Schmidt wrote to Judith at Collide-a-scape on the ‘Curry Agonistes’ thread:

        However, Judy’s statement about model tuning is flat out wrong. Models are not tuned to the trends in surface temperature. The model parameter tuning done at GISS is described in Schmidt et al (2006) and includes no such thing.

        In the same thread, regarding tuning parameters to adjust climate sensitivity, he wrote (responding to some questions from myself) :

        This means that our ability to globally tune to fit anything other than gross features of the climatology is impossible. We cannot tune for climate sensitivity for instance, even if we wanted to, nor can I force any particular metric to suddenly match much better to observations (I wish!).

        Therefore your speculation above is, according to Gavin, an impossibility.

        Also, despite what Gavin said, Dr. Curry was not wrong – climate models do appear to be tuned to surface trends, by using different estimates of anthropogenic forcing (Kiehl 2008) :

It is found that the total anthropogenic forcing for a wide range of climate models differs by a factor of two and that the total forcing is inversely correlated to climate sensitivity. Much of the uncertainty in total anthropogenic forcing derives from a threefold range of uncertainty in the aerosol forcing used in the simulations.

        Knutti 2008 found that Kiehl’s finding extends to latest CMIP3 models as well.

In other words, modelers appear to have picked different forcing data (particularly aerosol) for different models to complement the climate sensitivity of each model, in order to be able to reproduce the observed temperatures. Given that the forcing datasets, inputs to the models, should be picked according to our best estimates from observations, and not according to the properties of the model, this is questionable scientific practice.

      • Sorry – the last two paragraphs are not part of the quotation from Kiehl.
        I can at least use this opportunity to include the proper references:

        Kiehl, J. T. (2007), Twentieth century climate model response and climate sensitivity, Geophys. Res. Lett., 34, L22710, doi:10.1029/2007GL031383.

        Knutti, R. (2008), Why are climate models reproducing the observed global surface warming so well?, Geophys. Res. Lett., 35, L18704, doi:10.1029/2008GL034932.

      • Oneuniverse – The RC FAQ is correct on this point, as is Kiehl – models are tuned to starting climates, but it is not legitimate to retune them so that they will match trends. Hansen’s 1988 model, therefore, won’t be rerun with new parameters, but if, hypothetically, it had been parametrized in a way to accommodate a climate sensitivity of 3 C, its match to observations would have been good. My point was focused on the climate sensitivity value, and not the virtues or flaws of the model itself. The choice of parameters for starting climates is still a topic of some disagreement, but I believe the point being made is that models have often yielded similar results because they differ from each other in ways that tend to cancel out – e.g., with stronger positive forcings offset by stronger negative ones. This issue is still far from completely resolved, but it does not contradict the above points.

      • “So which forcing scenario came closest to the real world? Given that we’re mainly looking at the global mean surface temperature anomaly, the most appropriate comparison is for the net forcings for each scenario. This can be compared with the net forcings that we currently use in our 20th Century simulations based on the best estimates and observations of what actually happened (through to 2003). ”

        This is from Real Climate. This is the argument you are using. It makes the assumption the current models are correct. Not bad for an early model, and since it wasn’t that bad it is certain the models now are even better. Ignore the fact that scenario B was Hansen’s linear forcing argument and that scenario A was the projection that matches the criteria set forth in the paper. What matters is there is a line on the graph that came close to what happened. Excuse me for a minute, I need to make a few random lines on a graph so that in 20 years I can decide which one was my projection.

      • “Scenario B is perhaps the most plausible of the three cases.” – James Hansen, page 9345, 1988 paper: Global Climate Changes as Forecast by Goddard Institute for Space Studies Three-Dimensional Model

      • if, hypothetically, it had been parametrized in a way to accommodate a climate sensitivity of 3 C, its match to observations would have been good.

        Your hypothetical is emphatically ruled out by Gavin as impossible. (“We cannot tune for climate sensitivity for instance, even if we wanted to.”). (By the way, I’m not endorsing what Gavin says).

        I believe the point being made is that models have often yielded similar results because they differ from each other in ways that tend to cancel out – e.g., with stronger positive forcings offset by stronger negative ones.

No, it’s not about “stronger positive forcings offset by stronger negative ones” – according to Kiehl and Knutti, it’s about the forcings (particularly aerosol) chosen for different models “fortuitously” compensating for the different climate sensitivities of the models, allowing them all to simulate the surface air temperatures – this is, as Kiehl puts it, a “curious aspect” of the different models.

      • I believe you misinterpreted what Gavin was saying. One can’t tune for climate sensitivity, but it’s possible to parametrize models in a manner that changes climate sensitivity as one of their emergent properties.

        I haven’t reread Kiehl recently so my comment was tentative as to whether it applied to his model comparisons, but it makes sense that an inverse relationship between sensitivity and forcings would tend to create a cancelling effect. The literature also reflects the tendency of some models to utilize higher negative aerosol forcing in a manner that offsets positive CO2 forcing.

      • I didn’t misunderstand Gavin. I was just pointing out that what you’re proposing contradicts what he wrote. Adjusting parameters to improve emergent properties of the model is called tuning, e.g. the equilibrium temperature of the model is sometimes tuned in this way to make it realistic.

Also, you’d need to show that the parameters leading to a 3C sensitivity are realistic parameters, otherwise the hypothetical model is no good.

        re: Kiehl and Knutti

Choosing forcing datasets should be done according to our best knowledge of what those forcings were – the fact that they vary inversely with the sensitivity of each model strongly suggests that they were chosen to allow the models to simulate observed temperatures.

We know that in reality all these very different sets of estimated forcings can’t all be correct – there has been only one historical set of forcings – therefore the forcings that deviate from the true historical set are wrong to various degrees, and so are the models they’re matched to, since they wouldn’t be able to hindcast correctly given the true forcings. (And we still don’t know whether the small remaining subset, matched to estimated forcings close to the true forcings, is actually able to predict the future state of the climate.)
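The compensation Kiehl describes can be illustrated with a toy zero-dimensional example (the numbers below are invented for illustration, not taken from Kiehl 2007): if equilibrium warming is roughly dT = S · F / F2x, then a model with higher sensitivity reproduces the same observed warming when driven with a proportionally smaller net forcing.

```python
# Toy illustration of forcing/sensitivity compensation (invented
# numbers, not Kiehl's actual analysis). In a zero-dimensional
# equilibrium picture, dT = S * F / F2X: models with different
# sensitivities S all reproduce the same observed warming if each is
# driven with a net forcing chosen (e.g. via the aerosol term)
# inversely to its sensitivity.

F2X = 3.7          # W/m^2 per CO2 doubling (standard value)
observed_dT = 0.7  # deg C, approximate 20th-century warming

results = []
for S in (2.0, 3.0, 4.5):             # sensitivities, deg C per doubling
    F_needed = observed_dT * F2X / S  # the net forcing that "works"
    results.append((S, F_needed))
    print(f"S = {S} C: net forcing = {F_needed:.2f} W/m^2")

# Higher sensitivity pairs with lower forcing, yet every "model"
# matches the same 0.7 C of observed warming.
```

The printed forcings fall as the sensitivity rises, which is exactly the inverse correlation Kiehl reports across the real models.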

      • I’m afraid we’ll have to disagree, oneuniverse, because it is always possible to reparametrize models. What can’t be done is simply to insert a new climate sensitivity value, but with new parameters, the sensitivity value can be changed. Climate sensitivity is not an input to GCMs, but an emergent property, and will change based on adjustments to the models.

      • Fred, please re-read what I wrote, which was that you’re in disagreement with Gavin, and that I didn’t endorse his statement (and that you’d need to show that the new parameters are realistic).

      • I don’t disagree with Gavin – he’s a modeller and I’m not. My original statement is that a Hansen 1988 model parametrized in a way that yielded a climate sensitivity of 3C (e.g., by adjusting cloud parameters) would have matched observations well. Perhaps, though, we don’t disagree as completely as you may think, because I don’t claim the modeller knows exactly what knobs to turn to get to 3C, but only that if it had happened that way, the match to observations would have been good. I was making a point about the relationship of climate sensitivity to model accuracy, and not about the mechanics of model construction.

      • Fred –
        My original statement is that a Hansen 1988 model parametrized in a way that yielded a climate sensitivity of 3C (e.g., by adjusting cloud parameters) would have matched observations well.

        That’s conjecture. You don’t know that without actually doing what you’re proposing. If models were that predictable, they wouldn’t be needed.

      • Fred, if you reparametrize the model so as to change the 4.2C sensitivity to 3C, it will almost certainly be unable to hindcast the temperatures correctly (remember the forcings will be unchanged), thereby invalidating the model.

      • I don’t think it’s highly conjectural, Jim. Climate sensitivity describes the relationship between forcing and temperature change. The forcings are derived from input data that would not have changed and basic radiative transfer principles, and so the main differences would have involved feedbacks, leading to a modestly shallower slope for temperature rise. The exact degree of matching is conjectural, but I think an improved match would have been very likely.

      • oneuniverse – If you look at the slopes prior to 1984, they were all so shallow (i.e., net forcings were so small) that climate sensitivity differences should have had little effect. I don’t see how the model would have been invalidated.

        These are interesting things to think about, but are we spending too much time on them?

      • I don’t think it’s highly conjectural, Jim.

        I think it is. I’ve worked with linear models that weren’t that predictable without thorough testing, as well as with non-linear models that were nowhere near predictable under any circumstances without testing – and lots of it.

        Climate sensitivity describes the relationship between forcing and temperature change. The forcings are derived from input data that would not have changed and basic radiative transfer principles, and so the main differences would have involved feedbacks, leading to a modestly shallower slope for temperature rise.

        Feedback effects are not always predictable. Which is my point. If you change anything as you indicate here –

        My original statement is that a Hansen 1988 model parametrized in a way that yielded a climate sensitivity of 3C (e.g., by adjusting cloud parameters) would have matched observations well.

        - then the model is no longer what it was and the output is no longer predictable the way you think it is until you validate that output. Not even for a linear model. This is NOT a comment on this particular model, but on ALL models except perhaps the simplest ones. And even there…. From personal experience, it’s the nature of the beast.

        The exact degree of matching is conjectural,

        Yes, very much so.

        but I think an improved match would have been very likely.

        I think it’s possible. But I wouldn’t bet my lunch or my paycheck on it. Of course, YMMV

      • Eh? The temperature rise from 1958-1987 (period of observed temperatures used in Hansen 1988) is approximately 2/3 that of the subsequent rise (to the present).

        Looking at the forcings, there’s little difference in slope between 1958-1987 and 1987 to present, particularly for your preferred scenario.

      • Fred – my last post was in reply to your comment
        If you look at the slopes prior to 1984, they were all so shallow (i.e., net forcings were so small) that climate sensitivity differences should have had little effect. I don’t see how the model would have been invalidated.

      • To oneuniverse – Projection started in 1984 and prior slopes used to confirm model parameters were shallow.

        To Jim – The magnitude of feedbacks in the model is fixed by the values for the forcing and the climate sensitivity as a linear function over the narrow range used, so there are no hidden surprises likely in that regard as long as a particular climate sensitivity is involved.

      • Fred –
        Your own words –
        so the main differences would have involved feedbacks,

      • Fred, the forcings pre-1984 are significant, although you want to dismiss them.

        You don’t know whether your hypothetical parameters will be physically realistic, and you don’t know whether these new parameters will introduce any biases into your model (likely yes – that’s why Gavin said tuning is limited – read his full comments) – yet you seem confident that the model will hindcast ok and forecast even better.

        I think you really need to re-evaluate whether your confidence in your assertion is justified.

      • Oneuniverse and Jim – I’ll sign off at this point, because I realize that to address the latest comments adequately, I would end up repeating myself. I doubt that there are many bystanders waiting for more commentary, and so I thank you for your perspectives.

      • You never answered why you’re confident that you can parametrize Hansen’s model so that it has 3C sensitivity, without bias, possessing realistic equilibrium temperature and radiative balance.

        Still, thanks and goodnight.

      • Fred, by the way, although you say you’d be repeating yourself, you never responded in the first place to the issue of how you know your hypothesised parameterizations will be sensible (apart from my last post, I mentioned it twice, 2nd para of 6.16pm and 2nd para of 10.29pm) – you’ve dodged it three times.

      • Fred Moolten

        You wrote:

        Hansen’s current mid-range value of about 3 C for fast feedbacks is consistent with this evidence. Interestingly, his 1988 model that somewhat overpredicted warming in the ensuing decades (slope 0.27 C/decade vs 0.18 C for HadCRUT data), was based on a sensitivity of 4.2 C, whereas the same model would have matched the observational warming record very well if the parameters used involved a sensitivity of 3 C.

        “Somewhat overpredicted” is an understatement, Fred. He was off by more than 2x.

        Hansen’s “business as usual” (Case A) called for warming of 0.40C per decade (not 0.27C)

        Just to straighten this point out.

        Max

      • The appropriateness of Scenario B rather than A has been discussed quite a few times previously. One can mount arguments on either side of the issue in general, but as a means of evaluating the magnitude of climate responses to observed forcings, B is superior, and therefore more relevant to estimating what value for climate sensitivity would have best matched observations. Whether scenario B best predicted the effects of unforced variability is an issue separate from climate sensitivity to CO2.

      • Fred, do you not see how the only way your reasoning works is if you assume the current models are correct? How do we know they are correct?

      • Steven – We should probably start by agreeing that the 1988 model used by Hansen et al is outdated. Beyond that, though, models are evaluated for internal consistency, accuracy of data input, and skill in matching observed trends. If the models fail any of those tests, they are wrong. How many models are wrong? All of them, because it is a practical impossibility, and probably a theoretical one as well, for them to achieve 100 percent accuracy. They become useful when the range of probable error is small enough for predictions to guide planning. A good analogy is with weather models. Despite their limitations in terms of either accuracy or predictive horizon, they are extremely useful whether you are planning a picnic or conducting a billion-dollar business affected by the weather. They are also much better than they were a couple of decades ago.

        Current GCMs are also evolving, but there does not appear to be any way they can meet the tests I cited above and still yield climate sensitivity values that are much higher or lower than the canonical range. You can construct them to have much lower or higher sensitivity, but then you must use unrealistic input data for them to reproduce observed trends. There is some wiggle room, but it is not huge.

      • Fred, the only, and I do mean the only, importance of the 1988 projections to me is the resistance to admitting they weren’t right. If the answer I heard was “sure, they were wrong, but that was long ago and we’ve learned much since then,” all I could say would be: good answer. I don’t understand why this isn’t the answer I hear, but it seems more political than scientific to insist that what was obviously wrong was actually right. The right answer is: scenario A is the appropriate scenario to compare to observations, and the following reasons are why we believe it was in error.

      • Fred Moolten, 4/20/11, 5:16 pm, Energy imbalance

        At 4:51 pm, Steven asked, How do we know [the current models] are correct? You answered, We should probably start by agreeing that the 1988 model used by Hansen et al is outdated. Whatever you mean by the 1988 model, Steven needs to know that Hansen’s current model, the 2011 version, is invalid. See my posts at 7:39 pm and on 4/19/11 at 1:35 pm.

        You continued, Beyond that, though, models are evaluated for internal consistency, accuracy of data input, and skill in matching observed trends. If the models fail any of those tests, they are wrong. These are true, but they are not ultimate tests for validity in scientific models. They are minimum requirements for a model to be a scientific model. Models like the GCMs have a surplus of variables, so they can (and must) be made to match within tolerances all the data in their domain. In the schema for models of conjecture, hypothesis, theory, law, even a conjecture that fails your tests is invalid.

        If a model passes these tests and in addition makes a non-trivial prediction, with tolerances, it would be recognized as a valid hypothesis in science.

        If such a prediction is demonstrated accurate with fresh facts, the model is in the zone of being a theory. (Just for completeness, when all predictions implied by the model have been validated, the model graduates to a law.)

        In my experience, a model that is less than a theory may not be used ethically for public policy.

        Based on Hansen’s Figure 7, his model, essentially the AGW model, is less than a conjecture, being an a priori model (not based on data) and contradicted by laws of physics. AGW is invalid.

      • Fred Moolten

        I really hate to have to correct you once more, but

        Scenario A was for “business as usual”, which is precisely what we have had.

        Scenario B was for “CO2 frozen at 1988 rates”, which is obviously NOT what we had

        Scenario C was for “drastic reduction in CO2 emissions starting in 1990”, which also is NOT what we had

        The appropriate “scenario” to compare with the actual trend is “scenario A”.

        Max

      • That is incorrect, Max, for reasons explained above and in more detail previously. Unless many other readers feel it’s important to repeat the explanations, I’ll refer you to the previous discussions.

      • Fred,
        Please do explain again why Hansen did not say or mean what he said and meant.

      • It is ridiculous to claim that Scenario A is inappropriate. CO2 levels have kept rising at much the same pace as they did in Scenario A. They did not stop rising in 1988 (or any other time).
        Hansen’s predictions are wrong by a steadily widening margin. Even if we assumed his Scenario A model was retuned to 3C per century (instead of 4C), his prediction would still be considerably too high. So would just about any other GCM run at that time. (Hansen’s wasn’t the only climate model in 1988.)

      • Fred Moolten

        You state correctly that AR4 WG1, chapters 8 and 9 (particularly 9) leave the range of climate sensitivity rather broad – typically within the oft-quoted 2 – 4.5 C/CO2 doubling.

        But they do not leave it “broad” enough, as has been found subsequent to the publication of AR4, in that they have not included the more recent physical observations of both Spencer and Lindzen, which put the lower end of the range for 2xCO2 at 0.6C.

        So, taking into account this more recent information, the “range” should now be 0.6 – 4.5C for 2xCO2.

        Max

      • Neither Spencer nor Lindzen address long term climate sensitivity to CO2, nor is there any reason to believe that the short term, ENSO-based data underlying their conclusions have much relevance. There are many reasons to be confident their conclusions about the short term data are wrong, but even if they were correct, it would have little relevance to the sensitivity of the climate to persistent changes in atmospheric constituents (or persistent solar irradiance changes).

      • And this is precisely what I discussed in a previous comment – that you make sweeping judgments with little or no evidence and expect agreement because you said so.

      • Jim – You’re right that I haven’t tried to address the issues in detail here, although I and others have done so previously in other threads. It’s hard to decide in those circumstances how much repetition is warranted.

        However, if you read Spencer/Braswell, Lindzen/Choi, and Dessler, I believe you can confirm that long term CO2 sensitivity was not addressed, and two of the papers (S/B and D) acknowledged that their conclusions (which differed from each other) could not be extrapolated to long term CO2 responses.

      • Fred, so now you’ve moved from “because I said so” to “because I said so earlier”.. :)

        Spencer & Lindzen found what appeared to be powerful short-term negative feedbacks which are not represented in the models. Spencer and Braswell found evidence for Lindzen’s hypothesized “iris” effect.

        Fred, you say : “even if they were correct, it would have little relevance to the sensitivity of the climate to persistent changes in atmospheric constituents (or persistent solar irradiance changes).”

        If the models don’t model significant features of the climate system correctly in the short term (significant with respect to SW/LW flux in this case), it’s in no way clear that the long term computation of climate will be correct.

      • CO2 can be a strong preservative. It can permeate, killing all life and intellectual curiosity. Toxic in some ways, but look how it preserves belief.
        ===========

      • Fred Moolten

        You are getting “wrapped around the axle” in your logic here.

        Neither Spencer & Braswell nor Lindzen & Choi discussed theoretical deliberations such as the purported “long-term” effect of CO2 on our climate, as estimated by model simulations based on theoretical deliberations and highly controversial interpretations of some selected paleo-climate data.

        They simply reported actual physical observations from ERBE and CERES satellites, which show that net overall cloud feedback is negative and that climate sensitivity is likely to be around 0.6C for 2xCO2.

        This new empirical data came out after AR4 was published, so obviously could not have been included in the AR4 report. AR4 conceded “cloud feedbacks remain the largest source of uncertainty”, and these more recent observations have simply cleared up some of this “uncertainty”.

        BTW, model studies using super-parameterization for clouds have shown the same net negative feedback for clouds, with the same impact on climate sensitivity.

        Let’s see if IPCC picks up these new data for its AR5 report or sticks with its old assumptions despite the new information.

        Max

  48. Let’s assume, just for the moment, that Hansen is correct. That’s a stretch for many, but stick with me.

    He’s saying that climate models “underestimate the negative forcing by human-made aerosols” and that as a result “Aerosol climate forcing today is inferred to be ‒1.6 ± 0.3 W/m2, implying substantial aerosol indirect climate forcing via cloud changes.”

    The aerosols in question that increase cooling via indirect climate forcing are sulfates.

    EPA and environmental groups have maintained since the 1980s that sulfates kill you. However, that was when air pollution epidemiology was in its infancy and sulfate was the only type of small particle with widespread measurement data available. You couldn’t see whether sulfate lost importance when other pollutants, such as PAHs and catalytic metals in the air, were also included in the models.

    Compared to the other important pollutants in air, such as PAHs and black carbon and many other biologically active carbon species, it is a bit implausible that sulfate per se would cause problems. This is because ammonium sulfates are not biologically active, and in fact ammonium sulfates (the sulfate that is in the air) are part of the internal mechanisms of our cells; PAHs and black carbon and benzene are not.

    It is far more complicated than that, but this is a good starting point. How is it that ammonium sulfate would end up being almost as harmful as PAHs and benzene and formaldehyde and black carbon? Air pollution experts have theories, but to show a theory to be at least somewhat likely, you need to show statistical associations with sulfate in models which now can include many different types of small particles, not just the sulfates as in years past.

    Many recent epidemiology studies, which now include black carbon (a major emission from diesels) among the variables examined, find statistical associations with the black carbon in almost every case, but in far fewer cases for sulfate.

    So suppose, suspending 25 years of incessant “sulfate kills” messaging, that EPA and environmental groups jumped the gun 25 years ago, and now find it difficult to adjust to the new studies. If that is true, then we will be reducing the very emissions which according to Hansen are keeping the world from rapid warming. And we will be doing so on the arguably false assumption that sulfates must be reduced because they kill.

    It is a long story, but I encourage readers to look into it. Here are three references to start with (you can get the first two free by googling a bit):

    1. Bell ML, Ebisu K, Peng RD, Samet JM, Dominici F. 2009. Hospital admissions and chemical composition of fine particle air pollution. American Journal of Respiratory and Critical Care Medicine 179(2), p. 1115-1120.

    2. Peng, R.D. et al. 2009. Emergency Admissions for Cardiovascular and Respiratory Diseases and the Chemical Composition of Fine Particle Air Pollution. Environmental Health Perspectives. 117 (6)

    3. Lipfert, F. et al. “Air Pollution and Survival within the Washington University-EPRI Veterans Cohort: Risks Based on Modeled Estimates of Ambient Levels of Hazardous and Criteria Air Pollutants.” Journal of the Air & Waste Management Association. Vol. 59 April 2009, pp. 473-487.

  49. This study, based on ARGO measurements, refutes Hansen’s “ocean warming” assumptions.

    http://www.pas.rochester.edu/~douglass/papers/KD_InPress_final.pdf

    • How deep is your ocean?

      • How deep is your ocean?

        Here’s what the ARGO home page tells us:

        Argo is a global array of 3,000 free-drifting profiling floats that measures the temperature and salinity of the upper 2000 m of the ocean. This allows, for the first time, continuous monitoring of the temperature, salinity, and velocity of the upper ocean, with all data being relayed and made publicly available within hours after collection.

        Max

      • Max, the Knox & Douglass study is for 0-700 m.

      • Thanks, oneuniverse. That’s “deep” enough for me.

        It’s also “deep enough” for Hansen (he used a range of 0-750m in his “hidden in the pipeline” study).

        So Knox & Douglass and Hansen et al. are both discussing the same “upper ocean”.

        Max

      • Hansen actually uses the von Schuckmann et al. (2011) study for his ARGO-only data – their study considers the full 0-2000m, and finds net energy gain over 2003-2008.

        However, data for the lower depths are considered to be less reliable, and there are four studies (although one is an unpublished study by Willis), including Knox and Douglass, which find probable cooling for 0-700m for the same period, which is unusual if 0-2000m has warmed. Hansen should have mentioned the two published studies available to him that find cooling at 0-700m.

      • It’s also unusual that the error estimate for von Schuckmann’s 0-2000m is smaller than those for the four 0-700m studies. As Knox and Douglass say, “Why the von Schuckmann case is an “outlier” is worthy of further study.”

      • Why? The first 700 meters is included in von Schuckmann’s work, and I believe her work ends later in 2008.

      • Why? Because vS’s error estimate is lower, although it’s including more unreliable data, and because, if all the studies are accepted, naively interpreted, the top 700m didn’t warm, but 700-2000m did, which presents an unusual physical situation. Further study seems to be in order. (And yes, check the work of 700m studies too).

      • Well, we don’t know that von Schuckmann agreed that the upper 700 meters is cooling. The ARGO data for that layer that she used extended later into 2008.

        A couple of months ago Bob Tisdale, during a discussion of the K&D paper, posted here that the upper layer is now warming, and Josh Willis sent Roger Pielke Sr. an unpublished analysis that also appears to show slight warming in the upper 700 meters.

        I have found two comments by scientists about von Schuckmann’s work below 700 meters. One indicates to me she may have overestimated warming; the other indicates to me she may have underestimated warming.

        It’s a draft paper. Make your case and send it to Hansen. Maybe he’ll include it.

      • Either there’s an unusual physical situation, or the studies are contradictory – either way, further study is in fact required. Not sure why you’re against it.

  50. JHC, why is scenario B the most plausible? Is it for the same reasons scenario A must eventually be on the high side of reality? What were those reasons again? Something about running out of fossil fuels or reducing co2 emissions wasn’t it?

      • Steven

        Thanks for link to Hansen 1988 paper.

        It clearly states:

        Scenario A assumes continued exponential trace gas growth [as actually occurred at a CAGR of a bit over 0.4% per year], scenario B assumes reduced linear growth of these gases, and scenario C assumes a rapid curtailment of trace gas emissions such that net climate forcing ceases to increase after the year 2000 [obviously neither B nor C actually occurred, so the scenario to compare with the actual record is "A"]

        (regardless of what Fred claims).

        Max

      • Max,
        Checking the numbers from Appendix B (which requires a little effort, as they are not reported directly), one can conclude that the actual development of CO2 concentrations in the atmosphere has been rather close to scenario B, while the growth of scenario A significantly exceeds the later development.

        Insofar as the paper favors scenario A, it is in error concerning the CO2 levels. The point of Fred, as I have understood it, is based on this observation. Therefore he has said that in judging the accuracy of the climate model we should forget scenario A and compare with scenario B, which is based on more correct assumptions about emissions.

      • Pekka, scenario A is the correct scenario regarding emissions. The argument boils down to this. You have a cause and effect paper. If this much emissions, then this much warming. Those are the two most important factors in a cause and effect paper: the cause and the effect. If your effect turns out to be less than expected because you miscalculated the amount of co2 that would stay in the atmosphere, or because you miscalculated the amount of methane feedback, it doesn’t mean the emissions (the cause part of the cause and effect paper) were reduced. It means you made errors.

      • Steven,
        I did some calculations based on the numbers given in the appendix. It states how the annual increases in the CO2 concentration develop up to 2010. For scenario B the annual growth is a constant 1.9 ppm from 2010, and it is stated how this value is reached over the earlier years. From that information it’s possible to deduce that the concentration is 390.1 ppm in 2009, while the measured value is 387.35 ppm. Thus even scenario B has a higher concentration than the observed one.

        There is some uncertainty in the initial value used, which means that my calculation may deviate slightly from the original, but not to the extent that the relative ordering would be changed. Scenario A has a stronger growth in the concentration, leading to a value about 4.5 higher in 2009, which is definitely significantly too high. Thus the CO2 forcing is bound to be essentially too strong in scenario A, while it’s closer to the correct value in scenario B. This is the starting point for the climate model, and it makes comparison with scenario A results meaningless for judging the climate model.

        From other comments of this thread we can read that Steve McIntyre has reached similar conclusions. He has done a more comprehensive study comparing also other GHG’s.

        http://judithcurry.com/2011/04/18/earths-energy-imbalance/#comment-63945
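As a sanity check on the arithmetic, here is a minimal sketch of the two growth shapes Pekka contrasts – a constant annual increment versus an increment that compounds from year to year. The 351 ppm 1988 baseline, the 1.9 ppm/yr increment, and the 1.5%/yr compounding rate are illustrative assumptions, not Hansen’s exact inputs, so the endpoints are only indicative of the divergence between the two paths:

```python
# Back-of-envelope comparison of a constant-increment CO2 path
# (scenario-B-like) with a compounding-increment path (scenario-A-like).
# Baseline, increment, and growth rate are illustrative assumptions.

def constant_increment(c0, dc, years):
    """Concentration series with a fixed annual increment dc (ppm/yr)."""
    return [c0 + dc * t for t in range(years + 1)]

def compounding_increment(c0, dc0, growth, years):
    """Concentration series whose annual increment grows by `growth` per year."""
    series, c, dc = [c0], c0, dc0
    for _ in range(years):
        c += dc
        dc *= 1.0 + growth
        series.append(c)
    return series

b_like = constant_increment(351.0, 1.9, 21)             # 1988 -> 2009
a_like = compounding_increment(351.0, 1.9, 0.015, 21)   # 1988 -> 2009

print(round(b_like[-1], 1), round(a_like[-1], 1))
```

Even with a modest 1.5%/yr compounding, the exponential-style path pulls several ppm above the constant-increment one over two decades, which is the shape of the gap Pekka describes.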

      • Pekka, the concentration of co2 in the atmosphere was not the criteria set forth in the paper. The emissions of co2 were. If you want to argue that A was wrong because it miscalculated how much co2 would stay in the atmosphere that is perfectly legitimate. It does not change the emissions which are in line with those specified in scenario A.

      • Pekka, let me say this a different way. The amount of co2 that stays in the atmosphere from a given amount of emissions is part of a model (I assume). If you have this part wrong then you don’t say the amount of emissions changed. You change this calculation in the model. Just because the error was at the very beginning of the process doesn’t mean it is not still an error.

      • Steven,
        There is no evidence that CO2 persistence was part of the model. On the contrary, it’s clear from the paper that the GHG concentrations are taken as the starting point for the model calculations.

      • “Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth averages about 1.5% of current emissions, so the net greenhouse forcing increases exponentially.”

        Section 4.1 sets forth the criteria for the scenarios. It clearly states emissions. If persistence wasn’t even considered, wouldn’t you consider this a fairly significant error? I just have to assume it was, and that the calculations were wrong, since this would be a much more charitable assessment.

        The paper refers to the changes in several places. In one of them a reference to emission growth rates can be found, but even that is immediately contradicted by noting that they actually jump directly to concentrations. All the other places are clearer and specify concentrations.

        Actually I find all this discussion rather irrelevant. Everybody must agree that the model was incomplete and is outdated. Even a better agreement with global average temperatures would not be very significant, when the model is known to be incapable of describing other major features of climate and oceans. It could well be argued that agreement on one factor is more luck than skill. This whole point is more a historical curiosity than real evidence on any significant issue.

    • He said it was the most plausible, not me. I believe his reason is that he did not think growth in GHGs would keep up with the rate of the years prior to 1988.

      fig. 20, page 36

      He said Scenario A eventually escapes reality, which is apparently where Max latched onto it.

      • Steve McIntyre examined the scenarios closely, and came to the conclusion that actual forcings have been a little below scenario B.

      • oneuniverse – I believe that Hansen believed the growth rate would decline. If you think “most plausible” implies a claim of extreme precision, fine. What I read is a rough estimate of future GHG forcing growth that ended up being essentially correct. Many would have assumed the growth rate would remain the same, or increase, as it’s largely a function of human behavior. He apparently didn’t think that. Ever wonder why?

      • If you think “most plausible” implies a claim of extreme precision, fine.

        What makes you think I think that..? I just linked to McIntyre’s analysis, and noted that he found actual forcings were a little below scenario B.

        Many would have assumed the growth rate would remain the same, or increase, as it’s largely a function of human behavior. He apparently didn’t think that.

        Yet, in his oral testimony to Congress in 1988, he described Scenario A as business-as-usual:

        We have considered cases ranging from business as usual, which is scenario A, to draconian emission cuts, scenario C, which would totally eliminate net trace gas growth by year 2000.

      • Yes, he was saying the growth rate of the 1970s and 1980s, business as usual, would not be plausible. I don’t know why he thought that.

      • Page 9345 second paragraph explains why scenario A must eventually be on the high side of reality. The plausible comment follows in the same paragraph.

    • That was comparing apples to oranges.

      Here is comparing apples to apples:

      http://bit.ly/foJ7wl

      • Girma

        Since Hansen was only speaking of amplitude of change in the 1988 summary, your graph and mine for Hansen’s purposes are equivalent.

        There is no question on Hansen’s criteria that the range of either your earlier domain or mine is distinct from the range of our later domains.

        If Hansen’s premises in 1988 were correct, there’s no disputing his predictions were correct, and indeed exceeded within the terms he set out at that time.

        Of course, by 1988, I’d decided the same thing about Hansen’s predictions and graphic methods as I’ve said here and now about yours.

        Well, admittedly, my questions about Hansen were subtler, since his methods were far and away less invalid then than yours are now.

      • Bart R

        The fact of the matter is that Hansen’s prediction (scenario A) was way off.

        It was a lousy prediction based on exaggerated assumptions of CO2 climate sensitivity.

        It’s just that simple, Bart, no matter how someone like Fred tries to rationalize it.

        And it tells me quite clearly if Hansen couldn’t get past predictions right, his future predictions are probably worthless, as well.

        Max

      • Max

        I don’t follow.

        I’ve spent over two decades thinking the predictions weren’t “right” whether their results ended up right or wrong because of my own fringe and not-terribly skillful views of chaos and climate.

        Never once occurred to me to think of Hansen’s work as worthless.

        Just as work-in-progress by people who achieved and communicated far greater insights than those who had gone before.

        The field today has much undiscovered country still. Early explorations are not worthless; they’re trailblazing.

      • Bart R,
        The success rate of trailblazers is rather small.
        Think of Henry Hudson, for instance:

        http://library.thinkquest.org/4034/hudson.html

        Think of how few of the original crew from the 1492 Columbus voyage survived to old age.
        Think of how the early Nordic and Irish settlers/visitors did in North America.
        Trailblazing is not equal to successful exploration.
        History, I will bet, will see Hansen as more of a lost trailblazer than a discoverer of anything worthwhile.

  51. “We conclude that most climate models mix heat too efficiently into the deep ocean …”

    Here’s a test I propose to assess this statement:

    - Get the monthly mean potential temperature datasets for Argo and the Model being tested, gridded by longitude, latitude, depth and month.
    - Sample the datasets at 45 degrees south across all available longitudes and average the temperature by depth and month.
    - At 45 degrees the top of atmosphere solar insolation follows a sinusoidal pattern over the year. Assuming that the top of sea forcing is also sinusoidal, fit a sine wave to the “depth by month” grid for all depths. This will return the a, b, and c coefficients for the equation y ~ a + c*sin(x-b).
    - Plot the phase shift (b) vs depth. As per my modest understanding, if one dataset shows shorter shifts, then that is the one that is demonstrating the more efficient mixing of heat into the deep ocean.
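The fitting step in the procedure above can be sketched in Python (rather than the linked R code). For equally spaced samples covering a full period, the least-squares fit of a + c*sin(x - b) has a closed form as the first Fourier component; the synthetic monthly series below is illustrative, standing in for one depth level of the “depth by month” grid, not Argo or GFDL data:

```python
import math

def fit_annual_sine(y):
    """Fit y ~ a + c*sin(x - b) to equally spaced monthly means.

    For a full year of equally spaced samples, the least-squares solution
    is the first Fourier component: a is the mean, A = (2/n)*sum(y*sin x),
    B = (2/n)*sum(y*cos x), where c*sin(x - b) = A*sin(x) + B*cos(x)
    with A = c*cos(b) and B = -c*sin(b).
    """
    n = len(y)
    xs = [2.0 * math.pi * i / n for i in range(n)]
    a = sum(y) / n
    A = 2.0 / n * sum(yi * math.sin(xi) for yi, xi in zip(y, xs))
    B = 2.0 / n * sum(yi * math.cos(xi) for yi, xi in zip(y, xs))
    c = math.hypot(A, B)    # amplitude
    b = math.atan2(-B, A)   # phase shift in radians
    return a, b, c

# Synthetic monthly series: mean 10, amplitude 3, phase lag pi/6 (~1 month)
xs = [2.0 * math.pi * i / 12 for i in range(12)]
y = [10.0 + 3.0 * math.sin(x - math.pi / 6.0) for x in xs]

a, b, c = fit_annual_sine(y)
print(round(a, 3), round(b, 3), round(c, 3))
```

Applied per depth level, the fitted b values give the phase-shift-vs-depth profile described in the last step.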

    I believe phase shifts are important for assessing heat transfer rates. For example, Mongolia has a seasonal lag of about 30 days. Mid-latitude ocean surface lags, on the other hand, are at least twice that.

    As you might have guessed, I have already done this comparing GFDL CM2 vs. Argo. They show two distinct patterns. Down to 600 meters, Argo shows a shift of less than pi/2 rads relative to the surface. GFDL shows a shift of a full pi. Based on this, I believe GFDL underestimates the heat transfer rate. But then again, I am not a climate scientist and I understand that there are more factors at play. I am prepared to be illuminated about this matter.
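One simple way to see why a shorter phase shift indicates more efficient vertical heat transport is the textbook solution for periodic heating of a uniform diffusive half-space, where the lag at depth z is z/d radians with d = sqrt(2*kappa/w). The effective diffusivities in this sketch are illustrative round numbers, not values fitted to Argo or GFDL:

```python
import math

# Periodic surface heating of a uniform diffusive half-space has the
# textbook solution T(z,t) ~ exp(-z/d) * sin(w*t - z/d), d = sqrt(2*kappa/w),
# so the phase lag at depth z is z/d radians: a larger effective diffusivity
# (more efficient mixing) means a shorter lag at a given depth.

SECONDS_PER_YEAR = 365.25 * 24 * 3600.0
w = 2.0 * math.pi / SECONDS_PER_YEAR        # annual angular frequency (rad/s)

def phase_lag(z, kappa):
    """Seasonal phase lag (radians) at depth z (m) for diffusivity kappa (m^2/s)."""
    d = math.sqrt(2.0 * kappa / w)          # e-folding / phase depth
    return z / d

fast = phase_lag(600.0, 1e-2)   # vigorous effective mixing (illustrative)
slow = phase_lag(600.0, 1e-3)   # sluggish effective mixing (illustrative)
print(round(fast, 2), round(slow, 2))
```

On this simple picture, a dataset showing a lag near pi at 600 m implies a smaller effective diffusivity than one showing less than pi/2, consistent with the reading above; real ocean mixing is of course not purely diffusive.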

    If you’re interested, here is the plot of shift vs. depth, along with the R code:

    http://sites.google.com/site/climateadj/gfdl-vs-argo-phase-shift

    And here is the plot of the amplitude vs. depth:

    http://sites.google.com/site/climateadj/gdfl-vs-argo-amplitude

    It’s a head scratcher that the amplitudes match so closely, but the phase shifts don’t. I can only speculate as to why.

    Thanks, AJ

    • Crap… the statement I’m assessing is “We conclude that most climate models mix heat too efficiently into the deep ocean …”

  52. COMPARISON OF HANSEN ET AL, 20-AUG-1988, WITH OBSERVED DATA

    http://bit.ly/hDGUJJ

    The 5-year running global mean temperature anomalies of Hansen et al., given in Figure 3 of the paper, are listed below.

    5-Year Running Mean of Hansen et al, 1988,
    for various emission scenarios A, B & C
    Year     A     B     C
    1990    0.5   0.4   0.3
    1995    0.7   0.4   0.4
    2000    0.9   0.5   0.5
    2005    1.0   0.7   0.6
    2010    1.1   0.9   0.6

    The scenarios were defined in the paper as follows:


    We define three trace gas scenarios to provide an indication of how the predicted climate trend depends upon trace gas growth rates. Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth averages about 1.5% of current emissions, so the net greenhouse forcing increases exponentially. Scenario B has decreasing trace gas growth rates, such that the annual increase of the greenhouse climate forcing remains approximately constant at the present level. Scenario C drastically reduces trace gas growth between 1990 and 2000 such that the greenhouse climate forcing ceases to increase after 2000.

    Here is the comparison of the predictions with observations (gistemp)

    http://bit.ly/hp590C

    Year     A     B     C    observed
    1990    0.5   0.4   0.3    0.24
    1995    0.7   0.4   0.4    0.39
    2000    0.9   0.5   0.5    0.48
    2005    1.0   0.7   0.6    0.55
    2010    1.1   0.9   0.6

    Scenario B has decreasing trace gas growth and Scenario C has drastically reduced trace gas growth. In reality, trace gas emissions continued to grow at roughly the average yearly rate, as in Scenario A.

    The above comparison shows that the observed temperatures match scenario C, where there is a drastic reduction in trace gas growth. Since there was no actual reduction in the trace gas growth rate, the comparison shows that the predictions of Hansen et al., 1988 are completely wrong.
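    As a quick numerical check of which scenario the observations track, one can compute the mean absolute difference between the observed anomalies and each scenario from the table values (2010 is omitted since no observed value is given). This sketch is mine, not from the linked source:

```python
scenarios = {
    "A": [0.5, 0.7, 0.9, 1.0],   # anomalies for 1990, 1995, 2000, 2005
    "B": [0.4, 0.4, 0.5, 0.7],
    "C": [0.3, 0.4, 0.5, 0.6],
}
observed = [0.24, 0.39, 0.48, 0.55]

# Mean absolute difference of each scenario from the observations
for name, vals in scenarios.items():
    mae = sum(abs(s - o) for s, o in zip(vals, observed)) / len(observed)
    print(name, round(mae, 3))
# prints: A 0.36, B 0.085, C 0.035 — C is closest, A farthest
```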

    (Hat tip to the Goddard Institute for Space Studies for keeping the data available on its website. When predictions and observations mismatch, the relevant data usually disappear into a black hole.)

  53. I’m grateful to Judy Curry for calling attention to James Hansen’s latest effort. As someone fairly familiar with the subject matter, I found his analysis informative, including a new perspective on climate sensitivity and the planetary energy imbalance that deserves attention, albeit in conjunction with differing perspectives such as those expressed by Kevin Trenberth and others. Unlike Trenberth, Hansen argues that the imbalance and the trajectory of heat added to the oceans over multiple years are not mutually inconsistent. It’s worth scrutinizing his reasoning in detail.

    With subject matter as “meaty” as this, to use Dr. Curry’s characterization, I hope to learn from the paper and the commentary. Much of the latter entailed typical arguments from an agenda, but there were exceptions. In addition, some of the exchanges became too adversarial, and I regret my contribution to that familiar phenomenon. Nevertheless, I found comments from several participants enlightening. Among the participants, Alex Harvey was one who provided insight that I particularly appreciate. My physical intuition is good, but Alex adds a layer of mathematical sophistication that enhances my understanding, reinforces my valid conclusions, and helps me modify my reasoning when necessary. His approbation of the Hansen paper, with reservations, confirms my own confidence that the paper is an important repository of current understanding and novel perspectives that I want to keep handy as one reference source on which to draw, always aware that dependence on a single source is an unsound means of arriving at an accurate picture in any discipline.

    I hope we’ll have more papers like this one to discuss.

  54. testing html tag

    Year     A     B     C    observed
    1990    0.5   0.4   0.3    0.24

  55. “However, if you read Spencer/Braswell, Lindzen/Choi, and Dessler, I believe you can confirm that long term CO2 sensitivity was not addressed”

    Fred! I feel like I’ve been cheated on on my wedding night! The only thing I can recall us ever agreeing on was that the long term climate sensitivity is so unimportant it doesn’t even need to be included in the calculations, and here you are already saying it does need to be included. So does it need to be included or doesn’t it? Or is it only the long term climate sensitivity of natural forcings that isn’t important? Looking at Hadcrut3 it appears that there was approximately 0.5C of warming between 1910 and 1940, and then the aerosols kicked in and prevented either a continuation of transient warming or, at a minimum, the realisation of any long term climate sensitivity. This means that the 0.5C is the transient amount and is 40% of total forcing. We expect 60% within the first 100 years, so there is 0.25C of long term sensitivity to be realised before 2040. Most of this should have already occurred. Yet nobody includes it in attributions. Call it 0.2 if you want, call it 0.15, I don’t care, since, after all, you would have to cancel out the long term climate sensitivity from the earlier forcings also. Once you multiply it by the long term climate sensitivity it cancels out, and the numbers jump right back up. Are my numbers rough and unscientific? Yes indeed they are. This isn’t my job. Show me where someone has done their job, i.e., a reference please.

    • Steven – sorry for the confusion. In my above comment, “long term” refers to decades rather than individual years or months. The intervals to which Hansen’s climate sensitivity estimates of 6C would apply instead of 3C are multi-centennial or millennial.

      • No Fred, I’m sorry. I shouldn’t be jumping on you about this. It isn’t your job either.

      • Alexander Harvey

        Fred:

        Thanks for the plug :)

        This might interest you:

        It is a little dated (2004), but it is the wonderful Dr Munk discussing:

        “Global Sea Level: An Enigma”

        His overall perspective on the “debate” has been not to rush to judgment on the severity, but he holds that the basics are sound. He is interesting in so many ways, and genuinely brilliant. He is one of the few remaining climate people from before the MacDonald/Charney reports era. Would that there were more of his kind; we need them now more than ever.

        Alex

      • Yeah. The Dr. Munk lecture sure is “dated”.

        One of the first slides shown is the Mann “hockey stick”, which has been comprehensively discredited since then.

        Max

      • Thanks for the video, Alex. I was intrigued by the observation that polar ice melting has slimmed the equator through land redistribution to the extent of a measurable increase in the speed of the Earth’s rotation.

        There is an interesting new paper out on sea level by Church and White – Sea Level Rise, attempting to integrate earlier tide gauge data with more recent satellite altimetry, and identifying a slight but detectable acceleration since 1880-1900. An increasing proportional contribution to sea level rise from thermal expansion (also noted in other reports) is alluded to but without elaboration. The abstract is quoted below:

        “Abstract: We estimate the rise in global average sea level from satellite altimeter data for 1993–2009 and from coastal and island sea-level measurements from 1880 to 2009. For 1993–2009 and after correcting for glacial isostatic adjustment, the estimated rate of rise is 3.2 ± 0.4 mm year⁻¹ from the satellite data and 2.8 ± 0.8 mm year⁻¹ from the in situ data. The global average sea-level rise from 1880 to 2009 is about 210 mm. The linear trend from 1900 to 2009 is 1.7 ± 0.2 mm year⁻¹ and since 1961 is 1.9 ± 0.4 mm year⁻¹. There is considerable variability in the rate of rise during the twentieth century but there has been a statistically significant acceleration since 1880 and 1900 of 0.009 ± 0.003 mm year⁻² and 0.009 ± 0.004 mm year⁻², respectively. Since the start of the altimeter record in 1993, global average sea level rose at a rate near the upper end of the sea level projections of the Intergovernmental Panel on Climate Change’s Third and Fourth Assessment Reports. However, the reconstruction indicates there was little net change in sea level from 1990 to 1993, most likely as a result of the volcanic eruption of Mount Pinatubo in 1991.”

      • Alexander Harvey

        Fred:

        I think that during the introductory description of his achievements, his work on deriving ocean temperatures by acoustic means is mentioned in passing. I have no idea what became of that work, although I do know that it involved generating a loud, readily identifiable signal and timing its detection at various listening stations worldwide. One can only contemplate what the whales made of that.

        The rotation rate stuff is interesting and the anomalous response to the 1997/8 El Nino is intriguing.

        A lot of what I like is his style, noticeable in that he supports Mann’s work but notes the controversy as it existed in 2004, leaving the door slightly ajar.

        Also, one could hardly accuse him of reading too much into the data, almost to the point of being vague at some points. As a matter of historical interest, and an illustration that he can be decisive, I believe that it was his sea-state predictive method that was considered crucial for the timing of the North African and Normandy landings, not the weather synoptics as popular history indicates; unfortunately it was a marginal call and more difficulties were encountered than expected.

        Regarding the sea level work, I do need a lot of persuading that optimising methods of extending the historic tidal gauge record to oceanic surface level rise are all that useful as compared to simpler methods.

        In my understanding it amounts to the suggestion that the ocean surface level at all points is a reliable function of coastal tide gauges, and a sparse network at that. In particular I would worry about the effective weighting of the coastal gauges, e.g. whether the majority of the variance attributable to certain EOFs is due only to a small subset of gauges that are not optimally positioned. Here the lack of gauges for equatorial west coast South America (a hot spot for one of the spatial EOFs) prior to sometime between 1940 and 1960 is a concern, but without the temporal EOFs (not shown, I think) it is not possible to say whether that is an issue. That said, I note that this optimised method does not look significantly different from the simple average method post 1910, so perhaps this method does not mislead; but then it doesn’t seem to add much either.

        Alex

  56. Old secret science; sticky wicket?

    The Classic Maya period and the great southern cities appear to have ended in a catastrophic manner. There is evidence of warfare, burning and hasty construction of defensive walls in some city centers. In some cases there is evidence of the massacre of the rulers. Unfortunately, the written history of the Maya provides no insight into what happened. By the end of the classic period the Maya had ceased to erect stone stele with inscriptions.

    Freedom square in Egypt looked just like that. Secrets are bad for freedom, don’t you think?

  57. “Climate sensitivity” is derived from observation. It has no fundamental mathematical or physical proof. It is not founded in thermodynamics.

    Usual physical constants are constant to umpteen decimal places.

    Climate sensitivity estimates by various authors vary by an order of magnitude.

    Climate scientists would be better off forgetting about it, seriously.

    This is not a ” meaty ” technical paper. It is a political manifesto aimed at the faithful and gullible, ending up asking for 100 million dollars in funding. It is a rehashing of the same old poor science with fancy attributions and some downright false claims, like sea level rising.

    The title that will fit this best is ” Hansen’s Mental Imbalance ” .

  59. …but the ascendency of ice melt leads us to anticipate a near-term acceleration in the rate of sea level rise.

    Let us see what the data says.

    They never show us graphs when the data contradicts AGW. So I have drawn the graph of mean sea level change from the University of Colorado at Boulder data for the last 15 years:

    http://bit.ly/fMb7bw

    This result shows the rate of sea level increase has decreased by about 40%, from 3.8 mm per year to 2.26 mm per year. If the current trend continues, the sea level will increase by about 200 mm (about 8 inches, or 2/3 of a foot) by 2100.
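    The extrapolation is simple to reproduce; a minimal sketch, assuming the quoted 2.26 mm/yr rate holds from roughly the time of writing (2011, my assumption) to 2100:

```python
rate_mm_per_yr = 2.26          # recent trend quoted above
years_remaining = 2100 - 2011  # assumed start year for the extrapolation
rise_mm = rate_mm_per_yr * years_remaining
print(rise_mm)                 # ~201 mm
print(rise_mm / 25.4)          # ~7.9 inches, i.e. about 2/3 of a foot
```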

  60. When the thermometer, and Treemometer(R), Mojo fails, go for the ice.

    When the model Mojo fails, go for the aerosols.

    Gotta find an unfailing Mojo somewhere.

  61. Re AGW, or Arranging the Deck Chairs on the Titanic:

    In an ancillary document, IPCC provides the following useful definition that never made it into its Assessment Reports:

    There are two basic types of uncertainty:

    1. [Deck Chairs] Where the relevant variables and functional relationships are known but values of key coefficients (e.g. climate sensitivity) are not.

    2. [Ice Bergs] Where it is not clear that all relevant variables and functional relationships are known. (Referred to as structural uncertainty through much of this report.)

    Rectangular brackets added, Manning, et al., IPCC Workshop on Describing Scientific Uncertainties in Climate Change to Support Analysis of Risk and of Options, 11-13 May, 2004, ¶2.5, p. 12. See IPCC Supplementary Material; reference (2) to IPCC Uncertainty Guidance Note (AR4, Technical Summary, fn. 2, p. 22).

    Deck Chairs: Sea level; volcano eruptions; aerosol concentrations; ENSO fluctuations; radiative forcing; radiative transfer; galactic cosmic rays; temperature lapse rate; fossil fuel emissions; CO2; climate sensitivity; polar ice; scenarios; residence time; urban heat islands; deforestation; human fingerprints; precipitation.

    Ice Bergs: Cloud albedo feedback; solar radiation amplification; turbulent surface layer; Henry’s Law; ocean currents; paleo records.

  62. Can anyone please explain where Hansen accounts for thermal pollution (any earthbound energy source – coal, oil, gas, nuclear) turned to heat as a cause of “forcing”?

    This is not insignificant, as Nordell estimates it accounts for 73% of observed warming.

    • BLouis79 -
      This is not insignificant, as Nordell estimates it accounts for 73% of observed warming.

      I’ve been told (on this blog) by a modeler that it is insignificant. Do you need to ask if I believed him?

      • I’ve been told elsewhere it’s insignificant, on the basis that the energy only causes a small W/m2 forcing. Not sure how they derive the number, but it probably comes out of the computer model. The first law is about conservation of energy. The contribution of heat energy to temperature is well understood and described by the standard heat equation (Nordell’s analysis considers the thermodynamic distribution of heat on earth beyond that):

        ΔQ = c · m · ΔT   (specific heat × mass × temperature change)

        Climate scientists appear to have corrupted the standard heat equation into the formula (which ignores thermodynamics, conduction and convection):

        ΔTs = λ · RF, where λ is the climate sensitivity parameter, derived by observation from Hansen’s computer model.

        The notion of “conservation of radiation” is physical nonsense. Claes Johnson’s thermodynamic model makes much more sense.
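        For concreteness, the standard heat equation referred to above converts an energy input into a temperature change once a heat capacity and a mass are specified. A minimal sketch with illustrative round numbers (the values below are mine, not Nordell’s):

```python
specific_heat = 3900   # J/(kg*K), roughly seawater
mass = 1.4e21          # kg, approximate mass of the world's oceans
delta_q = 1.0e22       # J, an illustrative energy input

# Temperature change from the first-law heat equation Q = c * m * dT
delta_t = delta_q / (specific_heat * mass)
print(delta_t)         # ~0.0018 K
```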

    • I work it out to be 0.01 W/m2 when you average the fossil fuel burning energy released per year over the earth’s surface. Useful numbers:
      30 million J/kg from coal burning
      6 million million kg carbon burned per year globally
      500 million million square meters of earth surface
      30 million seconds in a year

      The millions cancel, so 30*6/(500*30) = 0.012 J/m2/s, i.e. about 0.01 W/m2.
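      Jim’s cancellation can be written out explicitly; a sketch using his round numbers:

```python
energy_per_kg = 30e6        # J/kg from coal burning
kg_carbon_per_year = 6e12   # kg carbon burned per year globally
earth_surface_m2 = 500e12   # m^2 of earth surface
seconds_per_year = 30e6     # roughly a year in seconds

# Average waste-heat flux over the earth's surface
flux = energy_per_kg * kg_carbon_per_year / (earth_surface_m2 * seconds_per_year)
print(flux)                 # 0.012 W/m^2
```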

      • One can derive the contribution of energy at the earth’s surface to temperature based on sound theoretical physics and thermodynamics.

        Nordell & Gervet’s method is described in appendix A of

        http://www.ltu.se/polopoly_fs/1.5035!nordell-gervet%20ijgw.pdf

        If he gets a different answer from the computer model, then someone is wrong.

      • Interesting paper, but it also works out to be 0.01 W/m2 averaged back to 1880. They neglect that the sun supplies tens of thousands of times more than this, and that radiation to space accounts for the balance. They seem to be assuming the earth is an insulated system heated solely by burning and geothermal sources.

      • Jim – I get a similar result simply calculating the relative contributions of CO2 forcing and the heat Nordell calculates. The latter comes out to about 0.5 percent of the former (i.e., 1/200), and even that figure includes some natural sources. The forcing is based on radiative transfer principles (as now confirmed by the A.R.M. and other measurements), and does not require complex GCMs for its estimation. It does of course require the understanding that almost all heat originating at the Earth’s surface escapes to space rather than contributing to a temperature change, regardless of heat source. The Nordell argument was a rather silly one, and it’s somewhat surprising it managed to get published, although the “International Journal of Global Warming” does not appear to be a journal likely to be inundated by submissions from authors hoping to be admired for their work.
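        Fred’s 1/200 figure follows directly from the numbers in the thread; a sketch, where the ~2 W/m^2 value for CO2 forcing is my assumed round number for illustration:

```python
co2_forcing = 2.0    # W/m^2, assumed round value for anthropogenic CO2 forcing
waste_heat = 0.01    # W/m^2, the waste-heat flux computed upthread

# Relative contribution of waste heat vs. CO2 forcing
print(waste_heat / co2_forcing)   # 0.005, i.e. about 1/200
```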

      • By “originating”, I refer to energy absorbed at the surface and then distributed somewhere. Almost all comes from the sun to start with.

      • I find it ironic that this kind of paper gets a free pass on the “skeptic” blogosphere.

        What Nordell computes, as I understand it, is that if one distributes the heat generated on the surface throughout the surface based on known physical facts of thermodynamics and heat transfer, then one gets 73% of the energy accounted for. He does not account for sunspot activity. If one factored in the heat transferred by physical principles, and put that in the computer model, then the answer for “radiative forcing” would only need to be 25% of what is currently estimated. This is well within the bounds of error of the present radiation budget, where the net forcing is inferred from the Hansen computer model and not measured in any verifiable sense.

        It is 100% guaranteed that heat generated on the surface will cause surface warming and that it will not all be lost to space, because of the atmospheric effects that keep the earth warm (whatever theory might be responsible for that).

      • The sun provides tens of thousands of times as much heating as their computed amount. How can that small amount affect anything in comparison?

      • Um, if the sun was so strong, we wouldn’t need a “greenhouse” to keep warm.

        Earth’s atmosphere operates presumably with negative feedbacks in order to maintain a relatively stable atmospheric temperature.

        Basic physics (thermodynamics) also says a warmer body absorbs less heat than a cooler one.

      • The solar heat coming in goes out by IR. The flow is tens of thousands of times stronger than the heat added by man. It is like adding a trickle to a gushing river and expecting its level to go up.

      • Thanks, but the method described makes the same fundamental mistake made by mainstream climate scientists. It assumes all energy is converted to heat flux at the surface. How does one convert heat flux to temperature? Not with the standard physics heat equation or more complex thermodynamic modeling of it. No, one uses the climate sensitivity parameter (derived from Hansen’s computer model), which we know to be flawed. So the question still remains – where does Hansen account for earthbound heat generation?

    My lack of familiarity with some of the jargon in the paper does not make it an easy read for me; however, I gave it a try.

    The point I would make might be simplistic and maybe naive, but can the missing energy not just be stored in total life on the planet? By total life, I mean plants, animals, people, food. I get that water stores energy as heat. Am I not doing much the same? If this is correct, then should the variations in total life on the planet not be considered as part of the energy budget?

    I don’t know how much energy the average eight year old from Glasgow stores at any one time. Nor do I know how much energy is stored in a herd of highland cattle. I don’t even know if it is significant. But if the eight year old and the herd of cattle were not there a hundred years ago, then are we not storing energy that we didn’t previously store?

    Can oil & coal not be thought of as the sun’s past energy emissions stored by planetary processes? Is the energy stored in coal & gas not then released as heat energy? In terms of total energy, is this not also a factor?

    Paul

  64. Hansen seems to have laid all his chips on the table. The shape of the planetary energy imbalance response to the Pinatubo eruption seems to be the main cause of the slowdown in temperature increase over the past decade. Barring another big volcanic forcing, there seems to be no reason why the temperature increase shouldn’t return to 1980′s and 1990′s rates fairly soon. Does that seem like a fairly reasonable interpretation?

    • HR – I more or less agree. There is some evidence that anthropogenic aerosol (cooling) emissions have increased again recently after a late 20th century reduction, and CO2 emission rates have declined due to the recession, but neither of these should have a very dramatic long term effect. Therefore, Hansen’s reasoning leads to the prediction that the warming rate of the 1990′s should return over the next few decades, barring a major volcanic eruption or other large perturbation. (The 1980s are less of a guide, since they included the “global brightening” due to declining aerosol cooling as an additional factor in the temperature increase). In judging the prediction, I would suggest a starting point not strongly influenced by an ongoing El Nino or La Nina, just as one can’t calculate recent trends starting in 1998.

      • Do you have a reference, Fred? I have this one that shows the world continuing to brighten:

        Satellite remote sensing reveals regional tropospheric aerosol trends

        Michael I. Mishchenko and Igor V. Geogdzhayev

        Abstract: The Global Aerosol Climatology Project data product based on analyses of channel 1 and 2 AVHRR radiances shows significant regional changes in the retrieved optical thickness of tropospheric aerosols which had occurred between the volcano-free periods 1988–91 and 2002–05. These trends appear to be generally plausible, are consistent with extensive sets of long-term ground-based observations throughout the world, and may increase the trustworthiness of the recently identified downward trend in the global tropospheric aerosol load.

        http://www.opticsinfobase.org/view_article.cfm?gotourl=http%3A%2F%2Fwww%2Eopticsinfobase%2Eorg%2FDirectPDFAccess%2F22436C69%2DBBD4%2DFF75%2D2D25CDB345A43455%5F138170%2Epdf%3Fda%3D1%26id%3D138170%26seq%3D0%26mobile%3Dno&org=

        Might take a few extra clicks to get to the actual paper; for some reason it doesn’t seem to want to allow the direct route.

      • Well this time it worked. No doubt because I thought it wouldn’t.

      • Steven – This recent paper on Anthropogenic Sulfur Emissions suggests a slight upturn around 2000 reversing the decline of the prior two decades. The effect is small and surrounded by some uncertainty. However, sulfur emissions can generally be regarded as a good index of anthropogenic cooling aerosols. Most of the recent increase, if real, probably involves the substantial increase in energy generation in China.

        The earlier paper you referenced addresses aerosol optical depth over the oceans, and the 2001-2005 data, while not showing a rise, are not inconsistent with a small rise. I don’t see either a small increase or decrease as indicative of a major change in total atmospheric forcing.

      • Hansen cites Mishchenko three times in his draft paper.

        Global dimming and brightening: A review

      • Nice paper thanks for the link.

      • “There is some evidence that anthropogenic aerosol (cooling) emissions have increased again recently after a late 20th century reduction”

        From Fig. 22 it appears Hansen ignores these sorts of details anyway; the forcing line shows continuous dimming.

        “Therefore, Hansen’s reasoning leads to the prediction that the warming rate of the 1990′s should return over the next few decades”

        Why not now? I don’t see any reason to extend the present warming slowdown if you follow Hansen’s approach.

      • I meant that, averaged over the next few decades, we should see a warming rate comparable to that of the 1990s. Now would be a reasonable starting point.

      • Alexander Harvey

        Hi Fred:

        I am still reading my way through the paper.

        I feel that there is an issue in the ice age section that needs resolving. The argument as stated leads to the conclusion regarding sensitivity, but I think it relies on an unstated assumption regarding the secondary effects that occur in response to the primary causes: namely, that they can be stated as a function of global temperature T alone. Here I mean the underlying response, not the random noise.

        Now the elements he quantifies are GHGs and ice extent. If ice extent were solely due to global temperature, all would be fine; but if part of the melt was due to a factor that could promote melting even at constant “T” (global temperature), such as variation in regional and seasonal insolation, then we could form the partial derivative of the responses with respect to the ice extent “I” as well as “T”, the global temperature. This raises an issue as soon as you assume that 3 W/m^2 from ice melt plus 3 W/m^2 from GHG increases is equivalent to 6 W/m^2 of GHG forcing, which he does. His argument for doing so must rest on the response effects being purely feedbacks on T alone. It is not clear to me that this is in any way certain.

        Now this could be ignored if it could be shown that ice extent varies purely as a response to T, which would rule out one of the popular theories for glacial variation.

        I have also looked at some other bits and bobs, mostly to do with time constants and temperature slewing rates. The rapidity of the last glacial retreat nudges me towards thinking that the “causal” forcing (neither fast/slow responses nor GHG outgassing etc.), although maybe small, was not insignificant, perhaps 1.0-1.5 W/m^2, giving an all-feedback sensitivity (including GHG response and ice melt due to temperature) of ~3.0 – 4.5 W/m^2. This level of amplification must also apply to the time constants, which are lengthened proportionately; that puts some sort of lower limit under the strength of the “causal” forcing, or the slew rate is called into question.

        Alex

      • Alexander Harvey

        Correction:

        “all feedback sensitivity (including GHG response and ice melt due to temperature) of ~3.0 – 4.5 ºK/(W/m^2).”

      • Alex – I don’t think Hansen argues that ice-sheet changes are “solely” responses to temperature, but that they are predominantly responses to temperature. Some of his reasoning is described on page 11, where he states “Averaged over the year and over the planet, climate forcing due to orbital perturbations is very small, typically about 0.1 W/m2. This tiny global-mean forcing is able to achieve large global climate changes by commandeering the two powerful slow feedbacks: changes of ice sheet area and changes of long-lived GHGs.” He suggests that the only meaningful non-temperature driver of ice sheet changes is the insolation change from orbital forcing, and that is small. Conceivably, some other phenomenon might also be operating to drive ice sheet changes, but it is not apparent what it might be.

        It is also true, however, as Hansen concedes, that the feedback effect of ice sheet changes is related to their initial extent, and would be greater during a glaciation than an interglacial.

      • Hansen needs a long talk with a glaciologist. Or a mountaineer. He’s hand waving about things that the latter know about qualitatively and the former know about quantitatively.

      • Alexander Harvey

        Fred:

        I am saying that he needs to make an argument that the ice response is driven by temperature or his conclusions are not sound.

        I have read the 0.1 W/m^2 bit, and that is problematic, for the multiplication from 0.1 to 6 W/m^2 (60-fold) tends to lengthen the time constants 60-fold as well (this may not be a point that is often mentioned). Multiplying a typical value for deep oceanic response (as heating at depth is indicated by the benthic data) of say 500 yrs by 60 would be at odds with the rapid end to the last glaciation.
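        The scaling in that argument can be made explicit; a sketch with the stated values (the 500-yr deep-ocean time constant is the assumed typical figure from the comment):

```python
base_tau = 500.0           # yr, assumed deep-ocean response time
amplification = 6.0 / 0.1  # forcing amplified 60-fold (0.1 -> 6 W/m^2)

# Feedback amplification stretches the response time proportionately
stretched_tau = base_tau * amplification
print(stretched_tau)       # ~30000 yr, hard to square with a rapid deglaciation
```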

        As I said above, a primary driver of 1.0 – 1.5 W/m^2 would not have such a large problem, but it would raise questions about his conclusion if it were the result of changes to the ice extent forced by the distribution of insolation, not by changes in global temperature.

        Part of the trouble can be seen here:

        “Although there is a nearly equal change of opposite sign at some other latitude or season, the altered distribution of solar radiation engages slow feedbacks.”

        Feedbacks to what? That is part of the issue: these are indeed slow responses, but in this context they are not feedbacks on the primary driver, as they do not alter the earth’s passage through space.

        Only the part of the melting due to the rise in temperature takes part in the feedback loop. This would still be very significant, but as it is only a part, the other part needs to be considered in the calculations.

        Implicit in the approach used is the notion that one does not need to investigate the precise nature of the fast feedbacks, as they are accounted for in the 0.75 ºK/(W/m^2) figure: just divide the rise in T (4.5 ºK) by the effect of GHG + ice albedo (6 W/m^2) and hey presto.

        But a problem arises if the fast responses (whatever their values) are not wholly fast feedbacks on T. For if they have a partial derivative with respect to ice extent I (as would be the case if I were in some part not driven by T), then we cannot treat the fast responses as feedbacks on T, as we do not know this to be the case.

        An aspect of the same problem can be seen in another way. The sensitivity that we require is the global temperature response to a global forcing such as GHGs. The ice age experiment may be indicating the global temperature response to a dipole forcing (antisymmetric in hemisphere/season) with little or no global component in the driver, a global component in the GHG variation, but some significant dipole component in the ice component. Now that may be all hunky dory if the dipole component of the ice variation is the same as it would have been if the forcing had been globally symmetric, but if that is not believed to be the case then one cannot be sure that one has determined the appropriate sensitivity.

        I am not saying that his result is wrong, but I am saying that additional arguments are required. I am also not saying whether the calculated sensitivity is too high or too low. I am querying whether a forcing that is dipolar in space and season, producing an ice response that may have been dipolar in a fashion different to that produced by a globally symmetric forcing, is a reliable indicator of the sensitivity to the current forcing symmetries.

        Alex

      • Alex – I think I understand Hansen’s point, as well as some parts of your comment, but I’m struggling to understand other parts. I believe he is claiming that orbital changes yielded a sustained 0.1 W/m^2 globally averaged forcing that melted a small quantity of snow and ice, reducing planetary albedo slightly and thereby causing a slight warming. Everything else was a response to that warming, and therefore a feedback on temperature (not orbital changes), with the final result being a very large scale melting, albedo reduction, and warming driven by the initial small rise in temperature. I would also think it possible that during much of the initial phase, what was heated was the ice and snow covering both land and ocean, so that much deglaciation proceeded fairly rapidly at the surface, with the deep ocean warming at a pace sufficient to support ice melting on the surface, but not necessarily at a pace that brought it close to equilibrium until long after much of the sea ice was gone. However, quantifying this would require a modeling effort.

        I agree that a globally averaged 0.1 W/m^2 orbital forcing is an oversimplification, because the forcing that appears to dictate glaciations/deglaciations is experienced at high northern latitudes (e.g., 65 N), and must be larger than 0.1 W/m^2 locally for it to average 0.1 W/m^2 globally. The seasonal and latitudinal variations are of course much larger than 0.1 W/m^2 but in the absence of an additional impetus (such as an imposed forcing from insolation changes), they cancel out, and temperature and ice extent don’t change. Presumably, the orbital effects triggered changes in ocean and atmospheric circulation patterns that globally redistributed the effects of forcing at the high northern latitudes. That would be a feedback of its own, but its relevance to climate sensitivity on human timescales during an interglacial is doubtful, and so the accompanying climate sensitivity values are somewhat academic.

        Finally, I have not yet found a source for the value of 0.1 W/m^2 as a global average, although I don’t find it inconsistent with a larger value at the latitudes critical for triggering the feedback effects. The local value must be much larger than the global average because of opposing effects in the two hemispheres.

      • While the planetary average change over the Milankovitch cycles may be only 0.1 W/m2, the important forcing is at 65 N in the summer, which dictates the ice edge summer melting. This can vary by more than 10% over the cycles (more than 50 W/m2), and so determines the annual mean albedo. Rather than solar forcing, this might be termed ice-albedo forcing, but ultimately it is orbital in origin.

      • Alexander Harvey

        Fred:

        What he has done is smart and basically sound. Smart in that it allows for the deduction of current sensitivity using data from a period when it was possibly much greater.

        He achieved this by using slow responses for which there is some evidence, i.e. GHG and ice albedo, as if they were forcings, and characterising the other responses as fast feedbacks on global temperature “T”.

        This is fine provided that:

        the actual forcing is either miniscule or is included in the GHG or albedo figures,

        all the other responses are solely due to temperature and not some other aspect of the experiment e.g. the result of a change in the distribution of insolation.

        I find the minuscule forcing a bit of a worry, because of its effect on the rate at which warming could take place: first a little melting, then the oceans have to warm, then the GHGs have to evolve, then the ocean has to warm a little more, then more ice melts because it is warmer, and so on. The end of the ice age was quite fast, but importantly the process seemed to start abruptly. Stopping abruptly is not an issue, because that was determined by the current sensitivity.

        There is another conceivable scenario that does not require any originating global forcing: equivalent to what would happen if one started bulldozing the ice into the sea, forcing it to melt and the albedo to fall. So the excessive NH summer insolation would be viewed as a mechanical rearrangement of the ice, not as any net increase in global insolation. As it happens, this in itself would not argue against the conclusion, as the effective forcing produced would be included in the ice albedo term.

        The pattern of the warming produced would remain a concern unless it could be argued that it would have followed the same pattern if, say, the whole process had been driven by a progressive release of a different long-lived GHG such as a CFC.

        He does make an argument from quasi-equilibrium, which is a strong one, and it could be argued that even if the system was only in equilibrium at the start and end of the deglaciation, the conclusion should be sound. But again, would that ensure that the relative melting at the north and south poles be the same as for a GHG-driven warming? This is the same as asking whether restoring the original orbital properties to the Earth now would leave its global temperature unchanged, even if one compensated for the net change in global insolation; i.e., do tilt and eccentricity matter in exclusion of any net forcing? Now here I do need to be specific: what matters is whether the temperature would change due to the fast feedbacks, e.g. a change in the water vapour and clouds, not due to additional melting or refreezing, as those are accounted for in his argument. All this really comes down to is whether the climate as we have it now, given the temperature that we have now, matches that which we would have had if we had warmed the globe out of the ice age by a strategic use of GHGs whilst the orbital properties stayed the same. Are the two cases analogous, and if not, does this matter?

        Now the basic concept allowing an argument about our current sensitivity by looking at data from a period where sensitivity was possibly much higher is a smart way of doing business. It does however require that any change in the sensitivity was solely due to the slow feedbacks in the albedo and the GHGs e.g. the data that he has and none of it due to non-linearity in the fast feedback effects. This should also be a concern but that would go to the heart of the argument and amount to saying I don’t think you can do this so don’t bother.

        Also running in the outside lane is Dr Muller’s orbital inclination dust hypothesis, which would have a forcing that would need to be added to the GHG and ice albedo terms and would reduce the sensitivity a little. I think that knowing the true method of action of the orbital effect would be a boon.

        The argument seems sound and so deserves pushing to see if it has issues; such probing is quite rare in the sensitivity debate. As I said originally, it is about the best I have seen.

        I will say that the section I am commenting on could do with beefing up: it lacks sufficient mathematics to demonstrate how the various factors are accounted for, and it could do with a demonstration of how various causes for the melt would affect the conclusion, and some treatment to show that assuming linearity in the fast feedbacks is not problematic.

        Basically it seems to be a paper that sets the standard and needs criticism on its argument not its conclusion but I can bet which it will get more of. If its argument can survive reasoned challenges then there is some hope. I do wonder if it will benefit in that way, it really needs someone open to the possibility that it is correct to take it on, simply contradicting it with results due to a different approach will not do.

        I am not sure that I have helped you much, it really does need a few or more equations. Particularly the flip that allows slow feedbacks to be treated as forcings.

        I am still thinking about it, and I am not at all certain that my concerns cannot be addressed. The above-mentioned flip does require some mental effort, to make sure one doesn’t fall into the trap of continuing to argue from the point of view of the slow responses being feedbacks (even though they are), or of the sensitivity not having been constant, as that too is tidied away by the argument, leaving only whether the behaviour of the fast feedbacks is equivalent between the two epochs.

        Alex

      • Alex – Thanks for your thoughtful comments. For my edification, could you elaborate a bit on one of your earlier statements:

        “I have read the 0.1W/m^2 bit and that is problematic for the multiply from 0.1 to 6W/m^2 (60 fold) tends to lengthen the time constants 60 fold as well, (this may not be a point that is often mentioned). Multiplying a typical value for deep oceanic response (as heating at depth is indicated by the benthic data) of say 500 yrs by 60 would be at odds with the rapid end to the last glaciation.”.

        How is the time constant you refer to defined? For a specified reservoir with a given heat capacity (e.g., the deep ocean), what is the basis for your conclusion that the magnitude of the constant would be a linear function of the energy imbalance? Wouldn’t that imply that the rate of heat transfer to the deep ocean in absolute terms (e.g., Watts) would be roughly similar for a 0.1 W/m^2 imbalance and a 6.0 W/m^2 imbalance, or am I misunderstanding what you had in mind by “time constant”? Typically, time constants I’ve seen have been e-folding times, but you’ve cogently pointed out that for these phenomena involving multiple reservoirs in the calculations, something better tailored to that process than a simple exponential decay function is appropriate.

      • Alexander Harvey

        Fred:

        FWIW I am not happy about this point either.

        I am trying to go with the flow to some extent and not introduce my own arguments but to explore my understanding of his.

        As I understand it, he suggests that (even in the rapid glacial end period) the system is never far from equilibrium. I also read his suggestion as being that the deep ocean (benthic) temperature changes are 2/3 of the surface (land and ocean) changes. To me this suggests he is considering that the ocean responds to the slow progressive forcing in a slab-like manner. The effective slab depth must be at least his 2/3, so I estimated the thermal mass of 2/3 of the ocean and multiplied this by 0.75 to get a characteristic time (actually I should have multiplied by 0.7 for the ocean area, oops, but it matters little). I also made an allowance for the latent heat of melting. So I should have got something like 350 years. In truth the surface and the deep ocean are probably not that tightly coupled on timescales shorter than 1000 years at present (roughly the overturning time for a mean upwelling of 4 m/yr). That said, I know nothing of the overturning rate of the glacial ocean.

        So that is how I arrived at a figure, if not a good one.

        The next bit I did not explain fully. The factor of 60 is what is necessary for the ice albedo plus GHG feedbacks (when treated as feedbacks, not as forcings) to generate 5.9 of additional forcing on top of the 0.1, giving 6.0.

        Now this is normally seen as the factor one must multiply the sensitivity by to get the slow feedback sensitivity, and it is this slow feedback sensitivity that should be used to determine the time constant; hence the time constant expands by this large factor as well.

        It could be viewed like this: at all times during the warming the system is out of balance by just the 0.1 component. After 350 years (using my corrected value) the feedbacks also contribute 0.1, giving 0.2 in total. After 3,500 years the feedbacks contribute 1.0, giving 1.1. Not until 20,650 years have elapsed would the forcing reach 6.0. Now that is about the fastest gloss I can put on it; argued differently, it would only reach about 63% of the final value by that time.
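That "fastest gloss" can be written down in a few lines (a sketch using only the illustrative numbers in this comment, not anything taken from the paper):

```python
tau = 350.0      # yr, the corrected characteristic time from above
driver = 0.1     # W/m^2, the originating orbital component
target = 6.0     # W/m^2, driver plus the full GHG + ice albedo feedback

def total_forcing(t):
    """Linear-growth gloss: the feedbacks add one 'driver' unit per tau elapsed."""
    return driver + driver * (t / tau)

print(total_forcing(350))    # 0.2
print(total_forcing(3500))   # 1.1

t_full = tau * (target - driver) / driver
print(t_full)                # ~20,650 years before the forcing reaches 6.0
```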

        Regarding the scaling of time constants with time intervals: as the ocean is not infinite in depth (4000 m), and as upwelling largely reduces the effective depth (1000 m) for heat uptake from above, the time constant must approach a maximum value. Now I have no idea what this maximum period could be, for one thing I do not know how the ocean manages to heat at depth to the degree that it does, or when we shall know (closing part of the Munk video).

        Anyway, given the apparent sharpness with which the LGM went from cooling to warming, very high values of slow feedback sensitivity seem questionable, so I question them. Using a simple simulation I found I was much happier with a driving forcing of around 1/6 of the total GHG plus albedo, but that is my prejudice.

        Providing that the forcing is correctly accounted for as part of the GHG or albedo figure, it does not directly affect his argument. If the forcing was due to another cause, it does, but only marginally. For instance, if there were a whole 1 W/m^2 unaccounted for, that would reduce the calculated fast feedback sensitivity by 1/7 (6/7 × 0.75 ≈ 0.64).

        There are of course all sorts of counter-arguments that could be made, other land surface albedo changes etc., but I did not set out to argue outside the basic assumptions, and anyway if they could be determined they could just be added into the GHG and albedo figures without changing the basic idea that one can argue about fast feedbacks from glacial slow feedback forcings.

        Please see lower down for some of the promised thoughts on response functions. The bit about statistics is very much to the point in a world where people see fit to argue about significance without verifying that their statistical model captures what we know about the spectral variance of the global temperature record. A point I could put much more forcibly.

        Alex

    • The idea that Pinatubo can have this sort of delayed reaction is just hand waving.
      It implies, to use your metaphor, that all Hansen has ever had is a bunch of junk, but a world class gift of bluff.

  65. What seems to have driven Trenberth’s “missing heat” idea was his interpretation of measurements of TOA energy flux. Given that Hansen finds no missing energy (Fig. 19), does this mean that Hansen has completely ignored this particular measurement in his calculations, or that he thinks this measurement doesn’t tell us anything over the short term, or something else?

    • In my limited understanding, the “missing heat” has always had two possible hiding places: the oceans/outer space.

      So where did Hansen not find it?

      • He didn’t find it everywhere by not looking at the TOA energy non-flux.

      • My impression from RC has been from their perspective there likely was no “missing heat” for Hansen to find. I believe that is what was said to Alex Harvey.

      • Sure, but I’ve also seen those impressed by Trenberth’s take on the TOA energy flux, and this is the source of the missing heat. I was just curious how that science fits with Hansen’s new take on things. The way you present it is pretty much the same way Hansen presents it, i.e. there is no missing heat. But Trenberth’s position is based on a specific analysis; I would expect some reasoning from Hansen on why Trenberth’s approach is inferior to his. But this Hansen essay is wide-ranging; it seems to contradict many scientists, at least in its details, so I guess there isn’t room for the full arguments behind every idea contained in this work.

      • “… this measurement doesn’t tell us anything over the short term …” – HR

        Because of problems with the measurement system, I believe the above is a fairly accurate synopsis of the argument made by RC as I understood it (my understanding of this stuff is not that reliable.) There was this ARGO Watch taking place – Roger Pielke Sr’s blog providing perhaps the most entertaining seat from which to watch the drama unfold. As ARGO/OHC numbers came in, Pielke was assessing the GISS model in JaOOOOOLs!

        My sense is Hansen is providing reasons. They all seemed to be saying, including Trenberth, that the “method” you refer to stinks in the short term (the travesty,) but ARGO numbers would greatly clarify things. I presume that is why Karina von Schuckmann is a co-author. She found the heat the mermaids had seduced to the deep blue sea.

        And I don’t know that I agree they found no missing heat. They found heat, just not the amount needed to affirm Trenberth’s gadgets.

  66. Even though Jim Hansen does not “joust with jesters”, I do wonder if – in part – this paper is an attempt to address Steve McIntyre’s criticism of a lack of a clear exposition from first assumptions to ultimate consequences of AGW. Of course, Hansen would never admit to this, it would involve too much backpedalling. Hansen also has a get out clause on that though – his paper is far below “engineering quality”. Nobody would build a bridge based on the quality of analysis in this paper. Nevertheless, the presence of this document could be an attempt to cover the gaping hole with a fig leaf.

    I’m still amazed at people who try to defend Hansen’s failed predictions from 1988. GISS temperature remains presently slightly below scenario C, the “take draconian action against CO2” option. If the difference between reality and Hansen’s prediction is not that great, then the difference between taking draconian action and doing nothing is not that great. You can’t have it both ways.

    • Spence_UK,
      Believers are very tenacious in defending every aspect of the infrastructure supporting their faith.
      Thus the gullibility regarding hide the decline, the shrug-off of Hansen’s many failed predictions, the refusal to address the failed predictions about temps, weather patterns, etc.
      And I am sure it is why the believer community jumps in so quickly to attribute any weather event as proof of AGW, no matter how often they are shown to be wrong.

  67. The failure of James Hansen is not one of science – else how could he continue to ignore the ‘internal climate variability’ or ‘cloud radiative forcing’ that are cooling the planet. At this juncture – a very odd thing indeed.

    The failure comes from some long standing problem of the human condition – not rectifiable but arguable. The problem of sulphur is a central expression of selective ignorance. Much of atmospheric sulphur is of biological origin from phytoplankton in the oceans. This must change as a result of changes in nutrient upwelling on decadal and longer timescales. It is not measurable but has become a qualitative narrative explaining all inconvenient discrepancies. Nonsense of course – but when will it end?

  68. Alexander Harvey

    Fred:

    Here is something on the response functions as promised:

    Hansen & the Greens Function

    The convolution required to obtain the temperature response T(t) would normally be expressed along the lines of:

    T(t) = (R * dF/dt)(t) {where * is the convolution operator}

    where R is the response to a step function,

    for convolution see : http://en.wikipedia.org/wiki/Convolution

    Hansen’s formulation, Eq (3), seems “unorthodox” to me.

    It is more common to see not the step function response but its derivative, the impulse response, which is commonly just termed the response function and which I will indicate by Ri

    and then

    T(t) = (Ri * F)(t) = (dR/dt * F)(t)

    for convenience I will continue with the impulse response function.

    Once this is determined, the response T(t) for any arbitrary F(t) can be calculated, but because of the convolution step, except for some special cases, F(t) must be known for all past times; that is, to calculate T(t) for the 20th Century, F must be known for previous epochs. In practice a time horizon can be determined, but for long-tailed impulse response functions, such as is suspected for the climate, that horizon would need to be measured in one or more prior centuries, depending on the level of uncertainty that can be tolerated. In the case of Hansen’s slow response, only 60% of a step change in the forcing occurring just prior to 1880 would have worked itself out by 1980, so deep history does matter.
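As a concrete sketch of the convolution, here is a toy single-exponential response (my own illustrative choice, not one of Hansen's response functions); for a step forcing, convolving the impulse response recovers the step response, and the long tail shows why deep history matters:

```python
import numpy as np

dt = 1.0                               # yr
t = np.arange(0, 500, dt)
tau = 200.0                            # illustrative long-tailed time constant
R = 1.0 - np.exp(-t / tau)             # step-function response R(t)
Ri = np.gradient(R, dt)                # impulse response Ri = dR/dt
F = np.ones_like(t)                    # unit step forcing switched on at t = 0

T = np.convolve(Ri, F)[:len(t)] * dt   # T(t) = (Ri * F)(t)

# for a step forcing the convolution reproduces R(t); with this tail
# only ~39% of the step has worked itself out after 100 years
print(T[100])
```

The same `np.convolve` line handles an arbitrary F(t), which is where the need to know F over past epochs enters.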

    T(t) = (Ri * F)(t) can be used for some functions F(t) of special interest.

    Functions of the form F(t) = A·Exp(s·t), where A is a constant and s is complex (a + ib), are eigenfunctions of the convolution; that is, they result in responses of the form B·Exp(s·t), where B is a constant and B/A is a function of s.

    The complex exponentials include the sinusoids, the real or simple exponentials, and their products. So the transforms of the sinusoids and exponentials are just scaled and shifted versions of themselves.

    The considerable degree to which the historical reconstruction of the forcing F(t) consists of an exponential plus a solar sinusoid results in the temperature response T(t) looking like a scaled and shifted version of F(t), with the exception of the volcanic forcings.

    Another interesting function for F(t) would be gaussian white noise N(t)

    N(t) can be constructed from any arbitrary past epoch and would give T(t) for any arbitrary period. It is then simple to produce a spectrum (periodogram) for T(t). Alternatively, a frequency response function could be determined from convolution with sinusoids of differing frequencies.

    If it is suspected that the real world is driven by a forcing of the form F(t) + N(t) (signal plus white noise), then the spectrum produced as above should bear some resemblance to the spectrum of the real-world global temperature series, and hence give some indication as to whether the response function is a good candidate. Importantly, a response function that acts on white noise to produce a spectrum in keeping with the real world’s can be used to “whiten” the real world T(t): that is, deconvolve T(t) to produce F*(t) + N*(t), where F* and N* are the real-world forcing and noise functions. Subtracting a candidate for F* (e.g. F) would just leave N*, which can be tested for being gaussian noise.

    By convolution with statistical functions of interest, such as the linear slope, it is possible to determine the statistical properties of the system, e.g. the distribution of the slope, or of the mean, or of an arbitrary function. In that way questions can be asked as to how likely it is that the 60–80 year wiggle in the record, or any other wiggle of interest, could have occurred by chance. Given that the response function does “whiten” the historical global temperature record, this performs a very useful statistical service, in that it leads to “white” residuals. This is commonly the goal in selecting a statistical model for data. This goal can be attempted using AR(), ARMA(), etc., models; these models can also be viewed as filters, possessing a response function and hence a spectrum, and indeed the goal seems to be to find a model that effectively “whitens” the historic record. In that sense the statistical models are special cases of the class of all response functions, and the response function that “whitens” the record could be seen as an optimal statistical model.
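A minimal sketch of the "whitening" idea, using an AR(1) filter as a stand-in for the response function (my choice for illustration, not anything from the paper): pass gaussian white noise through the filter to get a reddened series, then invert the filter and recover white residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
a = 0.9                           # AR(1) coefficient: a discrete exponential response

N = rng.standard_normal(n)        # gaussian white noise "forcing"
T = np.zeros(n)
for k in range(1, n):
    T[k] = a * T[k - 1] + N[k]    # reddened "temperature" series

white = T[1:] - a * T[:-1]        # deconvolution: invert the AR(1) filter
print(np.allclose(white, N[1:]))  # the residuals are the driving noise again
```

With real data one does not know N, so the test becomes whether the whitened residuals look gaussian and white, which is the point being made above.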

    Now there is absolutely no reason why a simple process such as convolution should map forcings to global temperatures, nor why the noise should be gaussian (at least for periods >= 1 month). However, these simplifying assumptions do seem to be borne out, in that the response function of model (x) seems to capture sufficient information about model (x) to allow a simple convolution with the forcings to produce a result compatible with the model’s actual output, as noted in this paper.

    If any of the above doesn’t make sense it may be an error on my part so please ask.

    Alex

    • Many thanks. I’ll go through it, and check back with you for more clarification as necessary.

      • Alexander Harvey

        Fred:

        There should be another long piece below on how he has treated the response function but it is either lost or on the spike somewhere. I am not sure what is triggering intervention, perhaps mentioning Hansen more than once is considered provocative.

        On a different tack, it puzzles me how the debate moves on before I have even read the whole paper. Oh Well!

        Alex

  69. Alexander Harvey

    Fred:

    On fudging the response function

    This is another key point in this paper: what does a reasoning person do when he thinks that the models may be getting the OHC wrong and there is no time to fix the models (model cycles being measured in IPCC cycles)?

    His answer is that one assumes that most of the significance in terms of global mean temperatures is captured in the response function, and that well-judged modifications of the response function will go some way to reconciling model output with reality.

    Personally I think this is a big bold step and perhaps not one that everyone will thank him for. I like it but I doubt that will carry much weight.

    Just doing this at all raises all sorts of intellectual challenges. As I have said, it is an implicit assumption that the real-world response (over the range experienced) is essentially linear and time invariant. This alone would be met with howls of pain from all those who argue strongly that the world is not that simple, because the complexity of the non-linear and chaotic dynamics indicates otherwise.

    Now there is no reason why an acknowledgedly non-linear chaotic system should result in something indistinguishable from the result of forcing plus noise fluxes when viewed via globally averaged metrics on monthly timescales and longer, but there is no reason why it cannot either.

    The trouble is, simple forcing-plus-noise models have some skill in emulating the simulators and also the real-world data. The act of computing a simulator’s response function and using that to build an emulator is similar to, but not the same as, constructing a simple model. In the case of the simple model one has conceptual underpinnings: type of ocean, sensitivity, land–ocean coupling factors, etc. Taking the response function from a simulator is a more opaque procedure; it captures something of the essence of the simulator, but not in a way that can be related back to any simple conceptual model in a straightforward manner.

    As I mentioned above, he has elected to fudge the response function, and it is fair to question whether he has done this in an optimal way, or whether a more objective way exists. He seems to have argued from a comparison of the implied energy imbalances of his choice of response functions and the real-world data, which seems sound (although I do need to read it thoroughly), but I think that the addition of a spectral analysis comparison would have made his position stronger. I would suggest that there are many ways of fudging the response that would replicate a weakening of deep ocean mixing rates, which are long-term effects, and some of them would have been better than others at reproducing the higher-frequency end of the real-world spectrum. FWIW I think that increasing the slope between 1–10 years may have been less than optimal, but that is my prejudice.

    Well I still haven’t finished reading it all, but I think I have commented on the chunkiest bits.

    I have always liked Hansen’s work, and suspect that he is very misunderstood, in that often, as in this case, his approach and conclusions please almost no point of view but his own. In this case, he is quite harsh about the inadequacies of the models (particularly the difficulty of producing timely results or fixing them in a timely manner), which might please some people, and indicates that the models might be getting OHC wrong because they mix down too rapidly, which might please the same groupings. But he then combines simple model techniques with a seemingly strong separate glacial argument that nails down sensitivity, to show that the lack of warming indicates a very strong aerosol effect, which in turn indicates that we are going to hell in a handcart (which must, I think, be the obvious conclusion to draw from a worldview where all that stands between us and a much hotter world is our aerosol emissions), which I suggest upsets the same groupings terribly. Conversely, the strong model advocates might not take too kindly to his suggesting that the models are not getting the key OHC aspect right, although the same group may enjoy his somewhat gloomy conclusions.

    Alex

  70. Alexander Harvey

    Fred:

    In case you didn’t see it, I replied far above on the time constants issue.

    Alex

  71. Alexander Harvey

    I have taken the time to turn the three idealised response functions (slow, intermediate, and fast) into their respective frequency responses.
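One way to perform that conversion can be sketched with a toy one-box (single-exponential) response; this is my own construction for illustration, not the paper's actual slow/intermediate/fast functions:

```python
import numpy as np

dt = 1.0                                   # yr
t = np.arange(0.0, 4096.0, dt)

def amplitude_response(tau):
    """FFT of a normalised one-box impulse response gives the frequency response."""
    Ri = np.exp(-t / tau) / tau            # impulse response with unit DC gain
    return np.abs(np.fft.rfft(Ri)) * dt

slow = amplitude_response(300.0)
fast = amplitude_response(100.0)

# both have ~unit gain at zero frequency (same equilibrium sensitivity),
# but the faster response has higher amplitude at every nonzero frequency
print(slow[0], fast[0])
```

A shift of this simple kind leaves the DC (equilibrium) gain alone, which is the benchmark against which the changes discussed below can be judged.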

    Some things are much clearer to see in the frequency domain, and I must comment on how his alterations to the temporal response function have affected the general form of the simulated earth system.

    I read his intent as being to reduce the heat uptake of the deeper sections of the ocean. Although perhaps not stated, I suspect that it was also intended not to alter the short run (fast feedback) sensitivity.

    If that is the case the effect on the frequency response (the spectrum) of moving to faster responses should be to shift the spectrum towards the high frequency end in a multiplicative fashion, except at the very high frequency end where he has concerns specifically with the response to volcanoes. This is not quite the effect achieved.

    Such a simple shift would leave the fast feedback and slow feedback sensitivities unchanged (although the time horizons for fast and slow would change), but this has not happened.

    In the slow response case the fast feedback sensitivity can be judged as being significantly lower than the slow feedback sensitivity, perhaps around 2/3 of the latter, or 2 K/doubling; this is enhanced by a slow crawl over the next 1900 years to the full, slow feedback value. That slow crawl I interpret as the effect of the slow feedbacks.

    As he moves from slow to fast responses, that differential between fast and slow feedback sensitivity is reduced by about 1/2. To me, this represents a significant change in the nature of the simulated response, in that it has converted some of what should have been a slow feedback effect into a fast one. In terms of the spectral response, the change is not a simple shift to higher frequencies but a raising of the amplitude of the response in the mid and low frequency band (3 yr–2000 yr). Now this does indeed emulate the effect of a thermally lighter ocean, but it also reflects a move to a higher short-run sensitivity, by ~20% for the intermediate and ~40% for the fast response. To me, this is a significant change, and one most clearly revealed in the frequency or spectral domain.

    In my interpretation, this has the effect of:

    a) driving “more” heat into the oceans due to higher sensitivity whilst,
    b) lightening the deeper ocean more than otherwise necessary to compensate.

    In turn this leads to a requirement to reduce the medium- to long-term change in the forcing values, where the raised sensitivity has greatest effect, and hence the assumption of a higher sulphate effect.

    So in terms of short-run, fast feedback sensitivities, it is my judgment that his preferred intermediate response function implies about 2.4 K/doubling, a figure that is in the consensus ball park, but still at odds with the 3 K/doubling indicated by his ice age analysis. Also, preserving the fast/slow feedback balance of the GISS model would indicate a slow feedback sensitivity of around 3.6 K/doubling, with an onset at or around the 100 yr horizon. I am still to read whatever views this paper holds on this latter figure.

    I suppose that my overall view on this segment of the paper is that playing with response curves to fit the data has implications for the underlying model that, although marginal, are significant, and that an interpretation of the modifications in terms of the original model is required, lest it be just a curve fitting exercise which could be performed without reference to any simulated modelling at all. Such analysis inhabits the hazardous terrain between simulation and emulation, a region best reserved for the brave. Personally I think it is good to see someone bridging this gap, not so much for the particular worth of specific conclusions but for the light it can cast on both emulations and simulations in turn, allowing the strengths and weaknesses of each to be traded off, and insight to be gained into each.

    The paper discusses the volcano issue. I appreciate that volcanoes exceed the grasp of emulators and possibly some, most or all simulators. The bind being that if other responses are well emulated/simulated, volcanoes cause more surface cooling in models than in real life. My conjecture is that this is due to a non-linearity in the response of the ocean surface layer: specifically, that large short period negative forcings thicken the surface mixed layer (SML) and make it more slab like. This has the effect of extracting more heat per degree of cooling than the emulator/simulator predicts. Hence volcanoes indicate a thicker SML because they produce a thicker SML, and with it more surface mixing and an elevated effective thermal mass for the duration of the event. I have looked for this effect in the annual mid latitude oceanic climatologies and there is some evidence for a compatible effect: a tendency for the summer/autumn peak to be sharper and the winter/spring trough to be duller as compared to a sinusoid.
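    A minimal numerical sketch of this conjecture, with purely assumed depths, thresholds and forcing: a slab whose mixed layer deepens under a strong negative forcing cools less at the surface than a fixed-depth slab would.

```python
# Toy model of the conjecture above: a large negative volcanic forcing
# deepens the surface mixed layer, raising the effective heat capacity,
# so the surface cools less per W/m^2 than a fixed-depth slab predicts.
# Depths, the forcing threshold, and the forcing value are all assumed.
RHO_C = 4.1e6             # J m^-3 K^-1, volumetric heat capacity of seawater
SECONDS_PER_YEAR = 3.15e7

def surface_cooling(forcing, depth_fn, years=3.0, dt=0.01):
    """Euler-integrate dT/dt = F / (rho c h) over the eruption window."""
    T = 0.0
    for _ in range(int(years / dt)):
        h = depth_fn(forcing)                 # mixed-layer depth in metres
        T += forcing * dt * SECONDS_PER_YEAR / (RHO_C * h)
    return T

fixed = lambda F: 50.0                             # fixed 50 m slab
deepening = lambda F: 75.0 if F < -1.0 else 50.0   # deepens under strong cooling

F_volcano = -3.0   # W/m^2, a Pinatubo-scale negative forcing (illustrative)
print(surface_cooling(F_volcano, fixed), surface_cooling(F_volcano, deepening))
```

    The deepening slab extracts more heat from the ocean per degree of surface cooling, which is the sign of mismatch seen between models and observations.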

    One day I shall finish reading this paper, yet already the debate has moved on and this thread is but a neglected backwater. Well, I was never cut out for the debate anyway. I consider science to be largely a rational pursuit, but not wholly so. Most specifically, I doubt that the extraction of meaning is in the least bit a rational enterprise. But then I am philosophically an absurdist of the sort that considers meaning to be no more than my projection of metaphor. Thereby there be no inherent meaning to extract, just those returning transitory poetic reflections.

    When I consider the debate I am aware of my philosophical dispositions that must lead me to doubt. These tendencies must lead me to interpret it as one act in the theatre of the absurd.

    So I view the debate as metaphor, a playing out of what it is to be man. So I reflect on it through poetic forms, enlightened by Hamlet, Macbeth, King Lear and The Tempest, the writings of Borges, a great modern metaphorist, but also Camus, the dramas of Beckett and Pinter, and finally the visionary poems of Blake. I know this to be but a projection, yet so I think it be by poetic forms that we live and die.

    I neither scorn nor condescend; the debate is essential, a reflection on what we are like.

    Alex

  72. Judith Curry

    Regarding Hansen’s new study, I have not read the whole paper yet, but just going through the abstract makes me nervous about the rest. I’ll defer judgment on that until I have done so, but so far the “hidden energy” remains fully hidden in Hansen’s head.

    Aerosol climate forcing today is inferred to be −1.6 ± 0.3 W/m2, implying substantial aerosol indirect climate forcing via cloud changes.

    Inferred?????

    Continued failure to quantify the specific origins of this large forcing is untenable, as knowledge of changing aerosol effects is needed to understand future climate change.

    That’s a big mouthful.

    Sounds to me like “we imagine that there is a large aerosol forcing (because otherwise our climate sensitivity assumptions don’t check with the reality), but we can’t explain the reasons for this imagined aerosol forcing”.

    Observed sea level rise during the Argo float era can readily be accounted for by thermal expansion of the ocean and ice melt, but the ascendency of ice melt leads us to anticipate a near-term acceleration in the rate of sea level rise.

    Hey! Wait a minute! The Argo floats tell us that the upper ocean has cooled, not warmed since the “Argo float era” started in 2003 (team leader Josh Willis called it a “speed bump”).

    And global sea level rise has not accelerated. In fact it rose less in the second half of the 20th century than it did in the first half, and most recently seems to have slowed down again.

    The “ascendency of ice melt” is not a physically observed phenomenon on a global scale, so saying we “anticipate a near-term acceleration in the rate of sea level rise” is a statement of faith.

    This thing is starting off pretty badly. Let’s see if the rest gets any better…

    Max

  73. From a physical point of view, the analysis of Hansen et al. (2011) is senseless stuff. Their Eq. (1) given by

    S = DT_eq/F (1)

    is based on the so-called climate feedback equation which has its origin in the global energy balance model of Schneider & Mass (1975). Here, S is the so-called climate sensitivity parameter, DT_eq is the change of the global surface temperature, T_s, from the equilibrium of the undisturbed system to that of the system disturbed by the anthropogenic forcing F.
    This global energy balance model reads

    R dT_s/dt = Q – T_s/S , (2)

    where R is the thermal inertia coefficient, valid only for a layer but not for a surface, and Q is called the radiative forcing. The term on the LHS of this equation is not entirely correct. It must read R dT_m/dt, where T_m is the volume-averaged temperature of this layer (Kramm & Dlugi, 2010). Generally, to replace T_m by T_s is invalid. Furthermore, Eq. (2) is based on the assumption that the atmosphere is always in a stationary state. This means that the energy flux balance must fulfill the following condition (Kramm and Dlugi, 2010, 2011):

    R_L(TOA) = A S_o/4 + H(ES) + E(ES) + DR_L(ES) . (3)

    Here, A S_o/4 is the absorption of solar radiation by the atmosphere (where S_o is the solar constant and A the planetary absorptivity in the solar range), and R_L(TOA) is the outgoing infrared radiation at the top of the atmosphere (TOA). Furthermore, H(ES), E(ES), and DR_L(ES) are the fluxes of sensible and latent heat as well as the net radiation in the infrared range at the earth’s surface (ES), respectively. If the planetary radiation balance at the TOA is fulfilled, as suggested by Trenberth et al. (2009) and many others (see Kiehl & Trenberth, 1997; Kramm and Dlugi, 2011), i.e.,

    (1 – alpha_S) S_o/4 – R_L(TOA) = 0 , (4)

    where alpha_S is the planetary albedo of the entire earth-atmosphere system in the solar range, then the energy flux balance at the ES is given by

    (1 – alpha_S – A) S_o/4 – H(ES) – E(ES) – DR_L(ES) = 0 . (5)

    If the outgoing infrared radiation is reduced by F due to the anthropogenic effect, Eq. (3) may be written as

    R_L(TOA) – F = A S_o/4 + H(ES) + E(ES) + DR_L(ES) – F (6)

    This would mean that the net radiation at the ES would be reduced by F. However, this does not automatically mean that the surface temperature must increase to re-establish a planetary radiation balance at the TOA (see Eq. (4)), as already argued by Ramanathan et al. (1987). Since the flux terms H(ES), E(ES), and DR_L(ES) are global averages of the corresponding local quantities, none of these flux terms is a function of the global surface temperature. Furthermore, there is no constant ratio between H(ES) plus E(ES) on the one hand and DR_L(ES) on the other hand. A reduction of DR_L(ES) by F can easily be compensated by H(ES) and/or E(ES) to fulfill the energy flux balance (5). The same is true in the case of any other of these flux terms. Moreover, the uncertainty inherent in the determination of the fluxes of sensible and latent heat is so large that F may be considered as peanuts.
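    As a numerical cross-check, Eq. (2) can be integrated directly and the bookkeeping of Eqs. (3)–(5) verified with rough Trenberth et al. (2009)-style global means. All parameter and flux values below are illustrative assumptions:

```python
# Part 1: integrate the Schneider & Mass (1975) balance, Eq. (2):
#   R dT/dt = Q - T/S
# The solution relaxes toward T_eq = S*Q with e-folding time R*S.
S = 0.75     # K per (W/m^2), climate sensitivity parameter (assumed)
R = 8.0      # W yr m^-2 K^-1, thermal inertia of the layer (assumed)
Q = 3.7      # W/m^2, forcing for a CO2 doubling

dt = 0.01                        # years
T = 0.0
for _ in range(int(200 / dt)):   # 200 years >> R*S = 6 yr
    T += dt * (Q - T / S) / R
print(T, S * Q)                  # T has relaxed to T_eq = S*Q

# Part 2: check the flux bookkeeping of Eqs. (3)-(5).
S_o = 1366.0            # solar constant, W/m^2
alpha_S = 0.30          # planetary albedo
H, E, DR_L = 17.0, 80.0, 63.0    # sensible, latent, net surface infrared

# Eq. (4): the TOA radiation balance fixes R_L(TOA).
R_L_TOA = (1 - alpha_S) * S_o / 4            # about 239 W/m^2

# Eq. (3): R_L(TOA) = A*S_o/4 + H + E + DR_L fixes the absorptivity A.
A = (R_L_TOA - (H + E + DR_L)) / (S_o / 4)   # about 0.23

# Eq. (5): the surface energy flux balance then closes to zero.
surface_balance = (1 - alpha_S - A) * S_o / 4 - H - E - DR_L
print(R_L_TOA, A, surface_balance)
```

    The surface balance closes only by construction here; with independently measured fluxes and their large uncertainties, a residual of order F is unsurprising.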

    In the caption of their Figure 17, Hansen et al. (2011) stated:

    “Recent estimates of mean solar irradiance (Kopp and Lean, 2011) are smaller, 1360.8±0.5 W/m^2, but the uncertainty of the absolute value has no significant effect on the solar forcing, which depends on the temporal change of irradiance.”

    This argument is highly awkward. If R_L(TOA) is reduced by F, Eq. (4) must be written as

    (1 – alpha_S) S_o/4 – R_L(TOA) + F = 0 . (7)

    Thus, the amount of F = 0.58±0.15 W/m^2 is notably smaller than that of the quantity

    (1 – alpha_S) (S_o,old – S_o,new) /4 = 0.88 W/m^2 .

    Here, S_o,old = 1366 W/m^2 and S_o,new = 1361 W/m^2.

    Finally, it is worthwhile to take a look at the 240 W/m^2 which, according to Hansen et al. (2011), is the solar energy averaged over the planet’s surface. First of all, this is the amount of solar radiation that affects the entire earth-atmosphere system, i.e., (1 – alpha_S) S_o/4. According to Trenberth et al. (2009) and many others, the solar radiation reaching the earth’s surface is much smaller. Customarily, a value of alpha_S = 0.3 is used for the planetary albedo. This means that the solar constant that corresponds to 240 W/m^2 must be:

    S_o = 1371 W/m^2

    This is a value which was delivered by early satellite observations (NIMBUS7/ERB, see http://www.acrim.com/).
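    Both numbers are easy to check arithmetically; note that the 0.88 W/m^2 follows from the absorbed-solar factor (1 – alpha_S):

```python
# Arithmetic behind the two numbers quoted above.
alpha_S = 0.30
S_old, S_new = 1366.0, 1361.0

# Change in absorbed solar radiation from revising the solar constant:
dQ = (1 - alpha_S) * (S_old - S_new) / 4    # about 0.88 W/m^2, > F = 0.58
print(dQ)

# Solar constant implied by 240 W/m^2 of absorbed solar radiation:
S_o = 240.0 * 4 / (1 - alpha_S)             # about 1371 W/m^2
print(S_o)
```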

    The accuracy in the quantification of the global energy flux balance claimed by Hansen et al. (2011) is, by far, not achievable. This is a simple fact based on physics, not on Faith in AGW.

  74. Joel Upchurch

    I seem to have located the published version of the paper here:

    http://www.atmos-chem-phys.net/11/13421/2011/acp-11-13421-2011.pdf

    I have a stupid question about figure 1 on page 13422.
    It seems to show the radiative forcing from greenhouse gases in 1880 as zero. Since standard climate science says that greenhouse warming is about 30 degrees centigrade and that the earth would actually be frozen without it, shouldn’t this be a much flatter curve?