The Hansen forecasts 30 years later

by Ross McKitrick and John Christy

Note: this is a revised version to correct the statement about CFCs and methane in Scenario B.

How accurate were James Hansen’s 1988 testimony and subsequent JGR article forecasts of global warming? According to a laudatory article by AP’s Seth Borenstein, they “pretty much” came true, with other scientists claiming their accuracy was “astounding” and “incredible.”  Pat Michaels and Ryan Maue in the Wall Street Journal, and Calvin Beisner in the Daily Caller, disputed this.

There are two problems with the debate as it has played out. First, using 2017 as the comparison date is misleading because of mismatches between observed and assumed El Nino and volcanic events that artificially pinched the observations and scenarios together at the end of the sample. What really matters is the trend over the forecast interval, and this is where the problems become visible. Second, applying a post-hoc bias correction to the forcing ignores the fact that converting GHG increases into forcing is an essential part of the modeling. If a correction were needed for the CO2 concentration forecast, that would be fair, but this aspect of the forecast turned out to be quite close to observations.

Let’s go through it all carefully, beginning with the CO2 forecasts. Hansen didn’t graph his CO2 concentration projections, but he described the algorithm behind them in his Appendix B. He followed observed CO2 levels from 1958 to 1981 and extrapolated from there. That means his forecast interval begins in 1982, not 1988, although he included observed stratospheric aerosols up to 1985.

From his extrapolation formulas we can compute that his projected 2017 CO2 concentrations were: Scenario A 410 ppm; Scenario B 403 ppm; and Scenario C 368 ppm. (The latter value is confirmed in the text of Appendix B). The Mauna Loa record for 2017 was 407 ppm, halfway between Scenarios A and B.
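
For readers who want to reproduce that figure, here is a minimal sketch of a Scenario A-style extrapolation in which the annual CO2 increment grows by 1.5% per year (the Scenario A rule discussed in the comments below); the 1981 baseline and starting increment are illustrative placeholders rather than Hansen's exact Appendix B values.

```python
def scenario_a_co2(year, base_year=1981, base_ppm=338.0,
                   base_increment=1.5, increment_growth=0.015):
    """Scenario A-style CO2 path: the annual increment compounds at 1.5% per
    year on top of the base-year level. base_ppm and base_increment are
    illustrative placeholders, not Hansen's exact Appendix B values."""
    ppm, increment = base_ppm, base_increment
    for _ in range(base_year + 1, year + 1):
        increment *= 1.0 + increment_growth
        ppm += increment
    return ppm

print(round(scenario_a_co2(2017), 1))  # roughly 410 ppm with these placeholder inputs
```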

Note that Scenarios A and B also differ in their treatment of non-CO2 forcing. Scenario A contains all non-CO2 trace gas effects, while Scenario B contains only CFCs and methane, both of which were overestimated. Consequently, there is no justification for a post-hoc dialling down of the CO2 levels; nor should we dial down the associated forcing, since that is part of the model computation. To the extent the warming trend mismatch is attributed entirely to the overestimated levels of CFCs and methane, that would imply that they are very influential in the model.

Now note that Hansen did not include any effects due to El Nino events. In 2015 and 2016 there was a very strong El Nino that pushed global average temperatures up by about half a degree C, a change that is now receding as the oceans cool. Had Hansen included this El Nino spike in his scenarios, he would have overestimated 2017 temperatures by a wide margin in Scenarios A and B.

Hansen added an Agung-strength volcanic event to Scenarios B and C in 2015, which caused the temperatures to drop well below trend, with the effect persisting into 2017. This was not a forecast; it was an arbitrary guess, and no such volcano occurred.

Thus, to make an apples-to-apples comparison, we should remove the 2015 volcanic cooling from Scenarios B and C and add the 2015/16 El Nino warming to all three Scenarios. If we did that, there would be a large mismatch as of 2017 in both A and B.

The main forecast in Hansen's paper was a trend, not a particular temperature level. To assess his forecasts properly we need to compare his predicted trends against subsequent observations. To do this we digitized the annual data from his Figure 3. We focus on the period from 1982 to 2017, which covers the entire CO2 forecast interval.

The 1982 to 2017 warming trends in Hansen’s forecasts, in degrees C per decade, were:

  • Scenario A: 0.34 +/- 0.08,
  • Scenario B: 0.29 +/- 0.06, and
  • Scenario C: 0.18 +/- 0.11.

We compared these trends against NASA's GISTEMP series (the record of the Goddard Institute for Space Studies, or GISS), and against the UAH/RSS mean MSU series from weather satellites for the lower troposphere:

  • GISTEMP: 0.19 +/- 0.04 C/decade
  • MSU: 0.17 +/- 0.05 C/decade.

(The confidence intervals are autocorrelation-robust using the Vogelsang-Franses method.)
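
For readers who want to reproduce a comparable trend estimate, here is a minimal sketch that fits an OLS trend to an annual anomaly series and reports an autocorrelation-robust interval. It uses Newey-West (HAC) standard errors as a stand-in for the Vogelsang-Franses method used above, and the anomaly array is a placeholder to be filled with GISTEMP or MSU values.

```python
import numpy as np
import statsmodels.api as sm

years = np.arange(1982, 2018)
anoms = np.zeros_like(years, dtype=float)  # placeholder: annual anomalies for 1982-2017

X = sm.add_constant(years)
# Newey-West (HAC) errors stand in here for the Vogelsang-Franses method;
# both aim at autocorrelation-robust inference on the trend coefficient.
fit = sm.OLS(anoms, X).fit(cov_type="HAC", cov_kwds={"maxlags": 5})
trend = fit.params[1] * 10            # degrees C per decade
half_width = 1.96 * fit.bse[1] * 10   # approximate 95% half-width
print(f"trend: {trend:.2f} +/- {half_width:.2f} C/decade")
```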

So, the scenario that matches the observations most closely over the post-1980 interval is C. Hypothesis testing (using the VF method) shows that Scenarios A and B significantly over-predict the warming trend (even ignoring the El Nino and volcano effects). To emphasise the point: Scenario A overstates CO2 and other greenhouse gas growth and rejects against the observations; Scenario B slightly understates CO2 growth, overstates methane and CFCs, zeroes out other greenhouse gas growth, and it too significantly overstates the warming.

The trend in Scenario C does not reject against the observed data; in fact, the two are about equal. But this is the scenario that left out the rise of all greenhouse gases after 2000. The observed CO2 level reached 368 ppm in 1999 and continued going up thereafter to 407 ppm in 2017. The Scenario C CO2 level reached 368 ppm in 2000 but remained fixed thereafter. Yet this scenario ended up with a warming trend most like the real world.

How can this be? Here is one possibility. Suppose Hansen had offered a Scenario D, in which greenhouse gases continue to rise, but after the 1990s they have very little effect on the climate. That would play out similarly in his model to Scenario C, and it would match the data.

Climate modelers will object that this explanation doesn't fit the theories about climate change. But those were the theories Hansen used, and they don't fit the data. The bottom line is that climate science as encoded in the models is far from settled.

Ross McKitrick is a Professor of Economics at the University of Guelph.

John Christy is a Professor of Atmospheric Science at the University of Alabama in Huntsville.

Moderation note:  As with all guest posts, please keep your comments civil and relevant.

235 responses to “The Hansen forecasts 30 years later”

  1. There is a list of malapropisms with respect to global warming.
    One is certainly worse than expected.
    Since the time of the testimony, emissions were a little less than, but closest to scenario B.
    Observed trends were a little more than, but closest to scenario C.

    But both emissions and sensitivity have been better than expected.

    This would appear to be good news – perhaps emissions aren’t a problem at all.

    But I’m convinced now that in addition to the other lists of biases, climate advocates suffer from negativity bias, dismissing good news and clinging to narratives of catastrophe, even when observations are falsifying these ideas.

    • climate advocates suffer from negativity bias, dismissing good news and clinging to narratives of catastrophe, even when observations are falsifying these ideas.

      They know they must scare us in order to tax and control us.

      No one pays anyone to say everything is OK!

    • The CO2 emissions have been a little more than the emission scenario A. If atmospheric CO2 levels were lower than that scenario, then Hansen overestimated the airborne fraction of the emissions. So, humans emitted CO2 without any reduction (business as usual), atmospheric CO2 levels are lower than predicted by the bau scenario and the temperatures are even lower, at the scenario C level (extreme emission reductions).

      • I think Hansen somewhere wrote that he used a climate sensitivity of 4.1 °C/2xCO2

      • “he used a climate sensitivity”
        GCMs don’t use a climate sensitivity. You can use a GCM to diagnose ECS, and they did that. They got 4.2°C/doubling from GISS II.

      • Nitpicking semantics, are we Nick? So he used a model with an implicit sensitivity of 4.2; that's pretty high by IPCC standards and probably due to the extreme early Budyko results which prompted Hansen to climate alarmism in the first place.

      • You have to look at the model Hansen used, and it was not a global model, and if you are comparing results and not specifying the GISS data source you are using, or specifying a global GISS, well then you are doing it wrong and getting the wrong result.

        And not answering the question if his forecasts were accurate.

        So you have to RTFR, and ATFQ!


  3. Judy,

    Why didn’t you post the GIS Temperatures beyond the spike: http://www.woodfortrees.org/plot/gistemp/from:1988 ??

    The spike was due solely to the El Nino.

    If you plot beyond the spike, Hansen’s “C” is clearly the more correct outcome.

  4. Hansen’s 2017 Scenario B prediction was not far off reality.

    That does not matter.

    What does matter is whether the warming was different from, and greater than, the warming into the Roman or Medieval warm periods or the warm periods before that during the recent ten thousand years. Not even close!

  5. The bottom line is, climate science as encoded in the models is far from settled.

    Why not spend more time understanding natural climate change causes? Even so called skeptics will not discuss anything other than CO2 sensitivity.

  6. My thanks to Ross and John for cutting through the confusion and the misrepresentation to present a clear picture of the underlying facts.

    w.

  7. One often overlooked problem with Hansen's temperature predictions based on emissions scenarios (and that is what they were, as anyone who reads the PDF of his testimony can see) is that he committed two important but fortuitously cancelling errors in his assumptions about CO2 concentrations: his Scenario A assumed emissions would be lower than they turned out to be, but that the resulting CO2 concentrations would be higher (i.e. it assumed sinks would not increase their uptake the way they actually did). Had he known that "business as usual" emissions (the only independent variable, and the only one Congress can affect) would be higher, then using otherwise the same assumptions he would have predicted higher CO2 concentrations (instead of being quite close) and his temperature prediction would have been even farther off.

    • “his Scenario A data assumed emissions would be lower than they turned out to be”
      His scenario A made no assumptions about tonnage emissions. He did not, in 1988, have access to any reliable data on them. That only became available after UNFCCC got governments to collect them. The scenarios were defined purely in terms of expected gas concentrations. We have the numbers.

      • Nick, as I’ve pointed out elsewhere, Congress has no ability to directly affect concentrations, only emissions. Therefore, three policy-based scenarios must be emissions scenarios (and are in fact directly labelled as such in the presentation).

        This is pretty elementary logic — if the concentrations weren’t chosen to represent emissions scenarios, what on Earth else could they possibly have been chosen to represent, and what was the point of a hearing about them?

      • “Therefore, three policy-based scenarios must be emissions scenarios”
        Well, they aren’t. But how can you usefully give a scenario about future emissions (tons) when you don’t know what they currently are?

      • So, emission scenarios made no assumptions about emissions. That makes his scenarios' projections even worse.

      • The situation in 1988 was that we were clearly burning a lot of carbon. And CO2 was increasing in the air, and was accurately measured. That increase was taken as the measure of emissions. We still do that for methane.

      • Nick, after reading your post, I think you’re getting hung up on whether Hansen knew the tonnage of emissions in 1988 (something we still don’t really know with much reliability today), when really he was only concerned with their trend. We knew emissions were rapidly increasing in 1988, and the point of his testimony was to argue that freezing emissions at whatever level Y they currently were would save us from X degrees of warming.

        Today we know (albeit with limited reliability) that CO2 emissions probably grew faster after 1988, but that carbon sinks also took in more CO2 than expected.

      • In case it helps, here is the EPA’s chart (with the above caveats): https://www.epa.gov/sites/production/files/styles/large/public/2017-04/fossil_fuels_1.png

      • Nick,

        “And CO2 was increasing in the air, and was accurately measured. That increase was taken as the measure of emissions. We still do that for methane.”

        Well, that’s a pure measure of emissions. You have to know or correctly estimate the AF.

      • .. a poor measure

      • “You have to know or correctly estimate the AF”
        No, ppm is measured. If tonnage emission is measured, you can deduce AF (which is noisy). But none of this is relevant to Hansen’s calculation. CO2 ppm is the estimate that he needs as input for his program, and it was his scenario. If tonnages were available, he might have tried to convert them to ppm, and he might even have got that wrong. But they weren’t, and he didn’t.

      • In case the problem still isn't clear, imagine for a moment we live in a version of history where Hansen was taken more seriously and global emissions had flatlined in 1988, but temperatures took the same path as in our version. Wouldn't it be perfectly plausible for Hansen to claim the policy change had prevented all that Scenario A warming, and to scoff at skeptics' suggestion that temperatures wouldn't have risen more rapidly had emissions exceeded the pre-1988 trend, perhaps partly due to increased carbon sink uptake? This is where climate science runs into serious issues with the "falsifiable" pillar of the scientific method.

      • Nick, they are emission scenarios, even if he used only atmospheric levels. The AF is standing between emissions and concentrations. He implicitly assumed some AF. He overestimated it.

      • talldave2: “Therefore, three policy-based scenarios must be emissions scenarios”

        NickStokes: Well, they aren’t. But how can you usefully give a scenario about future emissions (tons) when you don’t know what they currently are?

        Well, …, they are. If, as you imply, they are not useful scenarios, it would have been nice if the scenario-wallahs* had alerted us to that some decades earlier.

        * Scenarists? Scenario-artists? Scenario-mongers? Scenario-smiths?

        Reading this essay and the comments, a similarly-themed essay at RealClimate, Willis Eschenbach’s essay at WUWT and some others, it seems to me that Hansen and the other scenarists would have done us all a great favor if they had pointed out the uncertainties and guesses and ambiguities back then; they could have said that Hansen’s work was Heuristic, and to be improved upon, but not taken as a severe alarm that urgent action was needed.

      • “If, as you imply, they are not useful scenarios”
        They are entirely useful scenarios. They describe the evolution of trace gases in the air, which is what matters for both Hansen's program and the real climate. As to how we might influence the concentrations of trace gases in the air – the answer is simple. CO2 is increasing because we are burning a lot of carbon. Burn less!

      • Why burn less? If what Hansen said was going to happen is what actually happened, then there was no cause for alarm. The current rate of warming is not a problem even if it continues. The rate being the important thing, not the amount. If the rate is slow enough then the adjustments that need to be made will simply be made through migration and adaptation. Even if Hansen was right about the rate, as you seem to think, then he was still wrong about the need to do anything about it.

      • Scenarists? Scenario-artists? Scenario-mongers? Scenario-smiths?

        How about “scriptwriters?” They’re the ones who produce scenarios that are superficially credible to minds scientifically unequipped to deal with geophysical realities. Their work is only a step away from creating pure fiction. Small wonder that Hollywood loves them!

      • Nick — at any rate we are still left with the simple logic that either his scenarios were intended to represent his highly-confident (but wrong) prediction as to how temperatures would respond to concentrations, and concentrations to emissions (as is clearly stated in his presentation), in which case he overestimated the strength of both relationships, or the scenarios were not based on emissions, in which case his presentation had no relevance to policy.

        Fortuitous cancelling errors are the bane of any scalar prediction system, like your cousin who bought Apple stock because he thought people would eat more of them, and insists he is therefore a skilled investor.

      • I was thinking burn more, but also be careful not to be wasteful. No CCS nonsense.


  9. “Here is one possibility. Suppose Hansen had offered a Scenario D, in which greenhouse gases continue to rise, but after the 1990s they have very little effect on the climate.”

    He might also have been laughed out of the overheated presentation room where he shut off the AC and opened the windows, given that he’d just declared global warming due to human emissions was definitely a huge problem.

  10. My primary complaint about Hansen's 1988 testimony is that he referred to Scenario A as "business as usual". To the contrary, his 1988 paper (Hansen et al., "Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model") described scenario B as "perhaps the most plausible of the three cases." Scenario A not only included exponential growth in non-CO2 gases such as CFCs, but also included a term for "hypothetical or crudely estimated trace gas trends" which equaled the (exponentially-growing) CFC term.

    • “he referred to Scenario A as “business as usual””
      Too much is made of this. It doesn’t mean that he expects that is what will happen. It just means what will happen if nothing changes.

      But anyway, the scenarios are not predictions, otherwise we would not have A, B and C. They are to cover the range of possibilities, and what matters for the prediction is the actual evolution (of gases) that happened.

      • “the scenarios are not predictions”. For the paper, I’d agree. The intent was to simulate a range of possibilities. But when he characterized Scenario A as “business as usual”, it became a prediction — what would happen without any significant policy change.

        Scenario A is fairly close to B as far as CO2 was concerned, at least up to the present time. But A had no counteracting volcanic eruptions (as B did), and A included a wholly hypothetical forcing term for “other trace gases”. The difference in forcings (as projected to 2018) is quite considerable. This is why I consider A to have been an upper bound on ghg effects. As an upper bound, I have no problem with including a made-up term in order to include a buffer for unknowns. But it is inappropriate to then consider it “business as usual”.

    • Business as usual does not mean the most plausible. It simply means no action, no emission reductions. When it comes to CO2, that’s exactly what happened. If the CO2 emission data is accurate enough.

      • “When it comes to CO2, that’s exactly what happened.”
        In fact, CO2 did follow quite closely scenario A. It was even closer to B, but there was little difference between them (for CO2). It was the relative slowdown in other gases that reduced the outcome forcing.

      • edimbukvarevic — CO2 is not relevant when comparing scenarios A&B; the concentrations remain quite close up to 2018. It’s the non-CO2 terms which are the primary difference between forcing profiles. See previous comment to Nick.



  13. The way I learned it was that if the hypothesis doesn’t fit the data, either there is something wrong with the data we are using to test the model, something wrong with the model, or both. So it seems the bottom line is in the last paragraph: this is far from settled. Short of a time machine…


  15. Ross McKitrick and John Christy, thank you for the essay.

  16. Joel Achenbach (The Tempest) reminds us about what was said when Hansen shared his beliefs with Congress during "the brutally hot summer of 1988" – i.e., "less than 10 years to make drastic cuts in greenhouse emissions, lest we reach a 'tipping point' at which the climate will be out of our control," is all the time America had left.

    Of course… nothing happened. All scientists should have been more skeptical back in ’88..

  17. ” Note that Scenarios A and B also represent upper and lower bounds for non-CO2 forcing as well, since Scenario A contains all trace gas effects and Scenario B contains none. So, we can treat these two scenarios as representing upper and lower bounds on a warming forecast range that contains within it the observed post-1980 increases in greenhouse gases. Consequently, there is no justification for a post-hoc dialling down of the greenhouse gas levels; nor should we dial down the associated forcing, since that is part of the model computation.”

    If I understand that correctly, it is just wrong. Completely. Scenario B contained lots of trace gas effects, as did C. And the conclusion is completely wrong.

    I have written a detailed analysis of the scenarios here, with links to sources and details. A quick summary of main sources:

    1. Although Hansen didn’t graph the scenarios, we do have a file with the numbers, here. It is actually a file for a 1989 paper, but there is every indication the scenarios A,B,C are the same.

    2. There are detailed discussions with graphs, from Gavin Schmidt recently, and from Steve McIntyre in 2008 (who got much more right than this article). I won’t give all the links here, because I will probably be sent to spam, but they are in my post linked above. SM also recalculated the scenarios from Hansen’s description; he gives numbers and plots.

    In fact, the main reason Hansen’s result came between B and C was that methane and CFC’s were overestimated in B and even C. Here is the RC plot of the scenarios and outcomes:

    https://s3-us-west-1.amazonaws.com/www.moyhu.org/2018/06/gavin1.png

    Gavin also gives the combined forcings, which quantifies the placing of the outcome between B and C.

    My calculation of the trends over the actual 30-year prediction period, 1988 to 2017, was:
    A: 0.302, B: 0.284, C: 0.123 C/decade, with observed generally about 0.18.
    It doesn’t make sense to give error ranges, since the predictions don’t contain randomness.

    • Here is Gavin’s plot of the forcings. That is in effect the appropriate combined effect of the various trace gases, showing an outcome between B and C.

      https://s3-us-west-1.amazonaws.com/www.moyhu.org/2018/06/gavin2.png

      And here is my plot of the 1988-2017 trends for scenarios and surface temperature outcomes:

      https://s3-us-west-1.amazonaws.com/www.moyhu.org/2018/06/bar.png

      • Allow Eli to post this again and encourage others to go read the long discussion of trace gas contributions in Appendix B of Hansen, Fung, Lacis, Rind, Lebedeff, Ruedy and Russell

        https://pubs.giss.nasa.gov/docs/1988/1988_Hansen_ha02700w.pdf

        The contributions of the trace gases after 1988 are substantial and shocking; by contrast, CO2 is rather boring. Clearly if your point is to evaluate the model, you would run it again using observed forcings. If your point is to evaluate the scenario you would compare the assumed forcings in the paper to the observed ones to date as in the figure from Real Climate seen above. If your mission were to spit on Hansen you would mix and match as needed.

        https://photos1.blogger.com/blogger/4284/1095/1600/Forcings.0.jpg

      • Thursday, June 01, 2006 As Mr Dooley says, trust everyone, but cut the cards…. Eli
        “folk were claiming that Hansen had never said that scenario B was the most likely. Eli went and RTFR (J. Geo Res. 93 (1988) 9341), and sure enough Hansen et al. said that B was the most likely scenario.
        Until 2010, the difference in forcing between Hansen's scenarios A and B all comes from changes in concentration of chlorofluorocarbons, methane, and nitrous oxide and volcanic eruptions. You can see this in Fig. 2 from the paper. That figure shows (copy below) forcings from CO2 increases (top), CO2 + trace gases (middle) and CO2 + trace gases + aerosols from volcanos (bottom).

        The dip in the bottom third near ~1982 represents the effect of the eruption of El Chichon; the later dips are guesses about when major eruptions will occur. Pinatubo came in the 90s, a bit earlier than assumed, but the depth of the effect was about right.”
        except no volcanoes were factored into A later.
        So the only reason that B and C are anywhere near reality is that they had a dose of volcanoes that did not occur.
        Wonderful.

    • Ross McKitrick

      Nick- you are correct that B includes CFCs and methane, but not the other non-CO2 GHG’s. I missed that detail. I have revised the text above accordingly.

      • Ross,
        B and C also have N2O. The point is that it is these gases that make the difference. CO2 was scarcely different in forecast in A and B, and the reality followed. What brought the forcing down was the unexpected pause in methane, and the reduction in CFCs that followed Montreal, which A and B were sceptical about (Montreal was 1989) but C allowed for.

      • Nick and Ross, good exchange here.

      • “Montreal was 1989”. Montreal was signed in Sept 1987; the US ratified it in April 1988. Jan 1989 was when it formally entered into force. By the time of Hansen’s testimony, the writing was on the wall about CFC reduction. However, the scenarios were created well before Montreal, possibly before Vienna (1985).

    • Geoff Sherrington

      Nick,
      Surely the error ranges should include errors of bias, not just randomness.
      If you accept this, you need to accept that the error bounds should enclose most of all 3 projections because at the time in 1988 or so, all 3 were considered valid enough to publish. Therefore by 2016-7, the error estimate from the first graph above would be +/- 0.5C or so. Because the error has reached this much, it should be taken to be the error over the whole time period of the projections.
      So where is the benefit from splitting hairs in arguments when the uncertainty of the T data is +/- 0.5C? Angels dancing on pins stuff? Geoff.

      • Geoff,
        Yet again, scenarios are not predictions – they don’t have either error or bias. They are just an example of what you think might happen.

        But it makes no sense to do a calculation of error or variance on the trend either. The scenario is just a simple mathematical function. Scenario A CO2 is an exponential a + b*1.015^n.

      • Geoff Sherrington

        Nick,
        I did not use the terms 'scenario' or 'prediction', merely 'projection', which these clearly are.
        There is value in putting confidence limits around projections. Most projections have ragged riders like 'I'll bet a lot on this being right' or "This is an uncertain projection because we do not have good data to start with".
        I am saying that it is best to express this more mathematically, by the visual use of boundaries on graphs, for example.
        Part of the drive behind this suggestion is to encourage discipline for those sloppy researchers who studiously avoid proper confidence limits or calculated error bounds when they know it will weaken their stories. This is understandable, but unhelpful.
        There remains a serious impediment in a lot of climate science, namely, the lack of formal, proper error calculations as a routine practice. Geoff.

      • Of course error and bias are of interest in discussing scenarios. Their projections are the product of their assumptions and if scenarios are going to be of any use one needs to understand how likely (subjective or objective, take your pick) their assumptions are to help evaluate the scenario’s projections.

        The first thing anyone should do when evaluating projections is to understand the likelihood of the assumptions. Wheat from chaff and all that.

        Hence it is well worth asking e.g. what the likelihood of the exponential fit is given past observations etc etc and carrying that through to its impact on the projections, and if you are doing post hoc assessment you have even more information.

      • “Of course error and bias are of interest in discussing scenarios. “
        You can’t speak of error or bias unless you have a notion of what is right. And scenarios don’t have that (else why are there three of them?). The scientist is basically punting to the reader – I can calculate these options – which do you think is most likely? Or which would you prefer to try to achieve?

        Hansen’s scenario A for CO2 is a 1.5% annual increase in the annual increment. How can you put an error measure on that?

        I gave the example of an aircraft designer who gave performance figures for loads of 500kg and 1 ton. These are scenarios of how you might load your plane. How would you put an error figure on the 500 kg?

      • ‘You can’t speak of error or bias unless you have a notion of what is right.’

        And if you don’t ‘have a notion of what is right’ scenarios are useless.

      • lewispbuckingham

        ‘have a notion of what is right’
        Well yes.
        The projection/prediction, with error bars, of what was going to happen anyway, independent of human influence, is the line that is missing.
        The underlying thesis is that warming is causal and we are the cause.
        The assumption is that this is right.
        Pointing out that projections on certain terms are error free because they are based on assumed future data does not mean that an estimate of error should not be made.
        Nowhere here is a null hypothesis tested against natural temperature change.
        For meaning, that would be the projection/prediction worth seeing 30 years ago.

      • “Pointing out that projections on certain terms are error free “
        The issue was whether the scenarios themselves should have error bars. And there just isn’t any basis for doing that. As to the projected temperatures, the normal thing nowadays would be to do an ensemble to estimate variability. But in 1988, that just wasn’t feasible.

      • “And if you don’t ‘have a notion of what is right’ scenarios are useless.”
        OK, I’ll put it mathematically. A GCM is a function. It maps from a domain (scenarios) to a range (average temperature and much more). Like, say, y=f(x)=x^2. What does that function mean? You can say
        let x=2, then y=4
        let x=3, then y=9
        It might be stochastic
        let x=4, then y=16±2
        It makes sense to talk of an error in y. But how can you quantify an error in x? What could it mean? It’s your choice.

      • ‘But how can you quantify an error in x? What could it mean? It’s your choice.’

        I choose a likelihood function on the domain, as does anyone else who wants scenario work to be useful.

      • “I choose a likelihood function on the domain”
        Likelihood of what, though?

        I’ll put it in computing terms. Hansen has a program that can be written in one line:
        results=GCM(scenario)
        You can enter any scenario you like. But the program takes a while to run, so you choose scenarios whose results you might find useful later, being somehow representative. That is a criterion, but how could you associate it with a likelihood function?

      • Scenarios are the product of their assumptions. Straightforward enough to develop likelihood functions for them (even if only subjective) and straightforward enough from there to derive one for the scenario.

        Implicitly people do this when they use scenarios (e.g. your term 'representative' is the language of likelihoods); it's just that there seems to be resistance to formalising this.

    • Clearly if your point is to evaluate the model, you would run it again using observed forcings. If your point is to evaluate the scenario you would compare the assumed forcings in the paper to the observed ones to date as in the figure from Real Climate seen above.

      This would be a great point if Hansen's presentation had quietly noted there were no particular emissions policy implications to his work, since his model couldn't be expected to tie any particular emissions scenario to any particular concentrations or temperature trend and thus could only be fairly judged against whatever concentrations actually developed. Instead, he claimed (with the AC turned off and the windows open on a hot day) that major emissions policy changes were justified because he had a strong understanding of how emissions affect concentrations and how concentrations affect temperatures, enabling him to predict with high confidence three different temperature trends based on the one independent variable (and the only one Congress can affect): emissions policy.

      Hansen et al. sure enough said that B was the most likely scenario

      Since they represent emissions policy scenarios, that was a political prediction more than a scientific one (that is, he was predicting the policy in that statement, not its result, which he already claimed to know with high confidence). It would be unfair indeed to judge Hansen on his ability to predict the path of emissions policy, as opposed to its effects on concentrations and temperatures.

    • Sorry, my comment there probably should have gone into a different bucket.

      At any rate, based on their comments I suspect Eli and Nick perhaps have simply not seen Hansen’s presentation, so I link it below. He explicitly refers to C as “draconian emissions cuts.” He laudably admits to some “major uncertainties” with respect to GCS and ocean heat uptake (but not regarding the relationship of emissions to concentrations)… but then expresses “a high degree of confidence” in his conclusions anyway.

      http://image.guardian.co.uk/sys-files/Environment/documents/2008/06/23/ClimateChangeHearing1988.pdf

  18. Charles May

    As Monty Python might say, “And now for something completely different.”
    Rather than go after Dr. Hansen's predictions, I choose to challenge the IPCC's best estimate of ECS as being 3.0.

    At the end, I document that I can support the Lewis & Curry value of ECS being much lower, 1.66.

    Stay with me I hope the trip will be worth it. It is unfortunate that I can’t post pictures. Everything comes from a link to my OneDrive.

    For the UAH analysis I use the data from Mauna Loa for CO2. I do have a very precise fit. It is a quadratic fit with a sine wave on top.

    https://1drv.ms/f/s!AkPliAI0REKhgZh4Jee-1Gw8oXbzdA

    I am expecting an update on UAH from Dr. Spencer any day now so this is with last month’s data. (BTW, I did receive the update and I think what I have here will do.)

    https://1drv.ms/u/s!AkPliAI0REKhgZkASsIZ4gfFm9e4AA

    Note that the figure does include a pause line. I added that feature because as the temperature drops, if it does, I anticipated a return of the pause line. BTW, its starting point is not cherry picked. I actually calculate where the slope would be minimal.

    That figure is interesting but here is what I am after. We have all seen Dr. Spencer’s figure which shows how the models perform with respect to the balloon data. I changed that figure slightly.

    Instead of the balloon data I used a five-year moving average of the UAH data and its solution in red from the above figure. Instead of the model data I substituted the ECS values of the best estimate based on a value of 3.0.

    https://1drv.ms/u/s!AkPliAI0REKhgZkBPGtRgiAEFBAaEw

    Pay attention to one more thing. Not long ago Nic Lewis and Dr. Curry estimated that the ECS value of 3 is off by about a factor of 2. I believe their estimated value was 1.66. That is shown on the chart. It would seem with that value we are finally correlating with measurements. How in the world could the value be 3.0?

    I don’t know what the answer is but how serious is CO2 if the ECS value is 1.66. Is it really worth the expense we are going through?

    It gets even better. Dr. Spencer reviewed what Nic Lewis and Dr. Curry did.

    http://www.drroyspencer.com/2018/04/new-lewis-curry-study-concludes-climate-sensitivity-is-low/

    Here is a very important statement by Dr. Spencer.

    “If indeed some of the warming since the late 1800s was natural, the ECS would be even lower.”

    Bingo!

    With my cyclical fit I do think the ECS is lower because I assume the cyclic fit is natural sources.

    Dr. Spencer used this figure.

    https://1drv.ms/u/s!AkPliAI0REKhgZkClctphXouBBnNLg

    I have a much lower value of ECS that seems to work quite well with this figure.

    It is one thing to ask how important CO2 is if the ECS is 1.66. It is something else if it is lower than 0.5. Some have suggested this value or lower.
    Now for the part that matched the Lewis & Curry ECS value.

    I used the 5-year moving average as the basis for the evaluation and assuming that CO2 is responsible for everything this is what I got.

    y[i]=b[1]+b[2]*ln(co2/co21)/ln(2) The b values are what I guess.

    My characterization goes like this.

    co21=D*’x[1]^2+E*’x[1]+FF

    co2=D*’x[i]^2+E*’x[i]+FF

    I ignored the sine wave portion of my CO2 fit.

    https://1drv.ms/f/s!AkPliAI0REKhgZh4Jee-1Gw8oXbzdA

    That ECS value is very close to the 1.66 identified by Lewis and Curry.

    It is beyond me how a value of ECS of 3.0 can be at all justified.
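
    [A minimal sketch of the kind of fit described in this comment: regress a temperature series on log2(CO2/CO2_ref) and read the slope off as an effective sensitivity per doubling. The co2 and temp arrays below are illustrative placeholders, not the commenter's actual UAH and Mauna Loa series.]

    ```python
    import numpy as np

    # Placeholder series; real use would pair annual Mauna Loa CO2 with a
    # temperature record such as the 5-year smoothed UAH series described above.
    co2 = np.array([340.0, 360.0, 380.0, 400.0])   # ppm, illustrative only
    temp = np.array([0.00, 0.10, 0.20, 0.28])      # deg C anomaly, illustrative only

    # Fit T = b1 + b2 * log2(CO2 / CO2_ref); b2 is the effective warming per
    # doubling implied by attributing all of the temperature change to CO2.
    x = np.log2(co2 / co2[0])
    b2, b1 = np.polyfit(x, temp, 1)
    print(f"effective sensitivity: {b2:.2f} C per doubling")
    ```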

    • An effective TCR in excess of 2 can easily be justified by fitting temperature to CO2. The scaling here is exactly 100 ppm per degree and as you can see it works well. For the range of 300-400 ppm, the sensitivity is 2.4 C per doubling.
      http://woodfortrees.org/plot/esrl-co2/mean:240/mean:120/derivative/scale:12
      Using much lower numbers dangerously underestimates past warming, which is odd because supposedly their method considered it.
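
      [A quick check of the arithmetic behind the scaling quoted above: if the 300-to-400 ppm rise corresponds to 1 C of warming, the implied sensitivity per doubling is 1 / log2(400/300).]

      ```python
      import math

      dT = 1.0  # deg C attributed to the 300-400 ppm rise, per the scaling quoted above
      sensitivity = dT / math.log2(400 / 300)
      print(f"{sensitivity:.2f} C per doubling")  # about 2.4
      ```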

      • Jim D: You have been corrected so often about your misleading method and WfT operations that blame the whole T-change on CO2, instead of doing it the correct way: net T-change vs. net forcing change! IMO you do it intentionally and no further discussion is useful.

      • This relation derived with 60 years of data is better guidance because it accounts for factors proportionate to CO2. CO2 accounts for 70-100% of that change, and the rest is correlated enough to make CO2 a good fit on its own to the forcing change. There are reasons these curves look so similar and why projecting warming from CO2 alone works so well for this period.

  19. Do we really need to listen to people who still think that urban areas show warming more than rural areas? BEST proved that rural areas warmed at the same rate.

    Do we really need to listen to people who think round stations are “contaminated” even though we all know (as Dr. Curry knows) that statistically stations that show less warming than they should are just as common as those that show more warming than they should.

    I like reading and listening to climate scientists to get the science. Policy should follow the science, not the other way around.

    • statistically stations that show less warming than they should are just as common as those that show more warming than they should.

      “Statistically” stations show the temperature that occurs at that station; whoever said they show something different than they should? Stations are placed to measure temperature at that place, and the temperature they measure is the temperature they should measure. If temperatures are different in different kinds of places, we learn from that; we don't just say it must be wrong.

    • … and if the “climate scientists” are actually a bunch of hucksters lining their (and their crony buddies') pockets at the expense of the poor and middle class, then it looks to me like you are getting pseudo-science. Suggest you always be wary of any group that scurries into the dark when faced with honest inquiry. Fact is, we have no idea what increasing levels of CO2 will bring – simply beyond our ability to determine. Use energy wisely, and we will be just fine.

      • That’s some serious “but what if…” absurdity. But hey, what if all scientists who are pretending that CO2 is not the primary forcing are nothing but hucksters? Then it looks to me like you are being duped by the oil and gas industries!

        Remember how Heartland was paid by the cigarette companies to tell us that cigarette smoking did not cause cancer? Took some people several decades to finally admit they were duped.

        Fact is, we do know that CO2 is the primary forcing.

      • No it is the sun that is the primary forcing.

    • Geoff Sherrington

      Scott on UHI,
      My home city of Melbourne has been studied for years as a candidate for UHI effect.
      University scientists think the effect is real.
      Would you like to read this and linked references and report back on UHI? Geoff
      https://www.researchgate.net/publication/266267164_The_urban_heat_island_in_Melbourne_drivers_spatial_and_temporal_variability_and_the_vital_role_of_stormwater

      • Steven Mosher

        Melbourne
        Yes it has UHI
        NO, the UHI does not impact the global record

        WHY?
        1. It is rather rare. Cities that large make up a tiny portion of all records.
        2. If you are smart your algorithm will DE BIAS the series.

        Here; Raw data is a trend of 1.07
        De Biased: .42
        http://berkeleyearth.lbl.gov/stations/151813

        But Hey, Skeptics live in a world where they never check the data.
        1. They never actually COUNT the number of stations that are in high
        population centers
        2. They never talk about adjustments that cool the record.

        But here is a question for you.

        Of the 40K stations in the database, where do you think Melbourne ranks in terms of population density?

      • Geoff Sherrington

        Hi Steven,
        Much of the Australian historic temperature data are from max/min thermometers that do not record the time of day when their maxima or minima were reached. From other studies, like the Melbourne Uni ones, we can surmise that the Tmax and Tmin will not usually happen at a time when UHI is strong for the day. So then you suggest we use a methodology to take a prescribed amount of temperature from a reading that was not showing it in any case.
        Then you say, well, if it was not showing UHI, where is the argument?
        The argument comes at transitions to other methods of recording Tmax and Tmin, such as selecting the highest and lowest of 24 hourly readings each day. While this can be valid if self-contained, it is not valid to transition from the old Tmax/Tmin with empirical UHI adjustment, to the later method.
        The empirical adjustment is also wrong, wrong, wrong for several reasons, one just given, another that there is a lack of study of the relation between population and UHI (though I am not in denial that there might be one – it is simply fiendishly hard to get it right in every case because critical metadata are not there).
        UHI is a big problem. It cannot be satisfactorily adjusted in hindsight from LIG thermometers with max/min pegs. I've spent day after day trying to find ways to account for it in Australian historic data and have failed to make progress every time. There is no useful difference between urban and rural, because the more rural you get, the worse the data quality in general and noise overtakes the effort before you can get to an answer. Besides, it is easy to imagine scenarios where a 1 person population creates high UHI effects.
        These are some reasons why I prefer UAH data when its application is appropriate. Yes, I have read and re-read the adjustments that are applied. There is a difference between logical, OK adjustments and adjustments made because the data look better for them. The UAH data satisfy these needs, in my view, better than any other global scheme including BEST.
        You ask “Of the 40K stations in the database, where do you think Melbourne ranks in terms of population density?”. I guess I know the rough answer to that, but for reasons just given (and more not) it is the wrong question. The right question is “What damage have we done to data integrity by attempting to make illogical, under-researched and capricious adjustments for UHI?” Face it, UHI happens and it can have a large effect. The trouble is with “can have” that cannot be turned into “does have”.
        Keep up the good work. You’ll catch up soon, we hope. Geoff.

    • Beta Blocker

      Scott Koontz: “Policy should follow the science, not the other way around.”

      Assuming you believe that settled science predicts global warming to an extent which must be considered dangerous to the health of the planet; and that we must greatly reduce the world’s carbon emissions to avoid serious damage to ourselves and to the environment, then what would be your proposed carbon reduction targets, implemented over what period of time?

      What governmental policies, what technical and political strategies, and what kinds of specific actions would you propose be adopted in furtherance of those carbon reduction targets, as they are to be implemented over the timeframes you propose?

      • The question could be turned around. Based on what you know so far, is it better to be at 2100 with (a) 600-700 ppm and rising CO2 levels or (b) less than 500 ppm and CO2 stabilized or declining. If you were setting a target, which would you favor?

      • Beta Blocker

        Jim D: “The question could be turned around. Based on what you know so far, is it better to be at 2100 with (a) 600-700 ppm and rising CO2 levels or (b) less than 500 ppm and CO2 stabilized or declining. If you were setting a target, which would you favor?”

        Defenders of today’s mainstream climate science talk endlessly about the truth of the science, but say very little concerning what specific actions must be taken, and what kinds of difficult choices must be made, to get from here to there in terms of achieving specifically-stated emission reduction targets over a specifically-stated carbon reduction schedule.

        Scott Koontz and Jim D, you are being given a golden opportunity here to break that pattern. Let’s see if you will take it.

        OK, the question I originally asked was straightforward, and so why does it need to be turned around? But let’s play Jim D’s game and suppose that we as a nation decide to choose his option (b) as a basis for establishing America’s carbon reduction targets, as would be stated in terms of percent reduction from a Year 2020 baseline.

        What specific emission reduction targets would you propose, stated in terms of percent reduction from a Year 2020 baseline? What would be your proposed schedule for achieving those emission reduction targets? What governmental policies, what technical and political strategies, and what kinds of specific actions would you propose be adopted in furtherance of those carbon reduction targets, as they are to be implemented over the timeframes you would propose?

      • It needs to be turned around because first you need to state what an ideal world would look like in 2100, then figure out the technology to get there. Get the questions in the right order. If you don’t agree that a stabilized climate below 500 ppm is better than 650 ppm and rising, we can stop there and discuss that instead because you are then saying no need to even try. “What do we want?” is the first question, “How do we get there?” is the second in any policy discussion.
        As for numbers, I would say (caveat: my own numbers) that even a 50% reduction in emission rates between 2020 and 2100 would stabilize the climate at 2 C. That works out at reducing by 2.5 GtCO2 per decade from today’s 40 GtCO2 emission rate. This averages less than 1% per year, which I don’t think is asking the impossible. The current growth rate is still 2% per year, so a turnaround is needed from growth to reduction, but modest.
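
        [A quick check of the reduction arithmetic in this comment, taking the quoted 40 GtCO2/yr starting rate and a 50% cut by 2100 at face value.]

        ```python
        start_rate, end_rate = 40.0, 20.0   # GtCO2 per year in 2020 and in 2100, as quoted
        years = 2100 - 2020

        per_decade_cut = (start_rate - end_rate) / (years / 10)        # GtCO2/yr cut per decade
        avg_annual_change = (end_rate / start_rate) ** (1 / years) - 1
        print(per_decade_cut)              # 2.5
        print(f"{avg_annual_change:.2%}")  # about -0.9% per year, i.e. under 1% per year
        ```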

      • “then what would be your proposed carbon reduction targets, implemented over what period of time?”

        Shouldn't that be left to the governments? The problem is that the world has been “successfully” stalled by the same tactics used by cigarette companies and the “scientists” they paid to announce that cigarettes were not harmful to you. Note that we are still hearing from the same groups that second-hand smoke is OK for your kids.

        What would I personally propose? Who cares? Nothing will happen as long as the scientists who are paid to muddy the waters convict the politicians (who are paid by oil interests) stop messing around. Throwing a snowball on the senate floor is a prime example of the lunacy.

  20. So here we are 30 years later.

    Winters are a little bit warmer in many places. We are told sea level has increased by about three inches but we don’t notice when we go to the same beaches of 30 years ago. There is less sea-ice around the poles, but we still get about the same snow. It melts a little bit earlier most years. And that is about everything we notice. What a huge let-down after 30 years of massive hysteria from the media and the scientists. Nobody notices anything except it is a little bit nicer. So much for global warming.

    • We are told sea level has increased by about three inches but we don’t notice when we go to the same beaches of 30 years ago.

      Profoundly ridiculous.

      • I’ve been going to the same beach-front house since the early 70’s. Coastline and beaches haven’t changed in the least. Waterfront is still at the same distance. In the early 90’s I told my parents we should sell it because sea level rise would make it lose all value. I’m glad they didn’t pay any attention to me. It is worth a lot more now, and at the present sea level rise rate could go for over a century more.

      • I am right now at the exact same beach I first visited in 1962 and every year since. (NW USA) While I could not tell you what the sea level actually is, there has been NO NEED for anyone to do anything about sea level rise in the last 60 years.

      • What’s ridiculous is that Scripps Institution of Oceanography makes no contingency plans to vacate its lower (beachfront) campus…while its grant-grabbing employees feed polemical fodder for catastrophic sea-level-rise scenarios to the media..

    • United States Naval Academy: where they can’t hit much of anything with a torpedo unless they know the temperature of the ocean water through which it will travel. Since probably at least WW2:

      https://i.imgur.com/rqbjazg.png

      • “The citizens of Hyde County have dealt with flooding issues since the incorporation of Hyde County in 1712,” said Kris Noble, the county’s planning and economic development director. “It’s just one of the things we deal with.”

        In Hyde County, their Tar Heel friends to the south, they have had subsidence rates of 1” or more per decade. It’s well known the Eastern Seaboard is sinking like a rock. Even at the time of Benjamin Franklin’s birth, that area had flooding problems.

        To really tug at the heartstrings, you could have shown a picture of Bangkok flooding, where subsidence rates in the past were 40 times the GMSLR.

    • Winters are a little bit warmer in many places.

      Last winter was a lot colder in many places. Go Figure!

    • There is less sea-ice around the poles, but we still get about the same snow.

      Actually, I have seen news stories in recent years of snow in Rome, the Holy Land and even Egypt.

      Actual measured precipitation has increased as temperatures increased. The Texas State Climate Scientist has said that is true in Texas over the past hundred years or so.

  21. ” add the 2015/16 El Nino warming to all three Scenarios”
    This is getting to be desperate stuff. Why not add the two big La Nina’s (2008 and 2011/12) which were primarily responsible for the downward excursion, which the 2016 El Nino restored?

    In fact, of course, ENSO events were not part of Hansen’s predictions.

  22. Hansen: The Sky Is Falling
    Hansen et al. (2016) continue to trumpet massive climate alarms requiring “negative CO2” etc. That would “only” require “89-535 trillion dollars”! What then would keep us from descending into the next glaciation? Why bother with scientific validation when you can reap windfalls from alarms?
    Young People’s Burden: Requirement of Negative CO2 Emissions

    Abstract
    Global temperature is a fundamental climate metric highly correlated with sea level, which implies that keeping shorelines near their present location requires keeping global temperature within or close to its preindustrial Holocene range. However, global temperature excluding short-term variability now exceeds +1°C relative to the 1880-1920 mean and annual 2016 global temperature was almost +1.3°C. We show that global temperature has risen well out of the Holocene range and Earth is now as warm as it was during the prior (Eemian) interglacial period, when sea level reached 6-9 meters higher than today. Further, Earth is out of energy balance with present atmospheric composition, implying that more warming is in the pipeline, and we show that the growth rate of greenhouse gas climate forcing has accelerated markedly in the past decade. The rapidity of ice sheet and sea level response to global temperature is difficult to predict, but is dependent on the magnitude of warming. Targets for limiting global warming thus, at minimum, should aim to avoid leaving global temperature at Eemian or higher levels for centuries. Such targets now require “negative emissions”, i.e., extraction of CO2 from the air. If phasedown of fossil fuel emissions begins soon, improved agricultural and forestry practices, including reforestation and steps to improve soil fertility and increase its carbon content, may provide much of the necessary CO2 extraction. In that case, the magnitude and duration of global temperature excursion above the natural range of the current interglacial (Holocene) could be limited and irreversible climate impacts could be minimized. In contrast, continued high fossil fuel emissions today place a burden on young people to undertake massive technological CO2 extraction if they are to limit climate change and its consequences. Proposed methods of extraction such as bioenergy with carbon capture and storage (BECCS) or air capture of CO2 have minimal estimated costs of 89-535 trillion dollars this century and also have large risks and uncertain feasibility. Continued high fossil fuel emissions unarguably sentences young people to either a massive, implausible cleanup or growing deleterious climate impacts or both.

    https://arxiv.org/ftp/arxiv/papers/1609/1609.05878.pdf

    • Actually, it is arguable. Illustrates the use of manufactured nonsense to generate panic and vast over-reaction that will doom millions to poverty and lower living standards. Perfect example of the use of fear to bludgeon folks into kowtowing to the radical left.

  23. Where to start with this shoddy analysis? How about with something simple:

    “Hansen added in an Agung-strength volcanic event in Scenarios B and C in 2015, which caused the temperatures to drop well below trend, with the effect persisting into 2017. This was not a forecast, it was just an arbitrary guess, and no such volcano occurred.”

    Had McKitrick and Christy read Hansen et al. (1988) more carefully, they would have seen that the GISS modeling was initiated in 1983 and continued through 1985. With respect to volcanic aerosols, the authors explained their modeling parameters as follows:

    “Stratospheric aerosols provide a second variable climate forcing in our experiments. This forcing is identical in all three experiments for the period 1958-1985, during which time there were two substantial volcanic eruptions, Agung in 1963 and El Chicón in 1982. In scenarios B and C, additional large volcanoes are inserted in the year 1995 (identical in properties to El Chicón), in the year 2015 (identical to Agung), and in the year 2025 (identical to El Chicón), while in scenario A no additional volcanic aerosols are included after those from El Chicón have decayed to the background stratospheric aerosol level. The stratospheric aerosols in scenario A are thus an extreme case, amounting to an assumption that the next few decades will be similar to the few decades before 1963, which were free of any volcanic eruptions creating large stratospheric optical depths. Scenarios B and C in effect use the assumption that the mean stratospheric aerosol optical depth during the next few decades will be comparable to that in the volcanically active period 1958-1985.”

    Note that a high level of stratospheric aerosols *did* occur because of Mount Pinatubo’s 1991 eruption, so McKitrick’s and Christy’s gratuitous shot at Hansen et al. is even more misplaced.

    • Geoff Sherrington

      Hansen quoted above “which were free of any volcanic eruptions creating large stratospheric optical depths. ”
      Silly me, I thought volcanoes might create smaller optical depths. Geoff.

      • ‘Optical depth’ was used synonymously with ‘optical thickness’ in the paper.

      • Geoff Sherrington

        Magma,
        Do you mean that material brought into the stratosphere increases the optical depth or its synonym optical thickness? Surely they are decreased. Geoff.

  24. “How accurate were James Hansen’s 1988 testimony and subsequent JGR article forecasts of global warming?”

    Hansen’s Senate presentation was based on research that he and colleagues had been working on since at least 1982. Their 24-page 1988 JGR paper was submitted January 25, 1988, five months before Hansen’s Senate testimony. The use of “subsequent” in this commentary is misleading, or deceptive.

  25. A rather curious but misleading statement at the top of the post:

    Hansen didn’t graph his CO2 concentration projections, but he described the algorithm behind them in his Appendix B.

    What Hansen, Fung, Lacis, Rind, Lebedeff, Ruedy, Russell, and Stone did was graph the FORCINGS

    https://photos1.blogger.com/blogger/4284/1095/1600/Forcings.0.jpg

    • “was graph the FORCINGS”
      And they also gave the explicit formulae that related forcings to concentrations.

    • To my humble eyes, the vertical axis reads °C; that’s not forcing, that’s warming, and it includes an over-the-top climate sensitivity.

      • Forcings were stated as changes in temperature in the 1988 article. FWIW there is a simple proportionality.

        Your point is somewhat like an organiker friend of Eli’s who insists that it is wrong to use wavenumbers for just about everything.
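
        A minimal sketch of that proportionality, assuming a commonly quoted no-feedback value of about 0.3 K per W/m2; this constant is an illustrative assumption, not necessarily the one used in the 1988 paper:

        ```python
        # Hedged sketch: expressing a radiative forcing (W/m^2) as an equivalent
        # "no-feedback" temperature change, the kind of proportionality mentioned
        # above. LAMBDA_0 is an assumed, commonly quoted approximation.
        LAMBDA_0 = 0.30  # K per (W/m^2), assumed no-feedback sensitivity

        def forcing_to_delta_t0(forcing_wm2: float) -> float:
            """Convert a forcing in W/m^2 to an equivalent no-feedback warming in K."""
            return LAMBDA_0 * forcing_wm2

        # Example: a ~4 W/m^2 CO2 doubling maps to ~1.2 K before feedbacks.
        print(forcing_to_delta_t0(4.0))
        ```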

  26. To continue a bit. The emphasis on the global temperature curve is indeed naive. The paper is much richer than that, predicting patterns of warming. Eli has something to say about that

    http://rabett.blogspot.com/2018/07/hansen-1988-retrouve.html

    https://2.bp.blogspot.com/-f6UtO9sKY9s/WzuFYvSetFI/AAAAAAAAEZQ/2E7LCXyTprIcSFPjcMaY4tE8amOpWI6UgCLcBGAs/s1600/Untitled.png

    • Eli

      I read your interesting article.

      What was missing from your analysis, and from Hansen’s original screed, are the coloured anomalies for the last time there was considerable Arctic amplification, from around 1918 to around 1942.

      The modern warming came after a cold period around 1955 to 1969 or so and consequently it gives a picture that is out of context and exaggerated if it is not related to the earlier warming.

      Hansen used the Mitchell curves and other data and must have been aware of the earlier warming.

      Tonyb

    • Indeed – one reason the response is lower than expected is because the Hot Spot failed to materialize. This means both the negative lapse rate feedback and much of the positive water vapor feedback did not occur as modeled.

      The result is global warming at the low end, which is exactly what the observations indicate.

      • Monolithic (single-box) energy balance models assume that the land and ocean warm at the same rate when they don’t, and with the delay in ocean warming comes a delay in the water vapor feedback and the hot spot. This is why EBMs fit over historical data would underestimate the ECS. They are just too simple to represent the multi-box system that is the climate system.
        “Everything should be made as simple as possible, but not simpler.” – Albert Einstein
        Skeptics complain that the climate system is too complex to model, but then, while saying that, they also rely on this type of simplicity. Go figure.
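
        For readers unfamiliar with the single-box framing being criticized, a minimal sketch of such a model is below; the heat capacity, feedback parameter, and forcing ramp are illustrative assumptions, not fitted values:

        ```python
        import numpy as np

        # Hedged sketch of a single-box energy balance model:
        #   C * dT/dt = F(t) - lam * T
        # The constants below are illustrative assumptions chosen only to show
        # the single-timescale behaviour criticized above, not published values.
        C = 8.0      # W yr m^-2 K^-1, assumed effective heat capacity
        lam = 1.3    # W m^-2 K^-1, assumed feedback parameter
        dt = 1.0     # time step, years

        years = np.arange(1982, 2018)
        F = 0.04 * (years - 1982)        # assumed linear forcing ramp, W/m^2

        T = np.zeros(len(F))
        for i in range(1, len(F)):
            T[i] = T[i - 1] + dt * (F[i - 1] - lam * T[i - 1]) / C

        print(round(float(T[-1]), 2))    # warming after the assumed 35-year ramp
        ```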

      • models assume

        You are very good at finding reasons why models have failed, but not so good at pointing out any validation or verification.

        This does not distinguish models from superstition.

      • verytallguy

        Indeed – one reason the response is lower than expected is because the Hot Spot failed to materialize. This means both the negative lapse rate feedback and much of the positive water vapor feedback did not occur as modeled.

        Not according to the literature.

        http://iopscience.iop.org/article/10.1088/1748-9326/10/5/054007/meta

        Please provide a citation for your assertion.

      • Please provide a citation for your assertion.

        Of course, the IPCC is an obvious choice:
        http://jonova.s3.amazonaws.com/graphs/hot-spot/hot-spot-model-predicted.gif

        For those willing to look, linear trends are quite calculable and demonstrative from NOAA’s RATPAC, from RSS’ MSU, and from NASA’s MSU:
        https://i2.wp.com/turbulenteddies.files.wordpress.com/2018/06/hotspot_2017.png

        Now, though the further back one goes, the more instrument and other RAOB uncertainties one encounters, one can calculate from the beginning of the RAOB era, 1958, and find a hint of a hot spot:
        https://i0.wp.com/turbulenteddies.files.wordpress.com/2018/07/hot_spot_1958_thru_2017.png

        But neither the intensity, nor the vertical or horizontal extent would appear to constitute a hot spot.
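
        For anyone who wants to reproduce that kind of calculation, a minimal sketch of a least-squares trend over an annual anomaly series follows; the anomaly values are synthetic placeholders, not RATPAC, RSS, or UAH data:

        ```python
        import numpy as np

        # Hedged sketch: ordinary least-squares linear trend of an annual anomaly
        # series, the kind of calculation referred to above. The series here is a
        # synthetic placeholder, not any of the named radiosonde or MSU data sets.
        years = np.arange(1958, 2018)
        rng = np.random.default_rng(0)
        anoms = 0.012 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

        slope, intercept = np.polyfit(years, anoms, 1)
        print(f"trend: {slope * 10:.3f} K/decade")
        ```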

      • verytallguy

        An actual literature citation which backs your assertions that “the Hot Spot failed to materialize.”

        Not annotated pictures.

      • An actual literature citation which backs your assertions that “the Hot Spot failed to materialize.”

        Not annotated pictures.

        Consider if you are suffering from de_ni_al, secondary to confirmation bias.

        Choosing to ignore even the IPCC and choosing not to examine the data yourself is certainly consistent.

      • verytallguy

        The citation does not back up your claim.

        Which was: “the Hot Spot failed to materialize. This means both the negative lapse rate feedback and much of the positive water vapor feedback did not occur as modeled.”

        A citation which claims this, please.

      • TE, you either like or don’t like monolithic EBMs being used for ECS. I think they miss observed delaying processes which make them way too simple for ECS.

      • Re: “Indeed – one reason the response is lower than expected is because the Hot Spot failed to materialize.”

        You’ve gone back to repeating your usual falsehoods on this topic. How tedious. The hot spot exists, since the tropical upper troposphere warms faster than the tropical near-surface. That has been shown in paper after paper. For example:

        In satellite data:
        #1 : “Contribution of stratospheric cooling to satellite-inferred tropospheric temperature trends”
        #2 : “Temperature trends at the surface and in the troposphere”
        #3 : “Removing diurnal cycle contamination in satellite-derived tropospheric temperatures: understanding tropical tropospheric trend discrepancies”, table 4
        #4 : “Comparing tropospheric warming in climate models and satellite data”, figure 9B

        In radiosonde (weather balloon) data:
        #5 : “Internal variability in simulated and observed tropical tropospheric temperature trends”, figures 2c and 4c
        #6 : “Atmospheric changes through 2012 as shown by iteratively homogenized radiosonde temperature and wind data (IUKv2)”, figures 1 and 2
        #7 : “New estimates of tropical mean temperature trend profiles from zonal mean historical radiosonde and pilot balloon wind shear observations”, figure 9
        #8 : “Reexamining the warming in the tropical upper troposphere: Models versus radiosonde observations”, figure 3 and table 1

        In re-analyses:
        #9 : “Detection and analysis of an amplified warming of the Sahara Desert”, figure 7
        #10 : “Westward shift of western North Pacific tropical cyclogenesis”, figure 4b
        #11 : “Influence of tropical tropopause layer cooling on Atlantic hurricane activity”, figure 4
        #12 : “Estimating low-frequency variability and trends in atmospheric temperature using ERA-Interim”, figure 23 and page 351

        Re: “This means both the negative lapse rate feedback and much of the positive water vapor feedback did not occur as modeled.”

        The hot spot is not about the positive water vapor feedback; it’s about the lapse rate feedback. And positive water vapor feedback has been observed, including in the troposphere. Please go do some reading on this. For example:

        “Upper-tropospheric moistening in response to anthropogenic warming”
        “Global water vapor trend from 1988 to 2011 and its diurnal asymmetry based on GPS, radiosonde, and microwave satellite measurements”
        “An analysis of tropospheric humidity trends from radiosondes”
        “An assessment of tropospheric water vapor feedback using radiative kernels”

        Re: “Of course, the IPCC is an obvious choice:”

        What you just did was horribly misleading. As I’ve noted elsewhere, you’re using a misleading color-scale and a misleading time-frame that renders your model-data comparisons invalid.
        [Hint: the two panels from your fabricated image don’t cover the same time-frame, and they don’t use the same color-scale]
        http://blamethenoctambulantjoycean.blogspot.com/2018/02/myth-ccsp-presented-evidence-against.html

        You also conveniently left out what your own 2006 source said about the misleading, out-dated HadAT2 image you’re abusing:

        “Systematic, historically varying biases in day-time relative to night-time radiosonde temperature data are important, particularly in the tropics […]. These are likely to have been poorly accounted for by present approaches to quality controlling such data […] and may seriously affect trends”
        https://downloads.globalchange.gov/sap/sap1-1/sap1-1-final-all.pdf

        Of course, climate science didn’t stop in 2006 with the report you abused. Scientists have been working to remedy the aforementioned issues. You simply continue to ignore the relevant evidence on this, as expected. I’ll present some illustrative images depicting what you’re avoiding:

        https://twitter.com/AtomsksSanakan/status/1013105778763468800

        https://twitter.com/AtomsksSanakan/status/1013106417170108417

      • Since 1979, HotSpot?

        RATPAC? No.
        RATPAC only reliable stations? No.
        UAH MSU? No.
        RSS MSU? No.
        NOAA STAR MSU? No.

        By the way, with regards to IUK, the data reputedly indicating a hot spot,
        do know that the data set the paper was based on is flawed.

        Per the web site:
        “If you downloaded data prior to 1 May 2015, please obtain the corrected version 2.01 or 2.2015. The original version 2.0 had a date-registration error which affected temperatures.”

      • Re: “Since 1979, HotSpot?”

        Like usual, you’re just willfully ignoring the evidence cited to you. That is, after all, how you dodge evidence of the hot spot.

        And congratulations on cherry-picking 1979 as your start point, when it’s well-known that radiosonde analyses under-estimate post-1970s tropospheric warming trends, due to 1980s changes in radiosonde equipment. I even quoted this being pointed out by the source you abuse. Yet you still dodge the point. Amazing.

        We both know that explains why you don’t want to look at pre-1959 radiosonde data, lest you run into the hot spot that you’re committing to denying. For example, take the outdated HadAT2 analysis you initially cited via JoAnne Nova’s garbage, denialist blog:

        https://4.bp.blogspot.com/-ylmxpHIufh4/Wq3B7WTk8GI/AAAAAAAABTg/twZeoZnXglsQLwJoXKaMxzUjJJbIDvp1ACLcBGAs/s1600/HadAt2%2Bimage.png
        (Figure 11 of: “Revisiting radiosonde upper-air temperatures from 1958 to 2002”)

        Re: “RATPAC? No.”

        Wrong.

        https://4.bp.blogspot.com/-f2Gv1rCXRAA/Wq24QHsiXHI/AAAAAAAABTE/rjrqwQWftLckYf9vRhDqBRGhn-zIb1AnACLcBGAs/s1600/RATPAC%2Bimage.PNG
        (Figure 4 of: “Radiosonde Atmospheric Temperature Products for Assessing Climate (RATPAC): A new data set of large-area anomaly time series”)

        Re: “By the way, with regards to IUK, the data reputedly indicating a hot spot, do know that the data set the paper was based on is flawed.”

        Did you know the subsequent versions of IUKv2 still show the hot spot, as even denialists/contrarians like John Christy and Roy Spencer admit?

        Seriously, your evasions are becoming tedious.

        Re: “UAH MSU? No.
        RSS MSU? No.
        NOAA STAR MSU? No.”

        Wrong again:

        http://journals.ametsoc.org/na101/home/literatum/publisher/ams/journals/content/clim/2015/15200442-28.6/jcli-d-13-00767.1/20150309/images/large/jcli-d-13-00767.1-t4.jpeg
        (Table 4 of: “Removing diurnal cycle contamination in satellite-derived tropospheric temperatures: understanding tropical tropospheric trend discrepancies”)

        RSS’ amplification ratio is anomalously low, due to RSSv3 under-estimating mid- to upper-tropospheric warming. RSS corrected this in their subsequent work, resulting in amplification ratios on par with NOAA/STAR. That’s covered in papers such as:

        “Sensitivity of satellite-derived tropospheric temperature trends to the diurnal cycle adjustment”
        “Troposphere-stratosphere temperature trends derived from satellite data compared with ensemble simulations from WACCM”

        So RSS and NOAA/STAR show the hot spot. UAH is the outlier that doesn’t.

        By the way, you conveniently overlooked other satellite-based analyses, such as the UW analysis that shows the hot spot and which I cited above. I wonder why (I already know why; it’s because they show the hot spot that you’re committed to denying).

        https://twitter.com/AtomsksSanakan/status/1013529128983826434

        https://twitter.com/AtomsksSanakan/status/1013528056261218306

        And let’s not even get started on re-analyses like ERA-I that also show the hot spot you’re committed to denying:

        https://twitter.com/AtomsksSanakan/status/1013236156929060864

        You really should know better at this point, since ERA-I was pointed out to you before and you admitted it showed the hot spot. But here you are, conveniently cherry-picking analyses in order to avoid that point. Amazing.

        “Hmmm… That is interesting. The ERA indicates a Hot Spot. It’s skinny and subdued, but it’s there.”
        https://judithcurry.com/2016/08/01/assessing-atmospheric-temperature-data-sets-for-climate-studies/#comment-800423

  27. “Second, applying a post-hoc bias correction to the forcing ignores the fact that converting GHG increases into forcing is an essential part of the modeling.”
    There seems to be some ignorance about GCMs here. Converting GHG increases into forcing is not a part of the modelling. GCMs like Hansen’s work with gas concentrations directly. Forcings are diagnosed, frequently from the model output.

  28. Pingback: Researchers: Father of Global Warming's Theory Devastated by Actual Data | PoliticsNote

  29. “Inferred” is more accurate. “Diagnosed” conveys more assurance than is properly warranted, considering the large uncertainties inherent in this entire exercise.

  30. There are far more important kinds of absurdity in this debate about Hansen 1988.

    (1) It is cited as evidence in the climate policy debate about the validity of climate models. We’re told it is valid evidence because of blog posts. No matter what journalists say, it’s weak tea.

    (2) What happened to the peer-reviewed literature? If this was a milestone 25 year climate prediction, we should have seen papers about it in Nature or Science. When will we see papers about the 30 year history? With the fate of the world at stake, tipping points and such, I hope they’re working fast.

    (3) Why fiddle with these “adjustments” to the model? Why not rerun it with actual emissions and volcanoes since its publication? That also eliminates any effect from tuning.

    I have found one (only one) published – but not peer-reviewed – paper attempting to do so in a major journal: “Skill and uncertainty in climate models” by Julia C. Hargreaves at WIREs: Climate Change, July/Aug 2010. She attempted to input the actual emissions data since 1988 and compare the resulting forecasts with actual temperatures. Hargreaves has a PhD in astrophysics from Cambridge.

    The result: “efforts to reproduce the original model runs have not yet been successful.”

    Gated: http://onlinelibrary.wiley.com/doi/10.1002/wcc.58/abstract

    Ungated copy: http://www.jamstec.go.jp/frsgc/research/d3/jules/2010%20Hargreaves%20WIREs%20Clim%20Chang.pdf

    The final irony: given the decades of work to keep the archives of methods and data about climate change in the right hands, what might be the definitive test is ruined by a failure to archive.

    • ” She attempted to input the actual emissions data since 1988 and compare the resulting forecasts with actual temperatures. “
      Really? Where? I can’t see anything in the text to indicate that she did that, and I think it is very unlikely. What she did do was to analyse the skill of Hansen’s results using bayesian-style methods.

      It’s true that the sentence quoted about “efforts to reproduce the original model runs” suggests that somebody might have, but no reference is cited. In fact the GISS Model II code used at the time is archived and available to download and use (there has even been a support forum, although the link is now dead). It is not known if this is the version that Hansen used for this paper; the computations went on over years. Some Scenario A results are from 1983.

      • Nick,

        (1) Hargreaves explicitly says that she wanted to run the code to consider other variables and examine the various outputs of the model.

        (2) Got to love climate science! Rather than run the model and produce hard results, we get advocates for policy action saying that it’s easy to run. They prefer to talk big about the model rather than re-run it (or another contemporaneous model) and get definitive results.

        Thirty years of these games have produced little public support for major policy action. Perhaps 30 years more of this will work.

        But, as they say in AA, “Insanity is …”

      • Nick Stokes: Really? Where? I can’t see anything in the text to indicate that she did that,

        I agree with you.

      • There are two issues here, as usual. Since the paper provides algebraic formulae for the forcings from the different greenhouse gases, it would be straightforward to input the observed emissions data (this is what Gavin Schmidt did at Real Climate):

        http://www.realclimate.org/images/anthro_h88_eff.png
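
        As a hedged illustration of that concentration-to-forcing step, the sketch below uses the widely cited Myhre et al. (1998) CO2 approximation as a stand-in; Hansen’s own Appendix B fit has a different form and coefficients and is not reproduced here:

        ```python
        import math

        # Hedged sketch of the forcing-from-concentration step, using the
        # Myhre et al. (1998) simplified expression F = 5.35 * ln(C/C0) as a
        # stand-in rather than Hansen's 1988 Appendix B formulae.
        def co2_forcing(c_ppm: float, c0_ppm: float = 280.0) -> float:
            """Approximate CO2 radiative forcing in W/m^2 relative to c0_ppm."""
            return 5.35 * math.log(c_ppm / c0_ppm)

        print(round(co2_forcing(407.0), 2))  # forcing at the 2017 Mauna Loa value
        ```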

        The second issue is the more detailed geographic distribution of the warming, for which you would have to get the model code to run. Anybunny who has ever tried to do so knows that this is a real time-burner.

        Third, Julia is on Twitter. Ask her.

    • Re: “(2) What happened to the peer-reviewed literature? If this was a milestone 25 year climate prediction, we should have seen papers about it in Nature or Science. When will we see papers about the 30 year history?”

      You never see relevant papers, because you willfully ignore them. I cited some literature on this for you, no less than three times:

      https://twitter.com/AtomsksSanakan/status/1010692624855044096
      https://twitter.com/AtomsksSanakan/status/1010693846475436032
      https://twitter.com/AtomsksSanakan/status/1010692193651187712

      You simply ignore the literature, allowing you to repeat your misleading distortions here.

      Re: “The result: “efforts to reproduce the original model runs have not yet been successful.””

      You’re quote-mining again, even though you’ve been previously corrected on this. Stop.

      https://twitter.com/AtomsksSanakan/status/1010697244264271872

      • Atomsk’s Sanakan: That looks like a quote-mine.

        What exactly is your objection to a “quote mine”? What was written was intended by the author to be taken seriously, and most likely intended to be true. Unless the meaning of the quote has been distorted by removing it from its context (not a problem with this quote), the quote should not be ignored just because it is, say, uncomfortable, or not in line with someone’s “pursuit of happiness”.

      • Re: “What exactly is your objection to a “quote mine”?”

        Do you know what a “quote-mine” is? Because your question suggests that you don’t. The following page will give you a laymen’s level introduction:
        https://en.wikipedia.org/wiki/Quoting_out_of_context

        With that in place, it should be blatantly obvious (to anyone who’s intellectually honest) why I object to quote-mines.

      • Atomsk’s Sanakan: Re: “The result: “efforts to reproduce the original model runs have not yet been successful.””

        You’re quote-mining again, even though you’ve been previously corrected on this. Stop

        OK, so how did removing the context change the meaning? Efforts to reproduce the model runs have been successful?

      • Atomsk’s Sanakan: Once again:

        That took me to twitter (I guess, I don’t have an account), and Hargreaves’ paper, which I have already read.

        Did someone reproduce the original model runs?

      • “Did someone reproduce the original model runs?”
        That is the problem with quote mining. We actually have no facts. There are no references. It is a passing comment in an unpublished paper. The quote suggests only that it hasn’t been done successfully (to JH’s knowledge); it is quite consistent with the proposition that it hasn’t been done at all. Or if it has, JH hasn’t heard of it.

      • Nick Stokes: It is a passing comment in an unpublished paper. The quote suggests only that it hasn’t been done successfully (to JH’s knowledge); it is quite consistent with the proposition that it hasn’t been done at all. Or if it has, JH hasn’t heard of it.

        Has it been done? It would be hard to provide a full reference list of all the times it has not been done.

        Granting the soundness of Hausfather’s comments, would it not be a good idea to run the model now with reference to diverse scenarios over the next 30 years?

        You and I could compare its accuracy to the accuracy of Javier’s “conservative” forecast, and to other forecasts.

      • “would it not be a good idea to run the model now with reference to diverse scenarios over the next 30 years”
        I guess we would find that interesting. Somebody would have to commit a lot of time and effort to it.

      • Nick Stokes: I guess we would find that interesting.

        That might be the closest we come to agreement.

      • I would donate my 1988 computer, but my wife threw it away:

        https://i.imgur.com/3FvXJ9F.png

        In other words, a ridiculous waste of time.

  31. Pingback: About Hansen’s powerful demo that climate models work! - Fabius Maximus website

  32. Peter Lang

    Ross McKitrick,

    Thank you for this interesting post. It is short, succinct and clear.

    The last sentence is worth restating:

    The bottom line is, climate science as encoded in the models is far from settled.

    We need more posts like this. And we need more scientists, economists and policy advisors who are objective and prepared to challenge the CCC (catastrophic climate change) consensus.

  33. All of the science is based on utilising the Stefan-Boltzmann equation to calculate temperatures from various algebraic manipulations of flux values and so-called radiative balance.

    These “facts” are turned into code for computer models.

    In the article linked below, I claim that the algebra used to calculate temperatures by summing discrete fluxes is mathematically wrong.

    If even the simplest model is mathematically incorrect the rest collapses.

    Someone tell me why I am wrong.

    https://www.dropbox.com/s/mko3w38vqouozpb/Simple%20Model%20of%20greenhouse%20mathematics.docx?dl=0

    “The central idea in the papers of Clausius and Thomson was an exclusion principle:”Not all processes which are possible according to the law of the conservation of energy can be realized in nature.””

    • “These “facts” are turned into code for computer models.”
      GCMs apply radiative transfer equations locally, not to spatial averages over regions.

  34. His 1981 Science paper produced a better forecast and had an equilibrium sensitivity nearer 3 C per doubling, which is more in line with today’s models than his 1988 model, which ran hotter.

    • Jim D: His 1981 Science paper produced a better forecast and had an equilibrium sensitivity nearer 3 C per doubling which is more in line with today’s models than his 1988 model which ran hotter.

      It is a shame that for policy advocacy purposes he went beyond what he had shown in the peer-reviewed literature. And it is a shame that he went along with the trick of making sure the hearing room was hot instead of air-conditioned.

      • He is different. He prefers nuclear to renewables too. IPCC estimates don’t use just his 1988 sensitivity values which would be at the upper end of the 2-4 C range they have always considered.

  35. ” Pat Michaels and Ryan Maue in the Wall Street Journal, and Calvin Beisner in the Daily Caller, disputed this.”

    If President Trump would but invite them to dine at the White House to watch the video of our hostess’s Great Debate,

    the nation might witness the greatest intellectual constellation since Warren Harding dined alone.

    https://vvattsupwiththat.blogspot.com/2018/07/forecast-futility-with-polar-bears.html

  36. They’re repeating the same nonsense Hausfather already rebutted, except now they’ve added some other distortions (such as acting as if Hansen’s projection was of lower tropospheric temperature trends).

    https://twitter.com/hausfath/status/1010240647960264705

    Oh well. There’s a reason I don’t rely on contrarian, non-peer-reviewed blogposts for my information on science.

    • Atomsk’s Sanakan: They’re repeating the same nonsense Hausfather already rebutted,

      It has frequently been remarked that if Hansen had done something different from what he actually did, his forecast would have turned out more accurate than it actually turned out — and that his model has thus survived a test.

      So, if the model, as modified with an updated Transient Climate Sensitivity, is taken as good, repeat the procedure: conjecture future atmospheric CO2 and other forcing scenarios, say crossing 3 CO2 scenarios with 3 aerosol scenarios (for a total of 9) for the next 30 years; for each scenario, run the model forecasts. Specify which scenario you think is the most probable (Germany, India, Japan and China continue to burn more coal for a while before steadying, perhaps), and which of the data sets you expect to be the most reliable; specify how you would handle an end-spurt such as the most recent El Nino swing; then in 2048 review the fit statistics (say the square root of the mean square prediction error, or Hargreaves’ skill statistic).
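
      A minimal sketch of the proposed fit statistic (root mean square prediction error), with placeholder arrays standing in for a scenario forecast and the observations:

      ```python
      import numpy as np

      # Hedged sketch of the verification step proposed above: root mean square
      # prediction error between a scenario forecast and observations. Both
      # series are placeholders, not model output or any observational data set.
      predicted = np.array([0.20, 0.30, 0.45, 0.50, 0.60])
      observed = np.array([0.15, 0.35, 0.40, 0.55, 0.50])

      rmspe = np.sqrt(np.mean((predicted - observed) ** 2))
      print(f"RMSPE: {rmspe:.3f} K")
      ```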

      • You’ve been linked to Hausfather’s discussion of this topic. Feel free to go address it, instead of relying on contrarian blogposts from people who willfully ignored the non-CO2 aspects of the projection (until they were repeatedly called on their omission).

      • Atomsk’s Sanakan, quoting Hausfather: While its true that Hansen’s “most plausible” Scenario B modestly overestimates recent warming, the reason has nothing to do with the accuracy of Hansen’s model.

        OK, if Hansen’s model is accurate, then calculate its conditional forecasts for a bunch of future emissions scenarios, as I described, and let’s see how accurate it is once we know which scenario best fits the actual emissions evolution. We now have hypotheses about why Hansen’s forecast was too high (what you called the “rebuttal” by Hausfather); they may be correct*; let’s test them against future data.

        Are you asserting that Hausfather’s rebuttal is necessarily true because it rescued Hansen’s model from the inaccuracies pointed out by others? I call that “hypothesis rescue” to contrast it with “hypothesis test”, because the reasoning was engaged in after the model was known to be inaccurate.
        Eventually some model will be supported by 30+ years of accuracy without post hoc corrections of this or that. At least I hope so. If Hansen’s model is accurate, start the new test now.

        * As Eddington’s post hoc analysis of the failures of photographic plates was correct. Post hoc explanations of how the Pons-Fleischmann apparatus really does work despite its failures have not been shown to be correct.

      • Re: “OK, if Hansen’s model is accurate, then calculate its conditional forecasts for a bunch of emissions scenarios of the future, as i described”

        How about you stop evading, and actually address what the model predicted in comparison to what was observed? If you’re not going to bother reading what’s cited to you, then you’re not going to learn anything. So once again:

        You’ve been linked to Hausfather’s discussion of this topic. Feel free to go address it, instead of relying on contrarian blogposts from people who willfully ignored the non-CO2 aspects of the projection (until they were repeatedly called on their omission).

        https://twitter.com/hausfath/status/1010240656004927491

        https://twitter.com/hausfath/status/1010240659096137728

        Re: “Are you asserting that Hausfather’s rebuttal is necessarily true because it rescued Hansen’s model from the inaccuracies pointed out by others?”

        Not at all what was said. So this time, actually address what was cited to you. No, quote-mining what Hausfather said (while ignoring the evidence he cited) does not count as addressing it. Nor does using implication by question to misrepresent what I said.

      • Atomsk’s Sanakan: You’ve been linked to Hausfather’s discussion of this topic. Feel free to go address it, instead of relying on contrarian blogposts from people who willfully ignored the non-CO2 aspects of the projection

        The test of the model is how well it handles out-of-sample data. So run the model with out-of-sample inputs, either the known inputs since 1988 (I don’t seem to have gotten a straightforward answer about whether this has been done), or scenarios of relevant possible future trajectories. I didn’t say that Hausfather’s comments are false; I said that the model can only be tested against future out-of-sample data, not the sort of explanations about how Hansen might have made a better forecast. There may be other factors causing error than what Hausfather has identified.

        Hansen in 1988 presented a serious warning. Is the model-based portion of that warning still a basis for us to be worried? Are we threatening our grandchildren if we do not invest the necessary time, labor and money to reduce CO2 emissions? (Like Hansen, I have grandchildren.) Isn’t this important? If it is then keep testing until there is a clearly correct forecast of future temps based on the atmospheric composition changes over the decades.

        Isn’t it possible, with 2018 computing, to run Hansen’s model on 16 – 64 or so scenarios; and then when we know which scenario is most closely matched by the atmosphere to look at how well the model output on that scenario matches the actual temperatures?

        To put it slightly differently, however good the computer model, we won’t have reason for confidence in any warnings/projections until we have multiple decades of accurate projections that don’t need correction.

      • MRM, oops. I was supposed to close the italics after Isn’t this important?

        sorry.

    • Re: “The test of the model is how well it handles out of sample data. “

      I’ve already showed you Hausfather testing Hansen’s model using out-of-sample data. You simply continue to ignore it. Please actually pay attention.

    • Re: “The test of the model is how well it handles out of sample data.”

      Already done by Hausfather in the material you continue to willfully ignore. The post-1980s data is out-of-sample, since it wasn’t present when Hansen presented the model. Yet you still continue to act as if no out-of-sample test was done. That’s ridiculous.

  37. Second, applying a post-hoc bias correction to the forcing ignores the fact that converting GHG increases into forcing is an essential part of the modeling. If a correction were needed for the CO2 concentration forecast that would be fair, but this aspect of the forecast turned out to be quite close to observations.

    This has probably already been pointed out, but this isn’t a post-hoc correction to the forcing in the sense that it’s changing the radiative response of the GHGs. It’s a correction because some of the assumed GHG emission pathways didn’t match what actually happened and because some other factors (aerosols, for example) were not included. The point is that the net anthropogenic forcing happened to end up lying between that assumed by Scenario B and Scenario C. Hence, if you correct for this (which is entirely reasonable), Hansen’s model turns out to have done quite well.

  38. Pingback: Researchers: Father of Global Warming’s Theory Devastated by Actual Data – Conservative US

  39. A lot of nitpicking in the comments. The main point is about trends, and the trends computed here differ only a little from those computed by Nick Stokes at his blog. That’s perhaps due to the start date.

    It is a well worn climate science method to correct for ENSO and volcanoes. That would lower the data trend further and raise the forecast trends.

    • Properly correcting for ENSO would raise the observed trend, due to prevalence of strong El Nino events in the first half of the 1982-2017 period and La Nina dominance of the second half. Accounting for volcanic events lowers the observed trend. The net difference made by ENSO+volcanic adjustment I find is very small – they cancel.

      Volcanic adjustment makes little overall difference to the model trend – the event at the end of the period cancels the event at the beginning.

      If we want to take things to this extent, we also need to account for solar variability which has seen a fairly substantial decline in reality, which presumably wasn’t in the model.
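
      For illustration only, one common way an ENSO/volcanic adjustment of this kind is carried out is sketched below, with synthetic placeholder series standing in for the ENSO index, volcanic aerosol record, and temperature data:

      ```python
      import numpy as np

      # Hedged sketch of a regression-based ENSO/volcano adjustment: regress the
      # temperature series on an ENSO index and a volcanic aerosol index, subtract
      # the fitted components, then recompute the trend. All series are synthetic
      # placeholders, not MEI, AOD, or observed temperature data.
      rng = np.random.default_rng(1)
      n = 36                                      # 1982-2017
      t = np.arange(n, dtype=float)
      enso = rng.normal(0.0, 1.0, n)              # placeholder ENSO index
      aod = np.zeros(n)
      aod[:3] = [0.10, 0.15, 0.05]                # placeholder volcanic aerosol pulse
      temp = 0.018 * t + 0.10 * enso - 0.30 * aod + rng.normal(0.0, 0.05, n)

      X = np.column_stack([np.ones(n), t, enso, aod])
      coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
      adjusted = temp - coef[2] * enso - coef[3] * aod

      print(f"raw trend:      {np.polyfit(t, temp, 1)[0] * 10:.3f} K/decade")
      print(f"adjusted trend: {np.polyfit(t, adjusted, 1)[0] * 10:.3f} K/decade")
      ```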

    • Actually, Lewis and Curry did a credible job of selecting periods to minimize volcanic effects. Their ECS is less than half that of Hansen’s model (even accounting for slower feedbacks). One would thus expect Hansen’s trends to be far too high unless he got “lucky” with his start and end dates. And that’s the problem with all this commentary on Hansen. Recent science shows it must show (on average) trends that are a lot too high.

      • Charles May

        Precisely.
        I don’t know if you have looked at it but look at my comment just below yours.
        Further, look at my earlier comment when I analyzed the UAH data.
        If CO2 is the control knob then the Lewis/Curry number works and correlates with real measured data.

  40. Charles May

    In an earlier comment I analyzed the UAH data and found support for the Lewis and Curry value of ECS as being around 1.66. My analysis of UAH was very close to that.

    I decided to look at the H4 global data. I expected the result that I got. All it does is lend further support for the lower ECS value.

    https://1drv.ms/u/s!AkPliAI0REKhgZkEEWu6C6OWNAE42g

    Here is how the fit to the raw H4 data works out.

    https://1drv.ms/u/s!AkPliAI0REKhgZkGR2ufX9U1BZODOQ

    An ECS value of 1.97 with the raw data. Here is what happens with the H4 data with a 5-year moving average.

    https://1drv.ms/u/s!AkPliAI0REKhgZkF0pSTs_8nej9sOw

    How in the world can an ECS value of 3.0 be justified? Is it too much to ask that climate models actually correlate with real-life measurements? At least here I have presented two cases where they do.

    I think Lewis and Curry are very close to the mark. I can support what they have done.
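
    A minimal sketch of that style of sensitivity fit, assuming the slope of temperature against log2(CO2) is read as K per doubling; the series are synthetic placeholders, not the H4 or UAH analyses described above:

    ```python
    import numpy as np

    # Hedged sketch of the kind of fit described above: regress a temperature
    # anomaly series on log2(CO2 / initial CO2), so the slope reads directly as
    # K per doubling. Both series are synthetic placeholders, and a real analysis
    # would also need to handle lags and non-CO2 forcings.
    years = np.arange(1959, 2018)
    co2 = 315.0 * 1.0045 ** (years - years[0])      # placeholder concentration path
    rng = np.random.default_rng(2)
    temp = 1.8 * np.log2(co2 / co2[0]) + rng.normal(0.0, 0.08, years.size)

    slope, offset = np.polyfit(np.log2(co2 / co2[0]), temp, 1)
    print(f"fitted sensitivity: {slope:.2f} K per CO2 doubling")
    ```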

  41. Tony Leavitt

    All this talk of ECS and TCS seems malformed to me. I am having a difficult time seeing that either of these is greater than zero. I do not see how CO2 traps any heat at all; it only perhaps delays a very small amount of energy leaving the earth by a few seconds. I know this sounds implausible, but either I am under-thinking things, or climate scientists are seriously over-thinking them by muddying the waters with these abstract concepts of forcing, feedbacks, ECS, TCS, and so on.

    Here is my reasoning. Please correct me where I am wrong, if I am. I have a background in computer science and physics, and I have 32 years of experience in physics-based software modeling in the defense industry. It is not climate modeling, and I do not have a thermodynamics background, but I don’t think I am as clueless as Joe Q Public. To me, it is all about following the energy in the system. Energy is neither created nor destroyed, so it has to go somewhere.

    The earth’s surface is warmed by the sun. The surface then emits long wave energy per its emission spectrum. I estimate about 15% (+/-) is at wavelengths that will interact with CO2, which is around 13 um to 18 um. The exact amounts are not important, other than it is clear that the vast majority of the long wave energy emitted by the earth will not interact with CO2. If there was no CO2 then the IR light between 13 and 18 um would freely escape to space. However, since there is CO2, the photons in the 13 to 18 um range are absorbed, and then reemitted in some random direction. Some go upwards to space and some are returned to the surface. I think the ones that are returned to the surface are the “back radiation” emitted from CO2.

    So far so good — correct? But I never hear anyone continuing the process and following the energy. Once the IR photon from CO2 is returned to the surface it is absorbed, the photon is annihilated, and its energy is converted back to thermal heat. That heat will then be reemitted per the earth’s radiation spectrum, and there is an 85% chance (+/-) it will be at a wavelength not acceptable to CO2, and the energy escapes to space.

    It seems to me that CO2 only delays the energy from leaving the earth by a very small amount of time. The key question is how long the energy rattles around in the atmosphere, how much and how long before it is returned to the surface, and then how long before it is reemitted. These times cannot be very long – probably less than a second, and unlikely more than a few seconds. The distance to space is relatively short and the speed of light is extremely fast. Due to this, it seems to me that CO2 does not trap energy and only delays its exit by a few seconds. Since length of nighttime is enormous in comparison, then the energy state of the surface at sunrise is exactly that same whether or not CO2 is in the atmosphere. Where is the trapped energy needed to cause warming?

    • If you know how insulation works, it’s like that. The key point is that limiting the escape of heat also sets up and maintains a temperature gradient. In the case of earth that gradient is 33 degrees between the surface and top of atmosphere. If you add insulation (like GHGs) you increase that gradient. Note that for an insulator to work it only has to restrict the escape of heat somehow.

    • Oversimplified: Greenhouse gases block (better said limit) the flux escaping to space at the frequencies they absorb. You said that.

      To balance the incoming solar irradiation so that the emission to space matches, the blackbody emission at the surface has to increase via an increase in the surface temperature

      Some details: At the surface the average distance that a photon in the 13-18 um band travels is 10 m. A CO2 molecule which absorbs a photon does not emit one but gives up the energy by collision to O2 or N2. Other CO2 molecules are excited by collisions. About 1 in a million emit before being collisionally deactivated. A more complete explanation is at the link:

      http://rabett.blogspot.com/2010/03/simplest-explanation.html

      An image. Red line is without greenhouse gases, black line with, blue shading what comes from the surface

      https://twitter.com/brandonrgates/status/1014285571353501699

      • eli rabett July 4, 2018
        You said this.
        “To balance the incoming solar irradiation so that the emission to space matches, the blackbody emission at the surface has to increase via an increase in the surface temperature”

        I do not think this is quite correct; perhaps it is only semantics.

        The solar radiation presumably stays neutral for this argument?
        The CO2 if added or increased changes the radiative forcing
        Radiative forcing (measured in watts per square meter) can be estimated in different ways for different components. For solar irradiance (i.e., “solar forcing”), the radiative forcing is simply the change in the average amount of solar energy absorbed per square meter of the Earth’s area. Since the Earth’s cross-sectional area exposed to the Sun (πr2) is equal to 1/4 of the surface area of the Earth (4πr2), the solar input per unit area is one quarter the change in solar intensity.

        Here we see back radiation from the CO2 as the important extra factor, no longer solar radiation, though of course it is ultimately driven by that source. The surface temperature, and that of the adjacent meter of air, has to increase because of two incoming fluxes, that of the sun and that from the warmer CO2, O2 and N2, not just one.
        This warmer air layer gradually cools as it extends to the imaginary TOA where the outgoing energy matches the incoming energy.

        “Some details: At the surface the average distance that a photon in the 13-18 u band travels is 10 m.”
        I thought I had read that it was a lot shorter than that. Would you have an easy reference?

        Tony Leavitt,
        I sympathise, very difficult concepts and often seemingly contradictory.
        Re back IR: if you believe that clouds trap heat, then on an early evening with no sun it can stay quite warm overnight, which means “It seems to me that CO2 only delays the energy from leaving the earth by a very small amount of time.”
        is not the answer. The lag time for energy percolating through the system is different from the mechanics of heat in, heat out. Twelve hours of darkness is an eternity to a light beam but not to the vast amount of energy trapped in the sea or land surface, which can only be released slowly, not instantly.

      • Let Eli simply repeat John Nielsen-Gammon’s answer to the questions Angech raises:

        Q3: What is the mechanism by which infrared radiation trapped by CO2 in the atmosphere is turned into heat and finds its way back to sea level?

        A3: The extra heat at sea level comes from the Sun. CO2 reduces the rate at which the atmosphere loses its energy to space via infrared radiation, which in turn reduces the flow of energy from the Earth’s surface to the atmosphere.

        Energy from absorbed sunlight at sea level is transferred from the Earth’s surface to the atmosphere through direct heat exchange, evaporation, and exchange of infrared radiation. All three of these mechanisms slow down if the atmosphere is retaining extra energy, as it does if greenhouse gas concentrations increase.

        The resulting accumulation of energy from the Sun raises temperatures throughout the climate system. A warmer atmosphere is able to lose more energy to space, so the whole climate system eventually approaches a warmer equilibrium.

      • CO2 average absorption over a 3 m cell: last figure, lower right-hand corner.

        https://rabett.blogspot.com/2018/03/dear-judge-alsop-spectroscopic-basis.html

    • Tony,
      here’s how to see it with a real spectrum. This is from a textbook by Grant Petty. It is above an icefield in Alaska, during early thaw. In the looking-down spectrum, you can see the <13μ region emitting at just below 0°C. But the CO2 region emits at about 225K. The total emission is the area under the curve, so that bite hurts. The surface has to be warmer than it otherwise would be, to let the solar heat leave.

      If you increase CO2, the bite widens and deepens (deeper because the emitting level rises even higher). So again, the surface has to warm to maintain total emission. You can see the down-IR effect where, in 13-18μ, the IR comes from warm air close to the surface.

      https://s3-us-west-1.amazonaws.com/www.moyhu.org/2010/05/PettyFig8-2.jpg

      • Nick Stokes | July 4, 2018 at 5:17 pm | Reply
        ” The surface has to be warmer than it otherwise would, to let the solar heat leave.”
        “so that bite hurts.”
        Star Trek, I think.

  42. McKitrick and Christie write: “Note that Scenarios A and B also differ in their inclusion of non-CO2 forcing as well. Scenario A contains all non-CO2 trace gas effects and Scenario B contains only CFCs and methane, both of which were overestimated. Consequently, there is no justification for a post-hoc dialling down of the CO2 gas levels; nor should we dial down the associated forcing, since that is part of the model computation. To the extent the warming trend mismatch is attributed entirely to the overestimated levels of CFC and methane, that will imply that they are very influential in the model.”

    Sad. Climate sensitivity (transient or equilibrium) is a measure of our planet’s response to a forcing, measured in K/(W/m2). Yes, we often (and stupidly) convert this to K/doubling using roughly 3.7 W/m2/doubling, a value with uncertainty we often ignore. To a reasonable first approximation, what matters is how much forcing (W/m2) and warming Hansen projected, not simply the CO2 concentration Hansen anticipated. That means considering aerosol forcing too. Framing the argument in any other terms makes me fear deception (though it is always possible I misunderstand something).
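
    As a one-line sketch of that unit conversion (the sensitivity value is purely illustrative):

    ```python
    # Hedged sketch of the unit conversion described above: a sensitivity in
    # K/(W/m^2) times the forcing of one CO2 doubling (~3.7 W/m^2, itself
    # uncertain) gives the familiar "K per doubling" figure.
    lam = 0.8        # K per (W/m^2), illustrative sensitivity
    f_2xco2 = 3.7    # W/m^2 per doubling, commonly quoted value
    print(f"{lam * f_2xco2:.1f} K per CO2 doubling")
    ```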

  43. Tony Leavitt

    The emission spectra do not help, IMO. I even think flux confuses things. I understand that CO2 gets in the way. I am looking at only one little packet of energy, just enough to form one IR photon (I understand light is both a wave and a particle, but let’s keep it as a particle since this is easier to conceptualize) with a wavelength of 15 um. When that photon leaves the surface, describe to me where it goes and how long it takes to get there. I am interested in the case where there is no CO2, some CO2, and a lot of CO2. When that photon is absorbed the energy changes form, but keep following it as it moves around until it exits to space. It will exit to space; the only real question is how long it takes and where it goes along its path.

    • Any absorption of photons causes the air to warm more than when it doesn’t have this absorption. The atmosphere is therefore warmer and emits too. This whole cycle leads to a warmer surface and atmosphere. From the point of view of space, adding GHGs when none were there before reduces the thermal IR emission to space because the atmosphere is colder than the surface. This means the surface is now not able to emit all its incoming energy to space anymore, and therefore it gains energy and warms itself and the atmosphere until the balance is restored between incoming solar and outgoing thermal IR energy.

    • Hello, Tony. I don’t know whether this will help, but let’s give it a try. It’s radically oversimplified but it gets the point across, hopefully.

      Assume that at point X on a planet, there are 100 photons arriving from the Sun per unit time (we’ll ignore “per unit area” to keep this one-dimensional).

      (1) In the first case, there is no CO2 in the atmosphere, or perhaps no atmosphere at all. 100 shortwave photons are absorbed by the surface during time t, and 100 longwave photons are emitted, all of which go directly out to space, do not pass Go and do not collect $200. The temperature of the surface is given by the Stefan-Boltzmann law, based on that flux of 100 photons.

      (2) In the second case, there is a single layer of CO2 in the atmosphere, which allows shortwave (SW) photons to pass through unhindered but will absorb 50% of longwave (LW) photons, and will then emit an equal number of new photons, half of which go upward and half down:

      (2a) So now, at time t=0, 100 incoming SW photons are absorbed at the surface, and 100 LW photons are emitted back upward, just like in the first case. But now half of those upward-moving LW photons are absorbed in the single CO2 layer. So only 50 photons escape to space.

      (2b) At time t=1, another 100 SW photons arrive at the surface from space, but this time there are also 50 downward-traveling LW photons from the single CO2 layer. These 150 photons are absorbed by the surface, and 150 photons are emitted. Half of them (75) escape to space, and half (75) are absorbed in the CO2 layer.

      (2c) At time t=2, we have yet another 100 incoming freshly-minted SW photons, supplemented now by 75 back-radiated LW photons, so the surface is now absorbing (and emitting) 175 photons. Of the emitted part, half (87.5 LW photons) make it out to space, and the other half go back down again. (Wait, wtf is “0.5 of a photon”? Never mind.)

      (2d) … you see where this goes. By time t=10, the outgoing LW photons escaping to space (99.95 photons) almost match the incoming SW photons (100), meaning the climate is almost at equilibrium. But there are now a total of 199.9 photons (SW+LW) hitting the surface per unit time, and the same number leaving. Therefore, the surface temperature is higher.

      (3) Now add a *second* CO2 layer to the atmosphere, just above the first one. It also absorbs 50% of the LW photons passing through it, and emits an equal number, half of them upward (to space) and half of them downward (into the first CO2 layer, where half of *them* are absorbed, while the other half pass through and continue down to the surface).

      (4) Then add a third layer, and a fourth. Each additional CO2 layer will increase the number of photons per unit time arriving at the surface — there will always be 100 SW photons, but as you add more and more CO2 layers, more and more LW photons end up making their way back downwards instead of upwards.

      (4a) Given enough time, there will be close to the original 100 LW photons exiting the top-most layer upwards to space. But there will be a lot more LW photons bouncing around in the layers below (and at the surface).

      (4b) This in turn means that the surface temperature will be higher, and each additional CO2 layer will make it still warmer.

      Hopefully this helps, Tony. You can try sketching it out (I always find it helpful to draw when I’m thinking). The numbers (100 incoming photons, 50% absorption per CO2 layer) are irrelevant; you’ll end up with the same pattern if you start with 3745 photons and 1.2345% absorption per layer; it will just take longer to reach equilibrium.

      If none of that helps, here’s another way to think about it. Temperature in the troposphere decreases with altitude. With no CO2, the photons escape to space from the surface, where it’s warm. As you add CO2, the average height at which photons are able to escape increases, and thus the temperature of the atmosphere at that height of average emission is colder (because temperature decreases with altitude). Thanks to Stefan-Boltzmann, the lower temperature means that fewer LW photons will be emitted, so there is a gap between what is incoming and what is outgoing. This in turn means that the atmosphere (and surface) will warm up, until the temperature at the new (higher) emission height is the same as the temperature at the old (lower) emission height … at which point, the outgoing photons once again balance the incoming photons and the climate is in equilibrium, but at a higher temperature. Add still more CO2, and the height of emission rises still further, meaning that the atmosphere has to warm up again. Etc.
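
      For anyone who wants to experiment with the layer bookkeeping sketched above, here is a minimal toy implementation. It assumes each layer re-emits what it absorbs half upward and half downward, so its equilibrium numbers differ slightly from the worked example; it is illustrative bookkeeping, not a radiative-transfer code.

      ```python
      # Hedged toy sketch of the multi-layer greenhouse bookkeeping described
      # above: each layer absorbs a fixed fraction of the longwave flux passing
      # through it and re-emits the absorbed amount half up, half down.
      def equilibrium_fluxes(n_layers, absorb=0.5, solar=100.0, n_iter=500):
          """Return (surface emission, flux escaping to space) at equilibrium."""
          down = [0.0] * (n_layers + 2)   # downward LW flux just below each layer
          up = [0.0] * (n_layers + 1)     # upward LW flux just above each layer
          for _ in range(n_iter):
              up[0] = solar + down[1]                  # surface re-emits all it absorbs
              for i in range(1, n_layers + 1):         # sweep upward
                  absorbed = absorb * (up[i - 1] + down[i + 1])
                  up[i] = (1 - absorb) * up[i - 1] + absorbed / 2
              for i in range(n_layers, 0, -1):         # sweep downward
                  absorbed = absorb * (up[i - 1] + down[i + 1])
                  down[i] = (1 - absorb) * down[i + 1] + absorbed / 2
          return up[0], up[n_layers]

      for n in (0, 1, 2, 4):
          surf, toa = equilibrium_fluxes(n)
          print(f"{n} layers: surface emission {surf:6.1f}, escaping to space {toa:6.1f}")
      ```

      As in the prose version, the flux escaping to space returns to the incoming 100 “photons” at equilibrium, while the surface emission (and hence surface temperature) climbs as layers are added.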

      Best wishes,
      CG

      • Geoff Sherrington

        Climate graphs,
        At the first stage of your essay, where the 50 photons are split and half go down to eventually heat the earth, these 50 do not go up as potential heat sources to heat the atmosphere above them, which is conceptually ‘cooled’ as a consequence.
        Imagine the blanket over the body analogy. The heat trapped by the blanket warms the body, but the heat just above the blanket reduces, it cools, for no net change overall.
        The difference between this and your model is that some of the latter 50 can go straight to jail in the void, where they cannot be involved in such compensatory cooling, so if you make this distinction, you might be right.
        However, a new question follows. Those downwelling photons that are absorbed by the ground and make the round trip as successive 1/2, 1/4, 1/8, 1/16 of the original still sum to unity as you explain. But, does the typical photon, having been emitted from the earth, have enough energy to be re-absorbed and re-emitted without loss, over and over? I am more familiar with the atomic case where atoms cannot be excited unless the incoming energy is above a minimum level set by quantum concepts. Does the same apply in the molecular case? Geoff.

    • You are making a classic mistake. After absorption the energy of the photon is thermalized in a few microseconds and the system retains no memory of it other than that a small amount of heat has been added. Same for emission in the other direction.

      Photon accounting does not work because the emission is only a function of the greenhouse gas concentration, temperature and pressure of a small volume at local thermodynamic equilibrium. The same goes, more or less, for absorption, with the difference that the IR flux entering comes from emission somewhere else.

      For all practical purposes LTE means that the local volume can be characterized by a single temperature.

      • eli rabett (@EthonRaptor) | July 6, 2018 at 5:16 pm | Reply
        “You are making a classic mistake. After absorption the energy of the photon is thermalized in a few microseconds and the system retains no memory of it other than that a small amount of heat has been added.”

        The [Chemistry?] professor is confused by the physics and omits a small detail he did clarify before.
        There are two sources of energy for the CO2 molecule. Only one is the photon of IR radiation in the 13-18 um band. The other is the collision energy he mentioned above, which is effectively the heat of the atmosphere.

        ELI “A CO2 molecule which absorbs a photon does not emit one but gives up the energy by collision to O2 or N2. Other CO2 molecules are excited by collisions. About 1 in a million emit before being collisionally deactivated. ”

        So for every deactivated CO2 molecule there is an about-to-be-reactivated CO2 molecule ready to emit a photon. Now it might take a blink of the eye, an incredibly long time period for a CO2 molecule, for 1 in a million to be collisionally activated, the reverse procedure, which also occurs. Nevertheless this happens billions of times in an eye-blink. The system retains full memory of every photon in the 30-plus degrees of extra heat energy at the surface and in the atmosphere; that is why we have a greenhouse effect.
        It also explains, as I said above, the curious lag effect for what should otherwise be a straight in and out energy equation.
        The thicker and deeper a gaseous atmosphere exposed to energy that reaches the surface, the hotter the surface.
        Contrast this to a liquid covering where the temperature at depth falls away.

      • That is exactly what is meant by the phrase “the energy of the photon is thermalized”. Eli was terse, not confused.

      • ok

  44. The authors assert: “Now note that Hansen did not include any effects due to El Nino events. In 2015 and 2016 there was a very strong El Nino that pushed global average temperatures up by about HALF A DEG C, a change that is now receding as the oceans cool. Had Hansen included this El Nino spike in his scenarios, he would have overestimated 2017 temperatures by a wide margin in Scenarios A and B.”

    Grossly exaggerated handwaving???? Monthly changes in satellite data for ENSO can show much more transient warming than the annual averages used in this analysis. An El Nino is not a massive 0.5 degC distortion for annual GISS data. The larger 97/98 El Nino is barely visible as only a 0.1 degC blip! The problem is that the changes on either side of the El Nino event have been fairly different. The 1999 La Nina returned temperatures to the 1995-7 baseline, while temperatures after the 2015/6 El Nino show no signs of returning to any baseline. Temperature began rising in 2013 and that rise appears to be continuing today, with a significant blip in 15/16. The increase in trend caused by the last five years of more rapid warming is more a consequence of those five years than of just the 15/16 El Nino (which ended more than two years ago).

  45. Peter Lang

    Ontario Scraps $2 Billion Carbon Tax & Axes Green Subsidies
    https://mailchi.mp/8db664f79619/its-roll-back-time?

    Go Ontario! Lead the world to stop the rot.

  46. Nick Stokes | July 4, 2018 at 1:41 am |
    “Yet again, scenarios are not predictions – they don’t have either error or bias. They are just an example of what you think might happen. ”

    “That being the case, HFLRLRRS were pretty good on both. Back in 1988 Hansen described the choice of scenarios:

    For the future, it is difficult to predict reliably how trace gases will continue to change. In fact, it would be useful to know the climatic consequences of alternative scenarios. So we have considered three scenarios for future trace gas growth, shown on the next viewgraph.

    Scenario A assumes the CO2 emissions will grow 1.5 percent per year and that CFC emissions will grow 3 percent per year. Scenario B assumes constant future emissions. If populations increase, Scenario B requires emissions per capita to decrease.

    Scenario C has drastic cuts in emissions by the year 2000, with CFC emissions eliminated entirely and other trace gas emissions reduced to a level where they just balance their sinks.

    These scenarios are designed specifically to cover a very broad range of cases. If I were forced to choose one of these as most plausible, I would say Scenario B. My guess is that the world is now probably following a course that will take it somewhere between A and B”

    So Hansen thought they actually were predictions, Nick.
    And that the outcome would be somewhere between A and B.

    • “These scenarios are designed specifically to cover a very broad range of cases. If I were forced to choose one of these as most plausible, I would say Scenario B.”
      Does that sound like someone making a prediction? Everything you have quoted says the contrary.

      So which was the prediction, A, B or C? And if, say, B, why is he analysing A and C if he has predicted B?

      • So which was the prediction, A, B or C? And if, say, B, why is he analysing A and C if he has predicted B?

        Perhaps because he was an original merchant of doubt?

        Though, to be fair, as discussed, CFCs were a large part of the RF trend in 1988. This was before the Montreal Protocol, so the annual rate of RF increase was growing exponentially at the time of the testimony:
        http://climatewatcher.webs.com/Forcing3.png

        Since then, the rate of RF increase has decelerated; RF is now rising at a nearly constant rate, with a slow, demographically indicated deceleration likely.

        RF was less than, but closest to Scenario B (orange):
        http://climatewatcher.webs.com/ABC_RF10.png

        Temperature response was greater than, but closest to, Scenario C (yellow):
        https://i1.wp.com/turbulenteddies.files.wordpress.com/2018/06/hansen_2017.png

        This is good news.

        Both forcing and response were less than expected.

        It is also a tell for those with negativity bias. Because the real merchants of doubt (those exaggerating global warming) promulgated a “worse than expected” meme, they cannot abide the observations of less-than-expected global warming.

      • So which was the prediction, A, B or C? And if, say, B, why is he analysing A and C if he has predicted B?
        Each scenario is a prediction, Nick.
        Of what would happen given a particular set of circumstances.
        He was not foolish enough to predict human activity over this time, just what would happen given a particular level of CO2 and other emissions.
        So many predictions.
        So many wrong.

      • New York Times, 1988:

        https://i.imgur.com/GtoI13j.png

      • Each scenario was a possibility. Some are more possible than others, and Hansen clearly said that he thought B was the most likely. He was talking before the Montreal Protocol, so he thought that increasing concentrations of CFCs were going to be a strong forcing.

        Is every outcome in a prior distribution a prediction? That’s not even nonsense.

      • Both forcing and response were less than expected.

      • maksimovich1

        ER
        Each scenario was a possibility. Some are more possible than others and Hansen clearly said that he thought that B was the most likely. He was talking before the Montreal Protocols so he thought that increasing concentrations of CFCs were going to be a strong forcing.

        CFC forcing is troublesome, as it is not significant at the 90% level, and is a fine example of Wittgenstein’s ruler; e.g. Previdi 2014:

        “Stratospheric ozone and GHG forcings also have very different time histories during the twentieth and twenty-first centuries. The GHG forcing increases monotonically over this entire period. In contrast, stratospheric ozone forcing starts to become significant only in the 1970s, and changes sign around the year 2000 as ozone depletion transitions to ozone recovery (in response to reduced anthropogenic halogen loading). Finally, the magnitude of the global annual mean radiative forcing due to increasing GHGs is substantially greater than that due to stratospheric ozone depletion. The former is the largest anthropogenic forcing, with an estimated magnitude (in 2005 relative to pre-industrial) of 2.6 W m−2 (Forster et al., 2007). In contrast, the global mean stratospheric ozone forcing is not even significantly different from zero at the 90% confidence level (Table 1). This serves to illustrate an important point: although the global annual mean radiative forcing is a widely used predictor of climate change, this metric may not be suitable in cases where the forcing agent is unevenly distributed in space and/or season. Stratospheric ozone depletion is thus a prime example of an external perturbation for which the global mean radiative forcing is a very poor indicator of the associated climate-system response.”

      • Re: “Both forcing and response were less than expected.”

        The response per unit of forcing matched the prediction. That is the point you continue to dodge:

        https://twitter.com/hausfath/status/1010240656004927491

        Re: “This is good news. Both forcing and response were less than expected.”

        It’s good news, in precisely the way that most people on your side don’t want it to be. Or do you really think most people on your side want to admit to the “good news” of the effects of international agreements like the Montreal Protocol?

        Re: “It is also a tell for those with negativity bias. Because the real merchants of doubt ( those exaggerating global warming ) promulgated a “worse than expected” meme, they cannot abide the observations of less than expected global warming.”

        Don’t you ever get tired of repeating that long-debunked talking point of yours?

        For instance, IPCC tends to under-estimate the impacts of climate change, which runs contrary to the charge of alarmist exaggeration:

        https://www.scientificamerican.com/article/how-the-ipcc-underestimated-climate-change/

        “Climate Change Skepticism and Denial: An Introduction
        […]
        A constant refrain coming from the denial campaign is that climate scientists are “alarmists” who exaggerate the degree and threat of global warming to enhance their status, funding, and influence with policy makers. The contribution by William Freudenburg and Violetta Muselli provides an insightful empirical test of this charge and finds it to lack support.”

        And this is some of the relevant supporting research on this point:

        “Reexamining Climate Change Debates: Scientific Disagreement or Scientific Certainty Argumentation Methods (SCAMs)?”
        “Climate change prediction: Erring on the side of least drama?”
        “Global warming estimates, media expectations, and the asymmetry of scientific challenge”

        Furthermore, the IPCC’s tone tends to be more tentative and less “alarmist”, with sufficient attention paid to uncertainty:

        “The language of denial: Text analysis reveals differences in language use between climate change proponents and skeptics”
        “Comment on “Climate Science and the Uncertainty Monster” by J. A. Curry and P. J. Webster”
        “Guidance note for lead authors of the IPCC Fifth Assessment Report on consistent treatment of uncertainties”

  47. Pingback: 30 Years on, Global Warming Godfather 'Significantly Overstated Warming' | PSI Intl

  48. Pingback: Deniers Fail to Blunt Hansen’s Accurate Climate Forecast | Climate Denial Crock of the Week

  49. The extent to which some adult humans are obsessed with IR absorption and heat forcing in the atmosphere as if it’s almost exclusively due to carbon dioxide, methane, and a few other trace gases is a clear indication we are not a highly evolved species.

    There is so much wrong with this assumption it is difficult to know where to begin in setting the record straight. Let’s just admit it’s more complicated than Hansen argued.

    Imagine you’re an IR photon emanating from the surface of the earth. What are the odds (probability) that you encounter and are absorbed by a carbon dioxide molecule? A methane molecule? An aerosol? A water molecule, droplet, or ice crystal? Now double the number of carbon dioxide molecules and recalculate. Of course, the probabilities would vary with location, time of day, season, latitude, and numerous other variables, but on average it wouldn’t make any difference, because carbon dioxide is not a very important GHG.

    At 50 percent relative humidity and 298 K there are 24 molecules of water vapor in the atmosphere for every one of carbon dioxide (a rough check of this ratio is sketched below). Now add clouds, ice crystals, and particulates and consider the chances that an IR photon is absorbed by carbon dioxide. Now consider the IR absorption spectrum of carbon dioxide, a linear molecule which absorbs at very few IR wavelengths, compared to water, a bent molecule which absorbs IR at several wavelengths, including almost all the bands absorbed by carbon dioxide.

    And this is just a small example of the foolishness of this obsessiveness. Just roll your eyes at the catastrophists. I’m 75 years old. I just wish I could live long enough to observe the rude awakening of these folks when the continental ice sheets return in spite of our best efforts.
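
The 24-to-1 ratio quoted above can be checked to first order with the Magnus approximation for the saturation vapor pressure of water. The surface pressure (1013.25 hPa) and CO2 mixing ratio (410 ppm) below are assumptions made for the check, and the result depends strongly on temperature and humidity.

```python
import math

# Order-of-magnitude check: water vapor vs CO2 molecules near the surface.
# Assumptions: 1013.25 hPa surface pressure, 410 ppm CO2, Magnus approximation.
T_c = 25.0              # air temperature, deg C (about 298 K)
rel_humidity = 0.5      # 50 percent relative humidity
co2_ppm = 410.0

e_sat = 6.1094 * math.exp(17.625 * T_c / (T_c + 243.04))   # saturation vapor pressure, hPa
e_vap = rel_humidity * e_sat                                # actual H2O partial pressure, hPa
h2o_ppm = 1.0e6 * e_vap / 1013.25                           # H2O mole fraction in ppm

print(f"H2O ~ {h2o_ppm:,.0f} ppm; about {h2o_ppm / co2_ppm:.0f} H2O molecules per CO2 molecule")
```

This gives roughly 15,000–16,000 ppm of water vapor, i.e. about 35–40 water molecules per CO2 molecule under these surface conditions, the same order as the 24:1 figure quoted; the ratio drops sharply with altitude and in cold, dry air.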

    • Sometimes I wonder if our descendants, some years down the road, will intentionally add water vapor and carbon dioxide to the atmosphere in order to stave off the eventual end of the Holocene.

  50. For the greenhouse theory to operate as advertised requires a GHG up/down/”back” LWIR energy loop to “trap” energy and “warm” the earth and atmosphere.

    For the GHG up/down/”back” radiation energy loop to operate as advertised requires ideal black body, 1.0 emissivity, LWIR of 396 W/m^2 from the surface. (K-T diagram)

    The surface cannot do that because of a contiguous participating media, i.e. atmospheric molecules, moving over 50% ((17+80)/160) of the surface heat through non-radiative processes, i.e. conduction, convection, latent evaporation/condensation. (K-T diagram) Because of the contiguous turbulent non-radiative processes at the air interface the oceans cannot have an emissivity of 0.97.

    No GHG energy loop & no greenhouse effect means no CO2/man caused climate change and no Gorebal warming.

    https://www.linkedin.com/feed/update/urn:li:activity:6394226874976919552
    http://www.writerbeat.com/articles/21036-S-B-amp-GHG-amp-LWIR-amp-RGHE-amp-CAGW
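
Whatever one makes of the argument above, the 396 W/m^2 figure itself is simply the Stefan–Boltzmann flux for a surface near 289 K. The short check below uses the standard constant, with the 0.97 emissivity case included for comparison; the temperature and emissivity values are illustrative assumptions.

```python
# Stefan-Boltzmann check of the ~396 W/m^2 surface flux in the Kiehl-Trenberth diagram.
SIGMA = 5.670374419e-8            # W m^-2 K^-4, Stefan-Boltzmann constant

def radiant_flux(T_kelvin, emissivity=1.0):
    """Thermal radiative flux of a grey surface."""
    return emissivity * SIGMA * T_kelvin ** 4

T_surface = 289.0                 # K, roughly the global mean surface temperature (~16 C)
print(f"emissivity 1.00: {radiant_flux(T_surface):.0f} W/m^2")
print(f"emissivity 0.97: {radiant_flux(T_surface, 0.97):.0f} W/m^2")
```

At emissivity 1 this returns about 396 W/m^2 and at 0.97 about 384 W/m^2, so the number in the diagram is just the blackbody flux implied by the mean surface temperature.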

  51. How can this be? Here is one possibility. Suppose Hansen had offered a Scenario D, in which greenhouse gases continue to rise, but after the 1990s they have very little effect on the climate. That would play out similarly in his model to Scenario C, and it would match the data.

    Which is to say that Dr. Timothy Ball has the better hypothesis; e.g., “The relationship between temperature and CO2,” according to Ball, “is like painting a window black to block sunlight. The first coat blocks most of the light. Second and third coats reduce very little more. Current CO2 levels are like the first coat of black paint.”
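
The “coats of paint” analogy is a claim about saturation, and it can be compared with the standard simplified forcing expression ΔF ≈ 5.35 ln(C/C0) W/m^2 (Myhre et al. 1998), which is logarithmic but never fully saturates. The concentration steps below are illustrative choices, not values from the comment.

```python
import math

# Forcing increments from the simplified expression dF = 5.35 * ln(C/C0) W/m^2
# (Myhre et al. 1998). The 60 ppm concentration steps are illustrative.
def co2_forcing(c_ppm, c0_ppm):
    return 5.35 * math.log(c_ppm / c0_ppm)

steps = [280, 340, 400, 460, 520, 580]          # ppm
for lo, hi in zip(steps, steps[1:]):
    print(f"{lo} -> {hi} ppm: +{co2_forcing(hi, lo):.2f} W/m^2")

print(f"Any doubling: +{co2_forcing(2.0, 1.0):.2f} W/m^2")
```

Each equal addition of CO2 yields a smaller increment (about 1.0 W/m^2 for 280 to 340 ppm, falling to about 0.6 W/m^2 for 520 to 580 ppm), but every doubling yields the same ~3.7 W/m^2, so the effect diminishes without shutting off the way a second coat of paint would.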

  52. Pingback: ‘Father of Global Warming’ Scientist Finally Admits Theory Is Wrong – Nwo Report

  53. “The bottom line is, climate science as encoded in the models is far from settled.”

    Dear friends,
    Of course something in the models is fundamentally wrong:

    1. First error: the models should be able to reproduce the past climate observations accurately, namely those that can be derived from sampling of the ocean floor, and none of the existing models currently does so;
    2. The behaviour of the different gases that compose the atmosphere should be correctly accounted for; in particular, the CO2 measurements at Mauna Loa should be attributed not to the altitude of the observatory but to the distance of the measuring apparatus from the nearby ground (6–8 meters rather than 3,397 m).
    3. The reason for no. 2 is simple: CO2 is much heavier than N2, O2 and H2O and accumulates near the ground irrespective of altitude, because when it cools it drops out of the heat plumes at very long distances from the emission points (whether natural thermal convection from the soil or fumes from combustion processes)!!!

    Let’s try to do better in modelling, because the forecast is important to our understanding of the Earth.

    Alexandre

  54. nobodysknowledge

    I assume that Hansen’s scenarios are based on his GISS model. As far as I know, this model in 1988 had a climate sensitivity of 5 or 6 deg C per doubling of CO2. Not strange that he got far away from reality.
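
To see how a model’s sensitivity translates into an expected warming over the forecast interval, one can combine the simplified forcing expression with an assumed fraction of equilibrium warming realized by 2017. The sketch below is illustrative only: the Mauna Loa values are approximate, the 60% realized fraction is an assumption, non-CO2 forcings are ignored, and the 4.2 C value is the sensitivity commonly cited for the 1988 GISS model, shown alongside lower values for comparison.

```python
import math

# Rough translation of climate sensitivity into CO2-only warming over 1982-2017,
# using dF = 5.35*ln(C/C0) and an assumed fraction of equilibrium warming realized.
CO2_1982 = 341.0          # ppm, approximate Mauna Loa annual mean
CO2_2017 = 407.0          # ppm, approximate Mauna Loa annual mean
F_2X = 5.35 * math.log(2.0)        # ~3.7 W/m^2 per doubling
realized_fraction = 0.6            # assumption: transient response ~60% of equilibrium

dF = 5.35 * math.log(CO2_2017 / CO2_1982)
for sensitivity in (1.8, 3.0, 4.2):            # deg C per doubling of CO2
    dT = realized_fraction * sensitivity * dF / F_2X
    print(f"S = {sensitivity} C per doubling -> ~{dT:.2f} C from CO2 alone, 1982-2017")
```

Under these assumptions the CO2-only expectation ranges from roughly 0.28 C to 0.64 C over the 35 years, i.e. roughly 0.08 to 0.18 C per decade, which is why the assumed sensitivity matters so much for the comparison with observed trends.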

    • Also not so strange that the past few years are above the center line.

      Wow. He published a paper in 1981, and managed to get things so well that people have to pretend the reason many of the most recent yearly temps are ABOVE the center line is El Nino.

      The 1988 paper also did well, with 2016 ABOVE scenario B. Interesting, and very telling. No wonder scientists look at that and applaud, and no wonder “skeptics” are left with cherry picking.

  55. Where are the NOAA whistleblowers?

    https://i.imgur.com/Sq1WX2M.png

  56. Pingback: Lo scienziato "Padre del riscaldamento globale" alla fine ammette che la teoria è sbagliata : Attività Solare ( Solar Activity )

  57. Well, as I commented before, this comment thread is, so far as I can tell, nit-picking, a lot of it trying to soften the fact that Hansen’s model was a lot more sensitive to greenhouse gas forcings than recent observationally based estimates indicate. So it’s not surprising that Hansen’s predicted rates of temperature change are too large, by a lot.

    It’s harder to get a reliable warming rate from a 30-year period than from periods separated by a hundred years. That’s why, of course, Hansen’s testimony was based on pretty shaky scientific foundations from the beginning. If he had shown some of the uncertainties, he might have had a better chance of at least having the observations fit within his uncertainty range.
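
The point about short records can be made quantitative: for a least-squares trend through noisy annual data, the standard error of the slope shrinks roughly as the record length to the power 3/2. The sketch below assumes independent annual noise with a standard deviation of 0.1 C, which is a simplification; real interannual variability is autocorrelated, which inflates these uncertainties.

```python
import numpy as np

# Standard error of an OLS trend versus record length, assuming independent
# annual noise with sigma = 0.1 C (a simplification; autocorrelation widens this).
sigma = 0.1
for n_years in (10, 30, 100):
    t = np.arange(n_years, dtype=float)
    se_slope = sigma / np.sqrt(np.sum((t - t.mean()) ** 2))   # standard OLS result
    print(f"{n_years:3d} years: trend standard error ~ {10 * se_slope:.3f} C/decade")
```

With these assumptions the one-sigma uncertainty falls from about 0.11 C/decade for a 10-year record to about 0.02 C/decade for 30 years and 0.003 C/decade for 100 years, which is the sense in which a 30-year comparison is informative but still noisier than a century-scale one.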

  58. Well JCH, your orange line looks like what Schmidt recommends. The blue line is what he explicitly says is not appropriate. It’s the trends that matter, not the current value.

    Steve McIntyre had a brilliant observation about climate scientists: they plot annual values when the temperature is decreasing (so the recent decline is not visible) and monthly values when it is increasing (to show a higher peak).

    But the whole thing, it seems to me, is hopelessly muddled by pseudo-science.

  59. Flood damage costs under the sea level rise with warming of 1.5 °C and 2 °C

    Abstract

    We estimate a median global sea level rise up to 52 cm (25–87 cm, 5th–95th percentile) and up to 63 cm (27–112 cm, 5th–95th percentile) for a temperature rise of 1.5 °C and 2.0 °C by 2100 respectively. We also estimate global annual flood costs under these scenarios and find the difference of 11 cm global sea level rise in 2100 could result in additional losses of US$ 1.4 trillion per year (0.25% of global GDP) if no additional adaptation is assumed from the modelled adaptation in the base year. If warming is not kept to 2 °C, but follows a high emissions scenario (Representative Concentration Pathway 8.5), global annual flood costs without additional adaptation could increase to US$ 14 trillion per year and US$ 27 trillion per year for global sea level rise of 86 cm (median) and 180 cm (95th percentile), reaching 2.8% of global GDP in 2100. Upper middle income countries are projected to experience the largest increase in annual flood costs (up to 8% GDP) with a large proportion attributed to China. High income countries have lower projected flood costs, in part due to their high present-day protection standards. Adaptation could potentially reduce sea level induced flood costs by a factor of 10. Failing to achieve the global mean temperature targets of 1.5 °C or 2 °C will lead to greater damage and higher levels of coastal flood risk worldwide.

    • They need a parallel analysis of the costs of subsidence by 2100, especially in those cities where skyscrapers over 100 stories are being built even after the subsidence rates were identified. This is not a one-pony show.

      • “Global mean sea level rise is combined with estimates of vertical land movement due to GIA (Peltier 2004), plus subsidence or uplift in deltaic regions (39 locations with rates from Ericson et al (2006) and a further 78 where 2 mm yr−1 of subsidence was assumed).”

        You mean like that? Or are you insisting that a subsidence study be separate and not included in the paper referenced?

      • Not really.

    • thx for this ref

    • “Adaptation could potentially reduce sea level induced flood costs by a factor of 10.”

      Let’s adapt. It’s the more valuable response. It’s more under our control. The worse the flood damage, the more the weakness of mitigation is highlighted. High sensitivity, standard government ineptness and the power of big oil all mean adaptation is the better response.

      At some point we more or less give up on ineffective mitigation and devote resources into something with a better return. At some point the unicorns are dismissed as fictional.

      • Adaptation requires changing planning to allow for fast-changing aspects of climate, so that future unprecedented events do not become disasters. Adapting planning first requires an acknowledgement of the changing conditions, and I don’t think the “just adapt” people are quite there yet. They want to adapt to disasters after they happen, like a blind person stumbling onto the next step without ever realizing it is not the last one.

    • Peter Lang

      The estimated sea level rise is much higher than in other projections.

      Strange that these two papers are not cited:
      Anthoff et al. (2009). The economic impact of substantial sea-level rise https://link.springer.com/article/10.1007%2Fs11027-010-9220-7
      See Figures 2 and 11.

      Tol (2013). The economic impact of climate change in the 20th and 21st centuries https://link.springer.com/article/10.1007/s10584-012-0613-3
      Figure 3 shows that the global economic impact of sea level rise, with adaptation, is negligible up to around 4C warming.

  60. Pingback: 'Father of Global Warming' Scientist Finally Admits Theory Is Wrong | PSI Intl


  62. Pingback: ‘Father of Global Warming’ Scientist Finally Admits Theory Is Wrong - Economía y Libertad

  63. Pingback: McKitrick & Christy: The Hansen Forecasts 30 Years Later | The Global Warming Policy Forum (GWPF)

  64. Already as of 2006 I shared my findings with the community via the Climate Audit blog, but the blog people refused to reproduce these fine findings.
    This is my original 2005 thesis in the ITIA-NTUA library (National Technical University of Athens), in Greek:
    https://www.itia.ntua.gr/en/docinfo/680/
    and this is a more recent paper of 2014 in the same ITIA-NTUA library, in English, explaining the sun–climate connection phenomena:
    https://www.itia.ntua.gr/en/docinfo/1486/
    In my blog I provide links to my more recent papers.
    http://dimispoulos.wixsite.com/dimis
    Climate modelling cannot be accurate if change is not attributed to the correct phenomena.
    If you do not spread the truth and scientifically documented discoveries, you cannot solve anything; you only preserve a vicious state.

  65. Pingback: Energy & Environmental Newsletter: July 30, 2018 - Master Resource

  66. Pingback: Energy And Environmental Newsletter – July 30th 2018 | PA Pundits - International

  67. Pingback: ,Vater der globalen Erwärmung‘ gibt endlich zu: Theorie ist falsch! – EIKE – Europäisches Institut für Klima & Energie

  68. Pingback: Klimaat en nepnieuws -