Controversy over comparing models with observations

by Judith Curry

My draft talk elicited an interesting conversation on Twitter that deserves some wider discussion.

The figure in question is from John Christy's recent congressional testimony:

[Figure: John Christy's model vs. observation comparison, from his congressional testimony]

Gavin Schmidt tweeted:  TMT (trop + glob) comparisons to CMIP5 models on a reasonable baseline, and provided these figures

[Figure: Gavin Schmidt's TMT comparison (1 of 2)]

[Figure: Gavin Schmidt's TMT comparison (2 of 2)]

My reaction was that these plots look nothing like Christy's plot, and it's not just a baseline issue.

Fernando Leanme tweets: @ClimateOfGavin @curryja I’m happy because you use CMIP5 RCP4.5

Gavin followed up with this tweet:  @curryja use of Christy’s misleading graph instead is the sign of partisan not a scientist. YMMV.

Gavin tweets: @curryja Hey, if you think it’s fine to hide uncertainties, error bars & exaggerate differences to make political points, go right ahead.

JC responds: @ClimateOfGavin says the king of overconfidence. What political point? A true partisan sees politics in all that disagrees with him

Gavin tweets:  @curryja The only place Christy has ‘published’ his figure is in his testimony to Congress. You think that isn’t political?

JC responds:  @ClimateOfGavin I’ve decided to leave out that figure, I haven’t seen a figure that I am confident of using at this point

Chip Knappenberger tweets: @ClimateOfGavin @curryja Should also include a comparison of the trends (to avoid accusations). Something like this:

[Figure: Knappenberger's model vs. observation trend comparison]

Chip Knappenberger tweets: @ClimateOfGavin @curryja Is there a “right” answer when it comes to what baseline to use? [refers to blog post by Ed Hawkins] http://www.climate-lab-book.ac.uk/2015/connecting-projections-real-world/

JC tweets @PCKnappenberger @ClimateOfGavin baseline is a subjective decision, related to the scientific point you want to make.

JC tweets: @PCKnappenberger @ClimateOfGavin The key point is the difference in trends. How to explain this in brief public presentation is a challenge

Gavin tweets: @curryja @PCKnappenberger If you want to show trends, just show trends – incl ens spread & uncertainties in data

[Figure: Gavin Schmidt's histogram of model trends, with ensemble spread and observational uncertainties]

JC responds:  @ClimateOfGavin @PCKnappenberger sorry, such a graph is incomprehensible to a non technical audience with 1 minute per slide

Gavin responds: @curryja If you want a talk for non-technical audiences you should start over and get rid of all the graphs. @PCKnappenberger

JC responds: @ClimateOfGavin @PCKnappenberger they can understand graphs, they like to see data, understand correlations & visual diff btwn models/obs

Gavin responds: @curryja Then they can read a histogram #specialpleading
But why don’t you just make your own figures if these are not ok?
@PCKnappenberger

JC reflections

A whole host of interesting issues are raised in this exchange:

  • How to communicate complex data to a non-technical audience?
  • How best to compare model predictions against observations?
  • What are the most reliable sources of such plots to use in public presentations?

I'll start with the third question. When selecting figures to use in presentations or testimony, I look for the most credible figures available. I use figures from the most recent IPCC assessment where possible. A second choice is a figure from the published literature. However, audiences for public presentations and testimony want the most up-to-date analysis, with the latest observations (which aren't yet in peer-reviewed publications, owing to research-publication time lags). Hence I have often used Ed Hawkins' update of figure 11.25 from the AR5 comparing model projections and surface observations (Ed was the author of fig 11.25).

With regards to John Christy's figure, he is the author of one of the main observational data sets used in the comparison. I don't know the source of the time series that Gavin provided, but the observations in Gavin's figure vs. Christy's figure do not look similar in terms of time variation. I have no idea how to explain this. I have to say that I think John Christy's figure is more reliable, although some additional thought could be given to how the beginning reference point is defined, to eliminate any spurious influence from El Nino or whatever.

Why don't I draw my own figures for such presentations? Apart from the lack of time and of artistic skill in making such plots look nice, I regard published diagrams, or diagrams made by the originators of the data sets, as having a higher credibility rank, as well as being a source of analysis independent of the person summarizing the information (i.e. me).

The issue of comparing models to observations has been hashed out here (Spinning the climate model – observation comparison Part I, Part II, Part III) and at other blogs (e.g. Lucia's, etc.). How to compare depends on the point you are trying to make. For example, in Gavin's first time series plot, I am not sure what the point is of comparing to scenario RCP4.5. I don't see that the baseline matters that much if you are mainly comparing trends.
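A quick way to see the baseline point: re-baselining subtracts a constant from a series, which shifts it vertically but cannot change the fitted slope. A minimal sketch with synthetic numbers (nothing here is taken from either figure):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2016)
series = 0.02 * (years - 1979) + rng.normal(0, 0.1, years.size)  # synthetic anomaly series

def trend_per_year(y):
    """OLS slope of y against years, in units per year."""
    return np.polyfit(years, y, 1)[0]

# Re-baseline to two different reference periods
base_a = series - series[(years >= 1979) & (years <= 1983)].mean()
base_b = series - series[(years >= 1981) & (years <= 2010)].mean()

# All three slopes are identical: subtracting a constant cannot change a trend.
print(trend_per_year(series), trend_per_year(base_a), trend_per_year(base_b))
```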

I really like the histogram, this conveys exactly the point I wanted to make, although explaining this to a nontechnical audience in ~1 minute is pretty hopeless.  I also like Ed Hawkins' figure 11.25.

A final comment on Twitter vs blogs. I find Twitter to be a great source of links, but very frustrating for conducting a conversation such as this one.

326 responses to “Controversy over comparing models with observations”

    • everyone seems fine with my Fig 7. While Ed's figure is not published, he was the author of the AR5 Fig 11.25, so personally I regard this as highly credible. Ed is an honest scientist who does not seem to have been tarred with the 'skeptic' brush by the 'establishment', although he does engage with skeptics. He seems to be in a 'category' with Tamsin.

      • I would suggest you use all the charts and briefly explain them. If the point is to show that the transient response is on the low end, then I believe the message will get through. I know you want simpler, not complex, but if Gavin is kind enough to treat you as presenting science, then covering all bases would be the most honest and scientific approach.

      • Judith, you can use the histogram. I show how to sufficiently explain it to laypersons in about 30 seconds in a comment below. You just need a computer pointer or, preferably, a laser pointer. Depends on the talk details.

      • Gavin used the RCP4.5 projections, which is fine, except that for the most scary scenarios folks use RCP8.5, which is nearly twice the forcing. And the error-bar envelope is also fairly large. Again, this is fine, as it shows the models are all over the place and can be off by half a degree in either direction. Even in the past, the spread is +/- 0.25 degrees. Seems they like error bars a lot when trying to show the models are not running away from real temperatures, but not so much for anything else.

    • That is the one I was going to suggest. He has an update on his website. He seems to be pretty fair and square.

    • David L. Hagen

      Judy
      Please use Christy’s Chart as it is the most significant.
      It actually applies the scientific method of comparing the diagnostic predicted mid-tropospheric tropical temperature anthropogenic “hot spot” with the actual data – and shows the models fail by predicting 300% too much warming.

      Baselines: Christy zeros ALL lines through the same base point, 1979.
      “Trend line crosses zero at 1979 for ALL time series.”
      That’s the simple beginning of the satellite era. No 1998 El Nino issue.
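The baselining operation described here (shifting each series so that its fitted trend line passes through zero in 1979) can be sketched as follows; this is a reconstruction of the description above, not Christy's actual code:

```python
import numpy as np

def zero_trend_at(years, y, ref_year=1979):
    """Shift a series so that its OLS trend line crosses zero at ref_year."""
    slope, intercept = np.polyfit(years, y, 1)
    return y - (slope * ref_year + intercept)

years = np.arange(1979, 2016)
y = 0.03 * (years - 1979) + np.random.default_rng(1).normal(0, 0.1, years.size)
y0 = zero_trend_at(years, y)
slope, intercept = np.polyfit(years, y0, 1)
print(slope * 1979 + intercept)  # ~0: the trend line now passes through zero in 1979
```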

      Gavin Hides the mid-tropospheric temperature Divergence
      Gavin fudges by plotting the results at half height to make this huge difference look smaller. Showing the large variation in annual data also buries the differences in the noise, making the very large divergence in the trends seem smaller.

      Histogram 95% bounds:
      On the histogram slide, please clearly show the 95% bounds with vertical lines about the distribution curve. This clearly shows that the UAH and balloon data are at or below the 95% spread of the models. That further confirms the models are way off, too hot.

      On “Implications for the Future”, please add:
      “Will Climate Cool into the Next Glaciation”?
      With models off by 300%, we have little guidance on how much global warming we need to generate to stop our descent from the Holocene Climatic Optimum into the next Glaciation.
      MIT Prof. Frank Cocks writes:

      In about 300 years, all available fossil fuels may well have been consumed. Over the following centuries, excess carbon dioxide will naturally dissolve into the oceans or get trapped by the formation of carbonate minerals. Such processes won’t be offset by the industrial emissions we see today, and atmospheric carbon dioxide will slowly decline toward preindustrial levels. In about 2,000 years, when the types of planetary motions that can induce polar cooling start to coincide again, the current warming trend will be a distant memory.

  1. Recall that Christy plots 5-yr moving averages. Gavin plots annual data.

    • Good point. I prefer annual numbers, perhaps with a 5-yr Hamming filter (moving averages can alias).
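The difference between a flat moving average and a Hamming-weighted smoother is just the window weights; the Hamming window suppresses the high-frequency leakage that a boxcar's sidelobes let through (the aliasing alluded to above). A minimal sketch, assuming annual values in a NumPy array:

```python
import numpy as np

def smooth(y, window):
    """Convolve y with a normalized window; 'valid' mode avoids inventing end data."""
    w = window / window.sum()
    return np.convolve(y, w, mode="valid")

y = np.random.default_rng(2).normal(size=37)   # stand-in for annual anomalies 1979-2015
boxcar = smooth(y, np.ones(5))                 # flat 5-yr moving average
hamming = smooth(y, np.hamming(5))             # Hamming-weighted 5-yr filter
```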

      • David L. Hagen

        Gavin uses almost twice the scale of Christy to hide the divergence, e.g. 3.2 C vs. 1.8 C.
        Knappenberger's chart (2nd last in the post) clearly shows the satellite trends outside the 95% range (2.5 to 97.5%) of the models for periods of 20 to 40 years. However, that is not as graphic and obvious as Christy's easy-to-understand graph.

    • Monthly data with 5 year (6 db down) LPF
      http://www.vukcevic.talktalk.net/UAH-GT.gif

    • 5-year moving averages. Well. How is it possible to get 5-year moving averages for satellite data centered on 1979 when those data start in 1979? Did he invent pre-1979 data?

      On the other side: how can he get 2015-centered values? How does he get satellite data from the future?

      More reliable, says Curry…

      And as Christian comments in this thread: how can he get model data for TMT from Climate Explorer when Climate Explorer does not provide TMT data?

      More reliable…

      • Look at the plots – it’s five year moving averages of the model data, not the obs.

      • I think John plots 5-yr centered averages when available, and averages of partial 5-yr periods otherwise. That is, the first observed point is the 1979-1981 avg, the 2nd point is the 1979-1982 avg, the 3rd point is the 1979-1983 avg, and so on. The same thing happens at the end, where the data trail off. This is a guess, though, based on visual comparison (i.e., I didn't confirm this with John).

        As far as data from Climate Explorer goes, I think the model output for the temperature at the standard atmospheric levels is available via the “taz” data field.

        -Chip
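Chip's reading (a centered 5-year window, clipped to whatever data exist near the ends) is straightforward to express. A sketch of that interpretation, reconstructed from the comment rather than obtained from Christy:

```python
import numpy as np

def clipped_centered_mean(y, half=2):
    """Centered (2*half+1)-point average, truncating the window at the series ends."""
    n = len(y)
    return np.array([y[max(0, i - half): i + half + 1].mean() for i in range(n)])

y = np.arange(37, dtype=float)       # stand-in for annual values, 1979-2015
print(clipped_centered_mean(y)[:3])  # 1st point averages 1979-81, 2nd 1979-82, 3rd 1979-83
```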

      • afonzarelli

        Chip, when Dr Spencer uses the same plot he always begins with 1983. (and presumably the last five years for the final data point…)

      • TE: Seems like you are the one in need of looking at the plots:

        http://i.imgur.com/GpKQTWE.png

        Christy is using non-existent “observations” at each end.

        Presented at hearings in the US Congress…

      • Chip K thinks.

        But he, like others, has no idea what Christy actually has done.

        Show us what the taz button produces, Chip. And what taz actually represents.

      • ehak,

        Well, I have an “idea” as to what he did, which is what I described. And the “taz” field at Climate Explorer advertises that it contains temperatures at various atmospheric levels.

        But, I wasn’t sitting next to Christy when he developed his plots, so I don’t know precisely what he did.

        -Chip

      • Chip says he “wasn’t sitting next to Christy when he developed his plots, so I don’t know precisely what he did.”

        Well. That is actually the point. You don't know. Nobody knows. Totally undocumented. But presented as established fact to Congress. As are his 5-year running means with invented data.

        More reliable…

      • From the article:

        Now, see the text on the graph about how the warming *TRENDS* are almost always greater in the models than the observations?

        Well, the difference in trends between models and observations is not affected by any of the 5 objections listed above.

        It doesn’t matter how you plot the data with vertical offsets, or different starting points: these issues do not affect the trends, and trends are probably the single most important statistical metric to test the models against observations.

        http://www.drroyspencer.com/2015/11/models-vs-observations-plotting-a-conspiracy/

      • I checked with John Christy and he did what I presumed he did.

        -Chip

  2. Again, as I did on Twitter, I warn about Gavin's error bars on the observed trends in his histogram. They are partially double-counting the uncertainties (see http://rankexploits.com/musings/2013/ar5-trend-comparison/#comment-118293).

  3. First, that isn’t the same information.
    https://curryja.files.wordpress.com/2016/04/slide1.png?w=500&h=375
    and
    https://curryja.files.wordpress.com/2016/04/slide11.png?w=500&h=375
    The second image looks like it might be compared against different model results (CMIP5 vs. laundry list). They changed scenarios, and the CS of the two model runs don't look the same.

    The big problem I have is that temperatures haven't changed equally around the world; some places warmed, others didn't.

  4. I agree with Chip–the uncertainty in Gavin’s plots appears to be badly inflated.

    It seems really unlikely, for the interval from 1980 to 2015, that no warming (zero trend) is anywhere near the true -95% CL bound of the model ensemble.

    • Carrick,

      See my chart (the 4th in Judy’s post). The zero trend falls outside the 2.5th percentile of model trends for periods longer than about 15 years.

      That it appears to (nearly) fall within Gavin's plot is why that type of plot is misleading–it causes the viewer to pay more attention to interannual variability than to the trends.

      -Chip
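Where a candidate trend sits within the distribution of ensemble trends is a one-line computation once the 102 model trends are in hand. A sketch with invented numbers (the 0.022 and 0.006 are placeholders, not values from any of the figures):

```python
import numpy as np
from scipy import stats

model_trends = np.random.default_rng(3).normal(0.022, 0.006, 102)  # illustrative, C/yr

# Percentile rank of a candidate trend within the ensemble distribution
print(stats.percentileofscore(model_trends, 0.0))    # a zero trend
print(stats.percentileofscore(model_trends, 0.011))  # an observed-like trend
```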

        “That it appears to (nearly) fall within Gavin's plot is why that type of plot is misleading–it causes the viewer to pay more attention to interannual variability than to the trends.”

        And yet it is the “deniers” that are “misleading” the public and cherry picking.

  5. Dr Curry.
    Thanks so much for the continuing information flow. Dr Christy's chart one provides the information from the data without misleading the layperson.

    Still hoping to see Gavin Schmidt actually speak to Christy, Spencer or McIntyre in a professional setting, without name-calling such as 'denier' or 'flat earth society' designations.

    Use the original data minus adjustments, plus add another graph with the adjustments, to show how much of the surface warming is based on human adjustments and how the troposphere is a better measure.
    Scott

    • Dr Christy's chart one provides the information from the data without misleading the layperson.

      Not quite. The baselining is one issue, but so are the missing error estimates on both the models and the observations.

      The error bars on the satellite and balloon data are big enough to drive a truck through. And the error bars on the models don’t take into account the variations in natural forcings, I believe.

      So, there's a lot of uncertainty that's not being shown. When you account for it, the supposed divergence between models and observations doesn't look nearly so bad.

      • As Wm. Briggs often comments, there can be no such thing as statistical error bars on models, or on ensembles of same.
        And the error bars on weather balloons and satellite MSU inferences are much smaller than you imply. Please provide statistical evidence otherwise: for radiosondes that have been cross-calibrated for instrument changes over time; and for UAH and RSS, given their own estimates of same, refined over time.
        The divergence is real, it's been going on for all of this century, and it falsifies the climate models using Ben Santer's 17-year criterion published in 2011.

      • As Wm. Briggs often comments, there can be no such thing as statistical error bars on models, or on ensembles of same.

        I’m not sure what you’re getting at.

        Certainly you have uncertainty in the following components of model projections:
        – Parameters
        – Forcings, both present and future.
        – Initial conditions

        And then on top of that, you have differences between models in how they approach the physics, and how accurate those physics are in different regimes.

        All of these represent the uncertainty in what we know about climate, at least as instantiated in the models. And ideally, the model projections would include spreads of uncertainty that took these all into account.

      • I’m not sure what you’re getting at.
        Certainly you have uncertainty in the following components of model projections:
        – Parameters
        – Forcings, both present and future.
        – Initial conditions

        I would be really surprised if the models/climate simulators were calculated with a range around each input parameter; just running the same model at +/- one input parameter doesn't cover all combinations of the other inputs' uncertainty ranges.
        You have to have a special type of simulator for this type of calculation.

        Please provide statistical evidence otherwise: for radiosondes that have been cross-calibrated for instrument changes over time; and for UAH and RSS, given their own estimates of same, refined over time.

        Mears has a paper out on the uncertainty in RSS from 2011. Again, large error bars. Five times the size of those of the surface record, and encompassing the surface trends quite handily.

        You might also consider the huge changes that the satellite data series have undergone in the past decade. If that’s not a demonstration of uncertainty, what is?

        Radiosondes suffer from being rather sparse. It's not a calibration issue; it's a coverage one. Those errors drop over long time periods (>25 years), though, from what I understand. There's a paper from Free, Seidel and Angell from 2004 that addresses error bars there.

      • BW, as Briggs points out, and as I (a Ph.D.-level econometrician) see it, statistics models the probability that some sample drawn from a given population accurately reflects the characteristics of that population. The old black-and-white marbles in a jar teaching meme.
        Now, different models are different 'populations'. For a given model, an ensemble of runs is not a population; it is a set of result samples.
        Just because a statistical procedure can produce a result does NOT mean the result makes any statistical sense based on the underlying math/probability theory. Thus did Mann's decentered PCA produce hockey sticks from red noise. Logically, the comment here is no different.

      • “As Wm. Briggs often comments, there can be no such thing as statistical error bars on models, or on ensembles of same.”

        You can put error bars on anything.

        For models you’ll have two types:
        1. If you run a model multiple times, you can of course draw error bars that merely represent the internal variability (or weather) in the model.
        If you couldn't do this you'd not even know when to accept or reject a model, or when you had a faulty or outlier run.

        2. For a collection of models, you can plot the range. But you should not strictly interpret this as anything more than that.

        3. Best is to compare a single model (run multiple times) to observations.

        4. The structural uncertainty on satellite data is enormous. Only skeptics think that satellites are certain.

        At some point folks have to get serious about disqualifying some models
        The democracy of models is probably holding us back.
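The kinds of spread distinguished above come from different axes of the data. A schematic sketch with invented numbers, not actual CMIP5 output:

```python
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(37)

# Type 1: one model run many times; the spread is that model's internal variability.
one_model_runs = 0.02 * years + rng.normal(0, 0.1, size=(10, 37))
within_model_sd = one_model_runs.std(axis=0)

# Type 2: many models, one run each; plot the range, and interpret it as no more than that.
multi_model = rng.normal(0.015, 0.008, size=(102, 1)) * years + rng.normal(0, 0.1, (102, 37))
ensemble_lo = multi_model.min(axis=0)
ensemble_hi = multi_model.max(axis=0)
```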

      • To get some feel for the two estimates, here is RATPAC 700mb over UAH-LT (r6b5). 700mb is not precisely LT ( and the MT chart from UAH isn’t available ):
        http://climatewatcher.webs.com/RATPAC700_OVER_UAHLT.png

        Some variability, particularly with some individual stations (and RATPAC is supposed to be regionally best-of-the-best). This plot includes only RAOB stations with more than 50% of potential ob times. RAOB sensors changed a lot (I saw it first hand in the early 80s and again in the early 90s, and that's just the US). MSU is also variable.

        Still, I’ll return to just how strong the Hot Spot signal is supposed to be and just how comparatively well RAOB and MSU tend to agree:
        http://climatewatcher.webs.com/HotSpot2015.png

      • BTW, the RATPAC station trends are from 1979 through 2015.
        The UAH plot is probably through February 2016.

      • Steven Mosher: You can put error bars on anything.

        They don't mean anything except with respect to a sampling model. A sampling model for statistical summaries of GCM runs has not been written, to my knowledge (I sketched one out), much less justified and defended.

      • B.Win: “Certainly you have uncertainty in the following components of model projections:
        – Parameters
        – Forcings, both present and future.
        – Initial conditions”

        Don’t confuse ‘errors’ with ‘error bars’. A single faulty parameter in a model due to a typo can cause it to show +1e6 deg C warming per decade. It’s no use trying to plot error bars with such errors, which are theoretically possible. The true errors of the published models are certainly much smaller, but impossible to quantify without many decades of validation.

      • matthewrmarler | April 6, 2016 at 3:18 am

        “They don't mean anything except with respect to a sampling model. A sampling model for statistical summaries of GCM runs has not been written, to my knowledge (I sketched one out), much less justified and defended.”

        AR4 (Section 10.1, I think) called their sampling of model output “an ensemble of opportunity” and noted that the statistical interpretation of the multi-model spread was problematic. That is why they failed to report statistical confidence intervals for projected warming. If you read the fine print of AR5, the confidence intervals for projected warming were selected by expert judgment – calling what was statistically a “very likely” range a “likely” range.

      • franktoo: That is why they failed to report statistical confidence intervals for projected warming.

        Just to continue: If you could justify a probability model, such that the ensemble mean were an unbiased estimator of the true value and the random deviations at each time point were symmetric and unimodal, then the confidence interval constructed from the mean and sampling error of the mean would be much narrower than the spread displayed in the graphs. All of the data are outside that confidence interval.

        Roughly speaking, the inferential alternatives are: (a) the model trajectories are not meaningful representatives of anything, or (b) the data strongly disconfirm the predictions of the model. Anyone can revise my few assumptions ad infinitum, but it is hard to come to a conclusion such as “the data to date confirm the predictions of an unbiased model of the true climate trajectory.”
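The arithmetic behind this point: under such a probability model, the confidence interval for the ensemble mean shrinks with the square root of the ensemble size, so for roughly 100 runs it is about ten times narrower than the plotted envelope. A sketch with invented trend values:

```python
import numpy as np

trends = np.random.default_rng(5).normal(0.022, 0.006, 102)  # illustrative model trends
mean, sd, n = trends.mean(), trends.std(ddof=1), trends.size

# The envelope usually plotted: the spread of the runs themselves
spread_95 = (mean - 1.96 * sd, mean + 1.96 * sd)

# The CI for the ensemble mean as an estimator: ~10x narrower for n = 102
ci_of_mean = (mean - 1.96 * sd / np.sqrt(n), mean + 1.96 * sd / np.sqrt(n))
print(spread_95, ci_of_mean)
```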

      • RE: matthewrmarler | April 7, 2016 at 2:19 pm |

        The IPCC's “ensemble of opportunity” of climate models doesn't explore the full range of viable parameters that might be used by each AOGCM. Each AOGCM uses one value for each parameter (picked by manual tuning, not rigorous optimization). The use of two dozen such models fails to systematically cover that “parameter space”, rendering the spread of model projections meaningless – though the IPCC merely termed it “problematic”. A recent paper showed that simply adjusting the precipitation parameterization of the GFDL model could lower ECS by 1 K/doubling!

      • Get Mosher to jackknife those radiosonde readings into line! The error bars will shrink like magic!

    • SM, upthread in this comment stream, it is really disappointing for you to finally reveal you know so little about statistical/probability theory.
      You can calculate anything using modern computers and packaged SW.
      Whether the result has any meaning is another thing altogether, depending on whether the underlying theorems' math assumptions were met.
      Which you now provably do not grok.

      • Rud.

        Read harder.
        You can definitely plot the range of models. Watch what Christy does.
        Heck, even you yourself suggested the histogram, which shows the range. There is nothing wrong with showing the range of models; you can definitely say whether the models fall within the range. You can definitely observe whether the observations fall within or outside that range.

        The question is: what do you have when you average the models?
        What does that represent?
        Does Briggs ever take a mean of the models? Why?
        What would you conclude if the observations fell outside that range?

        Note: you don't have to personalize, just answer simple questions.

      • Steven Mosher: What would you conclude if the observations fell outside that range?

        About all you can say now is that almost all of the model projections (the slopes) exceed almost all of the data trends. If the true model is one whose projection is within the range of the data, it would be good to find that out.

      • afonzarelli

        Yes, Matthew, and ask ourselves the question, “what’s this model got that the others ain’t got?” and maybe from there better models can be developed. I asked Dr Spencer once if there was any effort out there to revisit (and correct) these failed models. He said none that he knew of and that they’re just sitting around waiting for the models to ultimately be fulfilled…

      • A model that “fails” in 2015 could be successful by 2050. A model that looks good in 2015 could be way off by 2050.

      • afonzarelli

        True, true, very true… But at some point they’re going to have to be revisited. (why not now?)

      • JCH: A model that “fails” in 2015 could be successful by 2050. A model that looks good in 2015 could be way off by 2050.

        I am hopeful that a reasonable assessment will be available by 2035.

      • Given there is no attempt to time ENSO, I think now is premature; it's interesting, though. If Matt England's anomalous trade winds return with the next La Nina, then a revisit won't cover what will have to be done.

      • I'm with JCH on the suggestion not to allow contemporaneous good fits–which may be entirely spurious, given all the parameterizations and other implicit modeling choices used to build these simulators–to determine which model is most credible. The problem would be less severe if more climate observables–rainfall, winds, local effects, etc.–were combined to do the selection-by-fit, because then the chances of a spurious fit, or of a (consciously or unconsciously) modeler-created fit, are reduced.

      • stevepostrel: I’m with JCH on the suggestion not to allow contemporaneous good fits– … –to determine which model is most credible.

        To paraphrase myself, I am in agreement. Something like the integrated mean square error through 2035 can be used as a measure for ranking the accuracy of the models, and the 75% or so worse-fitting models can be disregarded for the future. With such a long span of predictions and actual data, it would be very unlikely that one of the 75% worst models could ever make it into the top 10%, on which we’d place the most confidence.
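A ranking of that kind is mechanical once the model and observed series are on a common set of years. A minimal sketch, with all arrays invented for illustration:

```python
import numpy as np

def rank_by_mse(model_runs, obs):
    """Order model indices best-to-worst by mean squared error against observations."""
    mse = ((model_runs - obs) ** 2).mean(axis=1)
    return np.argsort(mse)

rng = np.random.default_rng(6)
obs = 0.012 * np.arange(37)                                 # stand-in observed series
models = rng.normal(0.02, 0.008, (102, 1)) * np.arange(37)  # stand-in model series
order = rank_by_mse(models, obs)
keep = order[: len(order) // 4]  # retain the best-fitting quarter, set aside the rest
```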

      • I would like to see the models ranked by how much they receive each year in financial support. I suspect significant hardware/programmer resources could be put to better use without the wasted effort of being the 15th best model in the climate world.

      • afonzarelli

        The above is a link to a June 2013 interview with IPCC contributor Hans von Storch on the failure of the climate models…

        “At my institute, we analyzed how often such a 15 year stagnation in global warming occurred in the simulations. The answer was: in under 2% of all the times we ran the simulation…
        … If things continue as they have been, in five years at the latest, we will need to acknowledge that something is fundamentally wrong with our climate models. A 20 year pause in global warming does not occur in a single modeled scenario. But even today, we are finding it very difficult to reconcile actual temperature trends with our expectations.”

  6. To me the whole point is that the observed record is (I'll use) 1.5 and the models produce a range of 1.5 to 4.5. That indicates that so far we are at the low end of the projections, and that should be the advisory to policy makers.

    • …so far we are at the low end of projection…

      That’s an important point that often gets lost in the climate debate’s weeds.

    • It's an important point, but not quite correct. The models do not produce that ECS range. AR5 WG1 figure 9.2 and the accompanying text (page 817) state the CMIP5 range as 2.1 to 4.7 with a mean of 3.2 C. '1.5-4.5' was reached by applying judgmental spin, since all the recent observational ECS estimates are below 2. Same reason AR5 gave no central estimate, even though AR4 defied its own literature-cited pdfs to stick with Charney's 3.

      • Yes, mine was simply an example.

      • Ordvic, no harm, no foul. I was simply trying to get everybody here to be as precise as possible. Warmunists jump on minor nits to discredit skeptics. Giving them no opening to do so is a hoped-for tactic. Gavin's Twitter rage is just another example. See TE's complete refutation of Gavin below, just by homologating the axes.

    • Current observations for the surface are pretty much right in line with the models, if you use the observed forcings as the inputs in the models.

    • Doesn't say much about the quality expectations of climate folks if “useless” models still survive and are funded and referenced. Perhaps 95% of them. This seems more like a touchy-feely school playground where everyone gets a gold star for effort.

  7. It's simple: Christy's chart is incorrect because he compares KNMI Climate Explorer CMIP5 output with TMT. TMT weights different levels of the atmosphere and is influenced by the stratosphere, but with CMIP5 output from the KNMI Climate Explorer it isn't possible to weight the different layers, so he is comparing apples to oranges.

    I asked Gavin weeks ago; he said that he weights the atmospheric layers in his comparison, which is the only way to compare a TMT product with CMIP5. And that is why Christy's chart and Gavin's are so different.

    • Does that (stratospheric influence) apply to the balloon datasets of the graph as well?

      • Yes, he adjusts the balloons to the atmospheric weighting of TMT, but since he used the KNMI Climate Explorer he isn't able to weight the atmosphere, because with the Climate Explorer you are not able to use different layers (500 mb or 750 mb or 10 mb ...).

        Just look at the means: since the early 2000s Christy's increases much more strongly, while Gavin's increases less.

    • PS: The comparison on Tamino's blog suffers from the same problem: https://tamino.wordpress.com/2016/04/02/new-rss-and-balloons/ (first comment by me)

    • David L. Hagen

      Christian – Your comments do not match the titles of Christy's charts. He explicitly uses DIFFERENT model results for global bulk vs. global mid-tropospheric vs. tropical mid-tropospheric temperatures.

      • David,

        That's OK, but look at the figure you link and at what Curry shows: your link shows a value around zero in 1979 and 1 K in 2020, while Curry's shows around zero in 1979 and around 1.2 K in 2020.

        That's the difference I talked about.

        And beyond this, the problem is that he is not using the same baseline, and that is why the visual difference is greater than it really is.

      • Christian –
        “your link shows a value around zero in 1979 and 1 K in 2020, while Curry's shows around zero in 1979 and around 1.2 K in 2020. That's the difference I talked about.”

        Judith's graph above (~1.2 K for models in 2020) is for *tropical* TMT, and corresponds to the figure on page 13 of Christy's testimony. You seem to be comparing with Christy's figure 1 on his page 2 (~1.0 K for models in 2020), which is *global* TMT. [Repeated with spaghetti on page 12.]

      • Harold,

        Thanks, that was my fault; I had looked at the wrong figure. If you look at page 13 you see that he is using the KNMI Climate Explorer; therefore he isn't able to weight the different layers in the models, and so his comparison is flawed, because he compares apples to oranges.

        That is what his comparison suffers from: he doesn't weight the different layers, and therefore the models come out warmer through the absence of the slight stratospheric cooling in the models. That is why it is so different from Gavin's, which weights the layers of the CMIP5 models up to the stratosphere.

      • Christian –
        I can’t locate a description of the source of his graphs — which to me is a problem — but that fact, by itself, doesn’t suggest that what Christy used for the models is not comparable to TMT. If you have more precise information about the basis for Christy’s graph, I’d appreciate it.

      “It's simple: Christy's chart is incorrect because he compares KNMI Climate Explorer CMIP5 output with TMT”
      That is my main objection to Christy's plot. It isn't reviewed, only presented to Congress, and so much is unexplained. What weighted sum of CMIP5 levels are we seeing? How do we know that the TMT and balloon results aren't differently affected by stratospheric influence?

      And for the TMT and balloon data, all we see is averages of different data sets. How much spread is in that data (a huge amount, I believe)?

      • Nick

        I don’t disagree with your main premise, however I was quite amused to see your closing comment about averages of different data sets.

        This also describes the different sea level changes, which go in all directions, up and down, and are then averaged; through to SSTs derived by varying methods and at different depths back to 1850, and again manipulated to provide a ‘global’ average.

        Glad to have you on board with me about the dubious value of some inappropriately averaged data. :)

        tonyb

      • Nick,

        The thing is, the figure of Christy's plot shown by Curry here isn't weighted over multiple layers; it's simply the near-surface temperature. He then matches the series simply by setting a zero point, but he should have used a baseline, because this could matter. On the other hand, he only uses the mean of the models; a mean of models also means that variability cancels out, so the mean is more like a forced response only, and if you look at it that way, you have to adjust the observations for internal variability.

        Good point about averages of different data sets; this also matters. On such a small timeframe it makes a great difference whether you use RATPAC-A or RATPAC-B only, or the mean of them.

        So for that, I am sorry, but I largely ignore Christy's plot, because it is a subjective choice of data analysis.

      • “again manipulated to provide a ‘global’ average”

        Tony, you are confusing things. Those are spatial averages – integrals. You have to use an averaging process, else you don’t have a figure at all. You may be doubtful about a global average temperature, but you can’t get one without averaging. And yes, people do study the sampling issues.

        Here we have the average of 3 or 4 numbers, each of which is an estimate of the same thing. They could meaningfully be graphed separately, and usually would be. How much those estimates (of the same thing) vary is relevant.
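For concreteness, the spatial average being described: on a regular latitude-longitude grid the standard estimate weights each latitude band by the cosine of latitude, since grid cells shrink toward the poles. A minimal sketch:

```python
import numpy as np

def global_mean(field, lats):
    """Area-weighted mean of a (lat, lon) field on a regular grid."""
    weights = np.cos(np.deg2rad(lats))          # cell area scales with cos(latitude)
    return np.average(field.mean(axis=1), weights=weights)

lats = np.arange(-87.5, 90.0, 5.0)              # 5-degree band centers
field = np.random.default_rng(7).normal(14.0, 5.0, (lats.size, 72))
print(global_mean(field, lats))
```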

  8. “…since he used the KNMI Climate Explorer he isn't able to weight the atmosphere, because with the Climate Explorer…”

    Or just look here: https://climexp.knmi.nl/selectfield_cmip5.cgi?id=someone@somewhere#ocean

    It's not possible to look at the temperature at 300 mb or 500 mb or 10 mb.

    What you can use is TOS (sea surface temperature) or TAS (near-surface air temperature).

  9. Dr. Curry, re: how to communicate complex data to a non-technical audience.

    http://sowellslawblog.blogspot.com/2016/01/science-in-courts-communication-problem.html

    The link is to my article on this very subject from January of this year. The second half of the article might be useful, beginning with “Communicating Complex Issues.”

    • If the burden of providing a little background information can be tolerated, the complexity (and data-quality requirements) of the AGW issue can also be communicated by showing the sampling required, even by a perfect observation system, to detect trends in noisy processes. This approach is used in the design of costly/high-value remote sensing systems for a given level of confidence. If the data utilized to “settle the science” contrast sharply with that ideal's confidence level, it undermines the credibility of the argument.

  10. The histogram can be explained in less than one minute to a nontechnical audience. Point to the leftmost bar. ‘The lowest of the CMIP5 runs, this one model out of 102, had an annual rate of increase from 1979 to 2015 of 0.008 C/yr.’ Point to the tallest bar. ’23 models had an average rate of 0.018.’ Wave at the right side. ‘Many models had rates that were much higher.’ Now wave vertically at the colored horizontal lines, at the dots. ‘The various satellite-measured actual rates are lower than almost all the models.’ Then wave horizontally at the colored horizontal bars. ‘That is even true considering the uncertainties in the various satellite measurements.’ Then wave again at the right side of the histogram. ‘So we see that the models have generally run much hotter than reality over the past three and a half decades. That is why it is hard to have any confidence in their longer-term temperature and sensitivity estimates.’

    Old consultants' tricks for presenting to Board Directors.

    BTW, I checked the histogram. All 102 models are there. I did not check the average rates per year, but there is no reason to presume them wrong. The lowest is the Russian model. The placement relative to the satellite data sets agrees with Christy's December chart at the top of the post.
    Agree Twitter is no place to carry on a scientific discussion.

  11. Reblogged this on TheFlippinTruth.

  12. “I really like the histogram, this conveys exactly the point I wanted to make, although explaining this to a nontechnical audience in ~1 minute is pretty hopeless.”

    I like it too – are you sure it is hopeless?

    It is a new diagram – maybe less laden with controversy – and it is made by Gavin.

    This isn't exact science – however, the takeaway from Gavin's histogram is that the models seem to overestimate warming.

    Gavin has also stated that, in a comment at realclimate.org:
    “The refusal to acknowledge that the model simulations are affected by the (partially overestimated) forcing in CMIP5 as well as model responses is a telling omission.”

    If that is held up against how much the IPCC relied on the models, it may be reasonable to put forward the argument that the IPCC seems to base some of their recommendations on models having significant systematic errors. It may seem reasonable to suspend severe actions based on the IPCC report.

    (See comment 17 for quote by Gavin).


    • IPCC WGI, AR5, SPM

      D.2 Quantification of Climate System Responses

      “Observational and model studies of temperature change, climate feedbacks and changes in the Earth’s energy budget together provide confidence in the magnitude of global warming in response to past and future forcing. “

    • SoF, great point about it being from Gavin.
      Presenting it in under one minute is far from hopeless. It's easy with the right techniques. I scripted it for Judith above.
      You don't have to define what a histogram is. Just explain what it shows. My own clock on it was under thirty seconds speaking slowly, while actually waving my trusty laser pointer at my computer screen shot. In the old viewgraph days of overhead projectors we used delible-ink markers to do the same on 3M acetate slides. One black marker 3x for the first three sentences (a dot, a vertical line, a horizontal line), one color (red would have been my choice) for the next two sentences. A vertical line through the dots, then a box around the uncertainty. Done.
      You would be amazed how nontechnical (dense, even) Boards can be.

    • Add to it that the United Nations climate panel, the IPCC, used the models to exclude natural variability as a cause of the observed warming.

      “Observed Global Mean Surface Temperature anomalies relative to 1880–1919 in recent years lie well outside the range of Global Mean Surface Temperature anomalies in CMIP5 simulations with natural forcing only, but are consistent with the ensemble of CMIP5 simulations including both anthropogenic and natural forcing … Observed temperature trends over the period 1951–2010, … are, at most observed locations, consistent with the temperature trends in CMIP5 simulations including anthropogenic and natural forcings and inconsistent with the temperature trends in CMIP5 simulations including natural forcings only.”
      (Ref.: Working Group I contribution to fifth assessment report by IPCC. TS.4.2.)

      That's circular reasoning, and the premise (the models) for the argument seems to be laden with systematic errors. The conclusion isn't valid.

      • ‘The degree of certainty in key findings in this assessment is based on the author teams’ evaluations of underlying scientific understanding and is expressed as a qualitative level of confidence (from very low to very high) and, when possible, probabilistically with a quantified likelihood (from exceptionally unlikely to virtually certain). Confidence in the validity of a finding is based on the type, amount, quality, and consistency of evidence (e.g., data, mechanistic understanding, theory, models, expert judgment) and the degree of agreement.’
        Intro, AR5 Working Group Summary for Policymakers.

        Notwithstanding the failure of models of the troposphere to match observations from 1998 to 2012, the working group expresses very high confidence in the models (p. 15).

        Say, ‘When models and observations collide, go with the models.’
        H/t Climate Science Through the Looking Glass.

      • I'm glad you draw attention to these quotes from the IPCC report. These kinds of quotes make me cringe – and there are many more. I urge all to dive into the report. :)

      • Science or Fiction,
        Like yr post @ yr site, ‘This is how the climate industry should have reported uncertainty!’

        There exists one internationally recognized standard for the expression of uncertainty, from The International Organization of Legal Metrology – a standard that is freely available as a guideline to the IPCC and others.

  13. The significance of the divergence in the middle troposphere goes to the lack of a Hot Spot during the MSU era. I think a better comparison than the global means above is an examination of the trends by pressure and latitude. I recently updated such a plot. There remains no Hot Spot for the global MSU era. However, I found Warm Spots by examining:
    1.) the global RAOB era (1958) and
    2.) the MSU era (1979) excluding the Eastern Pacific (180W to 60W)

    The earlier instrumentation of the RAOB era changed a lot, of course, and was in the midst of the 1945-1975 cooling period.

    The Eastern Pacific is marked by a cooling trend since 1979, so for this region a tropical upper-tropospheric cool spot should occur and work against any global Hot Spot, since tropical convection is thought to amplify the tropical surface trend.

    The lack of a Hot Spot for the MSU era doesn't controvert AGW. In fact, surface temperature indices are reasonably in line with models. But it does call into question the ability of models to predict a third of a century. And it does mean that the global warming that has occurred has done so without the negative feedback that a Hot Spot generates. Does that mean that if the Hot Spot starts to occur, surface warming will slow down further? That's what I understand from theory.

    http://climatewatcher.webs.com/HotSpot2015.png

  14. Gavin followed up with this tweet: @curryja use of Christy’s misleading graph instead is the sign of partisan not a scientist. YMMV.

    Blind Spot Bias: “The tendency to see oneself as less biased than other people, or to be able to identify more cognitive biases in others than in oneself.”

  15. nabilswedan

    Dear Dr. Curry,

    What counts is the procedure and basis of any work, regardless of whether it is published in a scientific journal or not. In fact, many journals have honored unpublished but good work, such as that of Steve McGee published on Climate Etc.

    I suggest that you examine the papers yourself and make the decision accordingly.

  16. So, what do the two models that fall within the overlap of the measured temperature values look like? What do they get right and wrong?

    • Aaron, it's really only one model, the Russian INM-CM4. Ron Clutz at Science Matters looked at it, posted 3/24/15. Lowest water vapor feedback, resulting in the lowest net CO2 forcing; highest climate system inertia (essentially taking into account more ocean heat capacity); lower sensitivity; closest to observations, as Christy's chart shows. Clutz's comparisons were to HadCrut4. He provides a downloadable diagnostic Excel workbook for you, also comparing all 42 CMIP5 models.

  17. So the Tropical Mid Troposphere suffered some AMO warming too from 1995. It is rather specious to model that as all forced warming, when the warm AMO is negative-NAO/AO driven.

  18. nabilswedan

    Climate models use the hypothesis that the surface and atmosphere are in radiative balance. This is untrue; they are in thermodynamic equilibrium. Backradiation (340 W/m2) from the atmosphere to surface does not exist and no one has ever measured it. Climate models are like a rifle with a crooked barrel. To hit the bull's eye, climatologists use empirical equations to position the rifle. These equations are based on past and present climates; they are not accurate and have inherent uncertainties due to surface variability. The output is uncertain as a result.

    • So instead of hitting the bridge we hit the village!

      • nabilswedan

        Looks like it as of now.


    • nabilswedan: Backradiation (340 W/m2) from the atmosphere to surface does not exist and no one has ever measured it.

      Two surface stations built to the purpose have measured it, one in Alaska, one in Oklahoma. They have not been in place for long, but they agree with each other that the IR backradiation has been increasing.

      And you might like this: http://www.drroyspencer.com/2010/08/help-back-radiation-has-invaded-my-backyard/

      • ” Two surface stations built to the purpose have measured it, one in Alaska, one in Oklahoma. They have not been in place for long, but they agree with each other that the IR backradiation has been increasing.”
        The 8-14u window, with clear skies, is half of 340 W/m2 or less; the only way you get close is clouds. Now, yes, the 8-14u window does not include CO2 back radiation; add it back, and clear skies are still far under 340 W/m2.

      • I hadn't seen the Dr's piece with an IR thermometer; I've been playing with mine.
        Even with a cold sky that does not change over night, the cooling rate starts to slow when relative humidity gets into the upper 80's to 90%+.
        But you don't get to an average of 340 without a large mix of clouds. So it going up isn't proof of increases from an increase in CO2.

      • micro6500: But you don't get to an average of 340 without a large mix of clouds. So it going up isn't proof of increases from an increase in CO2.

        Do you agree with nabilswedan that the back radiation has not been measured?

      • ” Do you agree with nabilswedan that the back radiation has not been measured?”
        I believe you can measure temperature by measuring the emitted IR and comparing that to a reference. I know I just measured a clear-sky temp of ~ -71F, an air temp of ~ 28F, the concrete sidewalk at 31F, and the grass next to it at 22F.
        And at -40F, adding 3.7 W/m2 works out to about -38F or so.
        So it is measurable.
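That -40F figure is just the Stefan-Boltzmann law inverted; a quick check, with an emissivity of 1 assumed for simplicity:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def f_to_k(f):
    return (f - 32.0) / 1.8 + 273.15

def k_to_f(k):
    return (k - 273.15) * 1.8 + 32.0

t = f_to_k(-40.0)                        # -40 F = 233.15 K
flux = SIGMA * t ** 4                    # ~167 W/m2 emitted at that temperature
t_new = ((flux + 3.7) / SIGMA) ** 0.25   # add 3.7 W/m2 and invert
print(k_to_f(t_new))                     # ~ -37.7 F, consistent with "about -38F"
```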

      • I have been in touch with the NOAA Earth System Research Laboratory/Global Monitoring Division. They use pyrgeometers to “display” downwelling infrared irradiance, or backradiation, at numerous sites. Pyrgeometers are not radiometers; they use thermopiles, and thermopiles measure temperature.

        Take a look at the instruction manuals of the pyrgeometers used, for example the Eppley and Kipp & Zonen type pyrgeometers. You will find that pyrgeometers do not measure backradiation. They are programmed to assume that backradiation exists in the first place and that it is equal to the surface radiation of infrared. That is why pyrgeometers display about 340 W/m2 twenty-four hours per day. No one has ever measured backradiation; it is assumed to exist in the instrument program.

      • Matthew, sorry to disagree, but that study is so full of holes it should be Swiss.

        https://rclutz.wordpress.com/2015/03/21/lawrence-lab-report-proof-of-global-warming/

      • nabilswedan

        Ron, the issue is not whether this study or that is full of holes; it is the method and procedure that matters. Material published in a scientific journal is not necessarily correct science. Pyrgeometers do not measure backradiation, period. Yet they are recognized by the World Meteorological Organization as instruments for measuring backradiation day and night. How did this happen?

        The first pyrgeometer was introduced in 1954 by its maker, most likely to promote sales of their product. They probably had not anticipated that a massive set of radiative climate models would be built on their flawed experiment. Are they to be blamed? No. It is the fault of those who did not examine the procedure and the experiment, and who failed to make sense of the results.

      • nabilswedan: You will find that pyrgeometers do not measure backradiation.

        And mercury thermometers do not measure temperature — they measure mercury expansion. What permits the inference from mercury expansion to temperature are the quantitative science, the careful calibration, and the careful manufacture. Likewise, ammeters do not measure electrical current and pH meters do not measure pH. What permits us to call them “measurements” are the quantitative science, the calibration, and the careful manufacture.

        When a carefully manufactured and calibrated pyrgeometer is being used properly, what mechanism produces its response?

      • nabilswedan

        Matt, apparently you have not reviewed the instruction manual of the pyrgeometer. This instrument does not measure backradiation because it is programmed to assume that backradiation exists and it is equal to surface radiation of infrared. Take a look at the energy balance used in the manuals. It lacks the radiation from the surroundings to the thermopile. Once you add this radiation, you end up with:

        Backradiation = f x voltage

        where f is a calibration factor. At night the voltage is negative, for the thermopile cools off, and backradiation is either negligible or negative. Where are the 340 W/m2? They do not exist. They are assumed to exist in the instrument program. This is misleading.

      • nabilswedan: This instrument does not measure backradiation because it is programmed to assume that backradiation exists and it is equal to surface radiation of infrared.

        So you are standing by the assertion that the downwelling LWIR does not exist. Is that right?

      • nabilswedan

        Matt: Pyrgeometers are incorrectly calibrated, for the energy balance of the thermopile is incorrect. When the correct equation is used, pyrgeometers can only detect negative backradiation at night. This is in disagreement with the radiative climate model, which assumes positive backradiation of about 340 W/m2 day and night.

        Pyrgeometers are the only instruments recognized by the World Meteorological Organization for measuring backradiation. Therefore, as of now, no one has ever correctly measured backradiation at night.

        Take a look at other fields of science, such as infrared astronomy. They detect minute infrared radiation from the cosmos. In the presence of 340 W/m2 of backradiation at night, infrared astronomy would be impossible. The conclusion is obvious – backradiation does not exist.

        I use a 250 W/m2 thermal light bulb to heat my bathroom and feel it on my skin. I should be able to feel 350 W/m2 of backradiation at midnight; it is too large to be missed by our senses. We do not feel backradiation at midnight because it does not exist.

      • “I use a 250 W/m2 thermal light bulb to heat my bathroom and feel it on my skin. I should be able to feel 350 W/m2 of backradiation at midnight; it is too large to be missed by our senses. We do not feel backradiation at midnight because it does not exist.”
        Maybe if it is 80 or 90 at night in the tropics, but it seems a stretch for an average of 340 W/m2. And you can measure the window with an IR thermometer and just add your favorite CO2 forcing; at least in Ohio, an average of 340 W/m2 seems very unlikely.

      • Nabilswedan is correct.

        Even Wikipedia states –

        “Pyrgeometers are frequently mistakenly used in meteorology, climatology studies. The atmospheric long-wave downward radiation is of interest for research into long term climate changes, but is not measurable by these instruments.”

        If you don’t believe nabilswedan, lie down on a glacier at 270 K at midnight. Get an idea of how warm 300 watts/m2 is.

        Cheers.

      • micro, “Maybe if it is 80 or 90 at night in the tropics, but it seems a stretch for an average of 340 W/m2. And you can measure the window with an IR thermometer and just add your favorite CO2 forcing; at least in Ohio, an average of 340 W/m2 seems very unlikely.”

        DWLR is a bit of a confusing issue. 340 Wm-2 is roughly 5 degrees C, or about the average temperature of the atmospheric boundary layer. If you measure DWLR, you are measuring not only direct radiation but radiation due to heat energy being moved around by the atmosphere, so a large portion of the average is advected energy. On average, there is around 120 Wm-2 transferred from roughly 45S-45N to higher latitudes, which is internal transfer, so the “real” DWLR value would be roughly 220 Wm-2 if a less turbulent surface were selected, which is what your infrared non-contact thermometer is more likely to read. So 340 Wm-2 is a bit of a fudge, required due to picking a terrible reference surface.

      • DWLR is a bit of a confusing issue. 340 Wm-2 is roughly 5 degrees C, or about the average temperature of the atmospheric boundary layer.

        According to my SB spreadsheet, to get a 340 W/m2 field the temp of the surface has to be 47.69F with an emissivity of .95 (~8.7C)

        If you measure DWLR, you are measuring not only direct radiation but radiation due to heat energy being moved around by the atmosphere, so a large portion of the average is advected energy. On average, there is around 120 Wm-2 transferred from roughly 45S-45N to higher latitudes, which is internal transfer, so the “real” DWLR value would be roughly 220 Wm-2 if a less turbulent surface were selected, which is what your infrared non-contact thermometer is more likely to read. So 340 Wm-2 is a bit of a fudge, required due to picking a terrible reference surface.

When I point my IR thermometer straight up on a clear day, it’s reading the accumulated 8-14µ band (whatever IR there is, from whatever sources there are), and then it has to use some kind of lookup table to turn that partial spectrum into a temp. But this is what the surface is radiating to. Now, it is true it’s not measuring the DWIR from CO2 and water in the 14µ-20µ range, but you can add fluxes.
On a typical day, the sky is 90-100F colder than my sidewalk; the other night it was -70F.
At -40F, adding 3.7 W/m2 changes the equivalent temp by about 1.8F. I know there is more than that; I think I saw a reference of 22 W/m2. Adding 25 W/m2 to -40F gives about a -24F equivalent surface temp.
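These conversions are easy to check. A minimal Python sketch, assuming only the grey-body Stefan-Boltzmann relation flux = eps * sigma * T^4; the specific numbers are the ones quoted in this sub-thread (including the 270 K glacier raised further down):

    # Minimal sketch: grey-body Stefan-Boltzmann conversions.
    # All specific values are the commenters' own; nothing here is measured data.
    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

    def temp_from_flux(flux, eps=1.0):
        # equivalent brightness temperature (K) for a given flux (W/m2)
        return (flux / (eps * SIGMA)) ** 0.25

    def flux_from_temp(t_kelvin, eps=1.0):
        # emitted flux (W/m2) for a surface at t_kelvin
        return eps * SIGMA * t_kelvin ** 4

    def k_to_f(t_kelvin):
        return (t_kelvin - 273.15) * 9.0 / 5.0 + 32.0

    print(k_to_f(temp_from_flux(340.0, eps=0.95)))                 # ~47.7 F (~8.7 C)
    print(flux_from_temp(270.0))                                   # ~301 W/m2 from a 270 K glacier
    print(k_to_f(temp_from_flux(flux_from_temp(233.15) + 25.0)))   # ~-25 F for a -40 F sky + 25 W/m2

Run as written, it gives about 47.7F (~8.7C) for a 340 W/m2 field at emissivity 0.95, about 301 W/m2 for a 270 K glacier surface, and about -25F for the -40F sky plus 25 W/m2 – all close to the figures traded in this exchange.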

But clouds – clouds can be within 10 or so degrees of the surface. So the only way I can see anyone getting 340 W/m2 of DWIR from the entire sky is to rely on clouds for most of it, so any CO2 signal has to be buried in that.
        Here’s a hot day.
        https://micro6500blog.files.wordpress.com/2015/07/july20th2015.png
The sky is near zero F; then, from cold to warm: front yard grass, concrete sidewalk, asphalt driveway. This is a great example of Anthropogenic Warming.
        https://micro6500blog.files.wordpress.com/2015/08/july31th2015-8_00am_cleardryuhi-annotated.png

        Here’s a cold night last week.
        https://micro6500blog.files.wordpress.com/2016/04/march29_2016_coldclearnight4.png
        Notice the grass temp is lower than air temp, and the sky temp is -70F. This is the “surface” the ground is cooling to.

So which do you think has a bigger impact on surface temps in cities: a degree or so of warming from CO2, or the 30-some degrees asphalt is warmer than grass (which is still warmer than grass under trees)?

      • Just so you guys know,

The 340 is just an average; you don’t get 340 on top of a glacier at night, and infrared astronomy is practiced at high altitude in dry places.

And the CO2 rise is all due to man; it’s not coming out of the oceans.

Considering all the Nobel prizes awarded to those who worked on quantum mechanics, I am sure that anyone who could prove there is no “back radiation” from the CO2 in the atmosphere would get one.

        CO2 has to emit infrared, it’s in its very nature.

The 340 is just an average; you don’t get 340 on top of a glacier at night

Of course it’s an average; it’s just that after measuring the IR temp of the environment (N41, W81), I don’t see any way it could be a 340 W/m2 average.

Of course not; you measured it at one point. 340 is the global average.

      • bobdroege,

        You wrote –

“The 340 is just an average; you don’t get 340 on top of a glacier at night, and infrared astronomy is practiced at high altitude in dry places.”

A couple of points. You get around 300 from the surface of a glacier at 270 K, at night or at any other time. Do the S-B calculation if you wish. 300 will not keep you warm, will it?

        As to your 340 average, are you saying that some places have far more than 340 at night to make up for those that have less?

        Where are these places, and what is the peak DWLWIR? If CO2 creates an increase in temperature by reradiating IR back to the surface, this must happen at night, too, surely.

        I hope you are not going to say that the CO2 greenhouse effect only works during the day in warm areas. That would seem to be a bit silly. Doesn’t the greenhouse effect work at night, and at the Poles?

        It’s a little odd if it doesn’t, wouldn’t you agree?

        Cheers.

micro, “According to my SB spreadsheet, to get a 340 W/m2 field, the temp of the surface has to be 47.69F with an emissivity of .95 (~8.7C).”

I use 1.0, and say “about” because there is a considerable range of values that could be used. I believe the accuracy of direct measurement is in the +/- 10 Wm-2 range. The main point is that “average” depends on the surface selected, and at the real surface a large percentage of the value is caused by other than direct radiation. When you use a limited-spectrum IR thermometer you aren’t measuring all that could be included.

        I believe Angstrom first estimated back radiation to be on the order of 240 Wm-2 which I personally felt was a better way to go. However, I am not in charge of the circus.

      • nabilswedan: Backradiation (340 W/m2) from the atmosphere to surface does not exist and no one has ever measured it.

        That has turned into: Therefore, as of now, no one has ever correctly measured backradiation at night.

        OK,

So the backradiation has not been measured with sufficient accuracy at night [I substituted “with sufficient accuracy” in place of your word “correctly”]. Are you standing by your statement that the backradiation does not exist? Or are you content with a comment like “Back radiation has not been measured with 3 significant figures of accuracy at night”?

      • nabilswedan

        Matt: This is not a social or political issue where gray areas and consensus may exist, it is scientific. It is either right or wrong. Backradiation does not exist, period. In fact it cannot exist:

Basic heat transfer indicates that radiation cannot exist between two bodies that are in intimate contact with each other. Atmospheric air layers, or slabs as they are called in climate models, cannot radiate to each other. They can only exchange heat through convection. So it is with atmospheric air and the surface; they are in intimate contact, and radiation between atmosphere and surface cannot exist. Only convection heat transfer can exist between atmosphere and surface.

      • https://noconsensus.wordpress.com/2012/07/20/why-back-radiation-is-not-a-source-of-surface-heating/

“The misunderstanding of the distinction between energy transfer and heat transfer (net energy transfer) seems to be the cause of much confusion about the back radiation effect.” – Leonard Weinstein

“Thermal energy transfers in all directions; heat flows only hot to cold.” – Jeff Condon

nabilswedan – Explain what is to stop an excited CO2 molecule from relaxing by emitting a photon. I submit to you that the only other way for the molecule to relax is by collision. But depending on the pressure and temperature, some will emit before they collide. What’s to stop this process? (Nothing.)

      • nabilswedan

Jim2: What makes you think that the molecules of carbon dioxide are excited at ambient temperature or less? Also, what makes you think that carbon dioxide molecules behave as a separate entity? Take a look at basic air physics and chemistry and you will find that atmospheric air is a mixture of gases and behaves as such. If more carbon dioxide is added to the atmosphere, the whole atmosphere, as a mixture of gases, absorbs more solar radiation. The atmospheric air, as a mixture of gases, can only exchange heat with the surface by convection. This is the physics we know and apply in our daily engineering applications. The greenhouse gas effect does not exist in engineering reference books.

nabilswedan – What we call air temperature is a property of a huge number of air molecules. It is basically the average kinetic energy of all the molecules. The kinetic energy of the individual molecules varies greatly. Some will have almost zero velocity and close to zero translational kinetic energy. At the other end of the KE extreme, some will be very “hot.” There is plenty of energy to excite a CO2 molecule.
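jim2’s point can be made concrete with the Boltzmann factor for the CO2 bending mode at 667 cm-1. A minimal Python sketch, assuming a crude two-level estimate with degeneracy 2 (the exact population depends on the full vibrational partition function):

    # Sketch: fraction of CO2 molecules thermally excited in the 667 cm-1
    # bending mode at ambient temperature, via the Boltzmann factor.
    # Crude two-level estimate; the bend is doubly degenerate.
    import math

    H = 6.626e-34   # Planck constant, J s
    C = 2.998e10    # speed of light in cm/s, to match wavenumbers
    KB = 1.381e-23  # Boltzmann constant, J/K

    def excited_fraction(wavenumber_cm, temp_k, degeneracy=2):
        energy = H * C * wavenumber_cm  # level energy, J
        return degeneracy * math.exp(-energy / (KB * temp_k))

    print(excited_fraction(667.0, 288.0))  # roughly 0.07 at 288 K

A few percent of CO2 molecules are already in the excited bending state at ordinary temperatures, so there is no shortage of collisional energy to populate the level; whether a given excited molecule relaxes by collision or by emitting a photon then depends on pressure and temperature, as jim2 says.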

      • nabilswedan

Jim2: You are assuming that air constituents behave as separate entities. This is not the physics we know, taught at school, or applied in real life every day. Air is a mixture of gases and behaves as such; it has a uniform temperature.

Climate models used to predict out 10, 30, 50, 100 years are deterministic models that do not perform when they are back-fit to actual data. The models leave out important natural variables a) intentionally, in an attempt to satisfy the predetermined conclusion that the vast majority of global warming is due to man-made greenhouse gases; and b) because of a lack of understanding of the direct and interaction effects of natural causes. When you toy with the science you end up spending a huge amount of time debating and justifying everything. A real pity, and in my opinion a waste of time and money for basically political interests.

Another example: the current administration’s “made up” social cost of carbon, which was increased by 50-80% over previous estimates. Without going into the meaning or significance of SCC, look at the opening statements of the administration’s 2013 report, “Update of the Social Cost of Carbon for Regulatory Impact Analysis Under Executive Order 12866”: “Under EO 12866, agencies are required, as permitted by law, ‘to assess the costs and the benefits of the intended regulation and – recognizing that some costs and benefits are difficult to quantify – propose or adopt a regulation only upon a reasoned determination that the benefits of the intended regulation justify its costs.’” Reading the report, it is impossible to determine the basis or justification for the huge increase in SCC, leaving one to conclude it is purely a tool to be used for political purposes, to maximize the amount of climate regulation the administration can achieve.

      • Wow, they were wrong and did it all for just power and OPM.

      • One day they might set up a Social Cost of Smoking Fund.

        http://www.taxpolicycenter.org/statistics/tobacco-tax-revenue

With the taxes smokers have paid for the privilege of fouling other people’s air, their segment of the population has in effect pre-paid their medical costs. Yet I don’t know of a program that gives free medical care to known smokers (all the doctors need to do to confirm their status is check the person’s teeth). Where is the love for smoker types to get a fair shake in this world we all live in?

That’s the problem when government takes responsibility for the welfare of the individual. It creeps, as socialism must. Once they supply health insurance, they will demand you do this or that to reduce their cost. Individual responsibility avoids that problem. But now the populace has been fooled by the illusion that other people’s money will take care of them. It won’t and can’t.

      • “Climate models used to predict out10, 30, 50, 100 years are deterministic models that do not perform when they are back fit to actual data.”

        http://arxiv.org/pdf/1409.0423.pdf

        http://iopscience.iop.org/article/10.1088/1748-9326/9/2/024009/pdf

        http://www.met.reading.ac.uk/~ed/bloguploads/cmip5_hadcrut4_comparison.png

      • Did you know this, Steve?

        http://www.magicc.org/

You never said, and I had no idea. It’s magic or scripture – what’ll it be?

      • Mosher,

        Where’s the data and code as run that produced your chart?

        Andrew

      • Bad Andrew,

        Well, I did see that one coming :O)

JCH, please refer to

        Bad Andrew | April 8, 2016 at 6:24 pm |

The name on the plot is Ed Hawkins, who has produced plots that Judith has used before.

      • JCH,

        You have shown an indecipherable mess. There appear to be over 150 models involved.

        Which, if any, of this conglomeration is correct enough to be of any use at all?

        All? None? Is it your belief that averaging a bunch of incorrect nonsense will lead to truth? What is the point of having projections, if they are completely and utterly useless?

        With respect, about the only benefit of preparing graphs like these is that it keeps some people off the street, (albeit at significant cost to the taxpayer), in the academic equivalent of a sheltered workshop.

        Cheers.

      • They’re all useful. If only one is correct, it would be called a spaghetto graph.

      • Flynn – I just checked all those models and figured out which one is correct. Go 2050 and put your finger on the screen. Then go straight up to the highest piece of spaghetti. Haha, it’s warmest one. No conspiracy. It just turned out that way.

      • They’re all useful. If only one is correct, it would be called a spaghetto graph.

        It’s most likely that none of the runs are correct. Still useful?

      • JCH,

At the risk of appearing anti-Warmist, may I point out you wrote –

        “They’re all useful. If only one is correct, it would be called a spaghetto graph.”

Warmists are not alone in conflating “useful” with “correct”, but they do it far more often. As to your “spaghetto” remark, you could have quit while you were ahead (or behind, depending on your point of view).

        However, you then followed up with –

        “Flynn – I just checked all those models and figured out which one is correct. Go 2050 and put your finger on the screen. Then go straight up to the highest piece of spaghetti. Haha, it’s warmest one. No conspiracy. It just turned out that way.”

        You are getting confused, I think. You mention going to 2050, and saying “it just turned out that way.” With respect, you may have overlooked the fact that 2050 hasn’t actually occurred yet. I understand that to Warmists, the past, the present, and the future are as one, and are all treated similarly within the Warmist Church of Latter Day Scientism.

        Real scientists use real physics. Climatologists use RealClimate physics. Warmists cannot distinguish between the two, and get terribly confused trying to differentiate fact from fantasy as a result.

        Keep going with the crayons and bright colours if it keeps you happy. There don’t seem to be many adverse side effects associated with happiness.

        Cheers.

      • If the “skeptics” had a model from 1850, it would be off the bottom of the plot because they can’t account for the near 1 C rise we have already seen, but they would find a way to defend it as superior in some way.

      • Jim D,

        You wrote –

        “If the “skeptics” had a model from 1850, it would be off the bottom of the plot because they can’t account for the near 1 C rise we have already seen, but they would find a way to defend it as superior in some way.”

I’m not a skeptic, more of a non-adherent to the Warmist faith.

        Before I accept what you say, you will need to provide some definition of the “near 1 C rise” to which you refer. What 1850 temperature are you talking about? Antarctica? China? Maybe South America? Are you talking about surface temperature of the Earth, or supposed air temperature? What is it supposed to represent?

        You refer to a “near 1 C rise”. How accurate is your 1850 temperature, or are you just making stuff up, for a bit of fun?

As to non-Warmist explanations for observed temperature rises between, say, 1910 and 2010, a few spring to mind. The main one, of course, is that 7 billion people create far more heat than 1 billion people. Coupled with far greater per capita heat generation, it is obvious that thermometers will provide a higher reading when subjected to greater amounts of heat above environmental ambient.

And of course, this is observed in practice. This is called Anthropogenically Generated Warming, not to be confused with the rather silly Warmist Anthropogenic Global Warming, which appears to relate to some magical properties supposedly possessed by that wonderful plant food, CO2.

        But keep drawing pointless graphs. I’m not sure what the point is, and I’m pretty sure that you don’t know either. If you ever find some use for these bizarre concoctions, let somebody know – maybe folding paper versions multiple times could assist in stopping wonky cafe tables from rocking. What do you think?

        Cheers.

Mike, in case you missed it: perhaps a 1 C rise is a shock to you, so sit down before reading this.
        http://www.bbc.com/news/science-environment-34763036

      • jimd

Remind me again of the actual years that comprise this perfect, non-AGW-contaminated decade we want to try and return to?

        tonyb

      • Jim D,

        From your link –

        “For researchers, confusion about the true level of temperatures in the 1750s, when the industrial revolution began and fossil fuels became widely used, means that an accurate assessment of the amount the world has warmed since then is very difficult.

        To get over this problem, the Met Office use an average of the temperatures recorded between 1850 and 1900, which they argue makes their analysis more accurate.

        This is the first time we’re set to reach the 1C marker and it’s clear that it is human influence driving our modern climate into uncharted territory
        Prof Stephen Belcher, Met Office

        Their latest temperature information comes from a dataset jointly run by the Met Office and the Climatic Research Unit at the University of East Anglia.”

        Unfortunately, the Met Office has no more clue than you. You will notice the usual Warmist assertions, without any facts to back them up. At least the Australian Bureau of Meteorology declared that official temperatures prior to 1910 were unreliable – and the World Meteorological Organisation apparently agrees.

So that’s the Australian continent out of your calculations. And Antarctica. And most of sub-Saharan Africa. And most of South America, China, Russia, the Middle East, and the Far East. So what about Mongolia, Siberia, Alaska, the US Indian Territories, Canada and Central America?

        But wait, there’s more! The majority of the Earth is covered by water, and even the Met Office would not be silly enough to claim they had accurate near surface air temperature records over this area, from 1850 to 1900. On the other hand, if you claimed the Met Office really is that silly, I might agree with you.

        Wriggle, wriggle, little Warmist worm – time might appear to be running out for you! What do you think?

        Cheers.

  19. I’ll start with the third question first. When selecting figures to use in presentations or testimony, I am looking for the most credible figures to use. I use figures from the most recent IPCC assessment where possible. A second choice is a figure from the published literature.

What am I missing here? Christy’s chart used real, documented data from official databases. He used documented model output.

That is all that is needed to validate the chart. It does not matter whether the chart has been published anywhere; the data and model output have been published.

The alarmists get away with total fabrication, while the skeptics must only produce charts that have been blessed by alarmists.

Christy’s chart was presented to us by Christy in Houston, several years ago. They want it gone because it makes the alarmism look totally stupid.

I would keep it and point out that the stuff presented on the chart is published, peer-reviewed information. You could use the same published, peer-reviewed information and make your own chart. John Christy would likely support anyone doing this. He only wants the truth out.

  20. Oh, yeah, I remember these.

I cut and pasted these two charts graphically to the same scale – they’re largely the same (red and black traces):

    http://climatewatcher.webs.com/UpperComparison.png

I remember you doing that. Distorted axes were among the examples in The Arts of Truth. Good job.
Gavin’s whining does not change the underlying data. Anomalies versus a 5-yr moving average change a little of what gets graphed from that data; hence your minor differences.

21. The two CMIP5 graphs are in large part hindcasts, not forecasts. For comparison with actual observations, shouldn’t one use only model forecasts? (Anyone can tweak a model to match known temperatures.)

That does seem to be a big problem with all the forecasts: we’re constantly resetting the starting line and the race.

    • Mike,

      That is, imo, the key point. The useful “model forecast vs. observations” test is past model runs vs. observational data from after that date.

Successful hindcasts are useful, but too low a bar to clear for making vital public policy decisions. Climate science is a rarity in that hindcasting is accepted as a definitive test of forecasting skill.
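The distinction is ordinary in-sample versus out-of-sample testing, and a toy version fits in a few lines. A minimal Python sketch on synthetic data (illustrative only; the cutoff, the series and the “model” are all invented): calibrate on everything before a cutoff, then score the “forecast” only on what came after.

    # Sketch: hindcast vs forecast skill. Fit on data before a cutoff year,
    # then evaluate only on data after it. Synthetic series, illustrative.
    import numpy as np

    rng = np.random.default_rng(3)
    years = np.arange(1950, 2016)
    obs = 0.012 * (years - 1950) + rng.normal(0.0, 0.1, years.size)

    cutoff = 2000
    train = years < cutoff

    coef = np.polyfit(years[train], obs[train], 1)  # "model" tuned on the past
    pred = np.polyval(coef, years)

    rmse_hind = np.sqrt(np.mean((pred[train] - obs[train]) ** 2))
    rmse_fore = np.sqrt(np.mean((pred[~train] - obs[~train]) ** 2))
    print(rmse_hind, rmse_fore)

The in-sample (hindcast) error is small by construction; only the post-cutoff error says anything about forecasting skill, which is the bar the comment above says hindcasting fails to clear.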

  22. “Controversy over comparing models with observations.”

    Odd, that. You’d have thought that it would have been one of the first things to sort out when the IPCC was set up all those years ago.

  23. Schmidt always comes across as smug, arrogant, and defensive… not the best attributes of a scientist supposedly seeking the truth.

    • It doesn’t really matter as far as I can tell. The main attributes of a good scientist are honesty and high intelligence, IMO.

24. Turn the histogram into a box-and-whiskers plot.

  25. “How to communicate complex data to a non technical audience?”

I would say first of all that it’s a bad attitude and a bad goal to communicate complex data. Instead, your goal ought to be to simplify the data to illustrate your points, so it is simple but still emphasizes the truths you want to discuss.

    Here are a few techniques I use.

    1) Assure your audience that this is something they can understand.
    2) Get them interested in what you are going to say, perhaps by letting them know you are going to help them understand something valuable. What are all these scientists arguing about? You too can see what’s going on underneath all the science, for instance.
    3) Keep it fun! For instance, you may have a complex graph, and say something to the effect that your friends wanted you to use something even more daunting. Not everyone is going to understand you, and so a bit of entertainment along the way will help the mood of the audience.
4) Choose one main point you want to make, explain it at the beginning of the talk, and then illustrate it with your graphs. “Remember when I said XYZ? Well, you can see it in operation right here!” This reinforces the point and the understanding all at once.
5) Above all else, keep it simple. It’s fine to leave out the caveats, unless one is central to your position. Smart people will ask in the Q&A, and that’s the time to unload the details and a bit more depth.
6) Keep the number of moving parts small.
    7) Keep your slides simple. People should be looking at you most of the time, unless you are explaining something about a graph. Never read bullet points from a slide! If you are going to share your slides, you can add text and graphs as backup slides.
    8) Simple slides, keep number of different views on the same topic to a minimum, spend the time to add history to the source of a graph, with some interesting anecdotes. Such as, “It’s an amazing thing you can figure out temperature from a satellite, here is the technique some smart people figured out to do it.”

26. I really like the histogram; it conveys exactly the point I wanted to make, although explaining it to a nontechnical audience in ~1 minute is pretty hopeless. I also like Ed Hawkins’ figure 11.25.

If you really like the histogram, present it, give it an extra 15 seconds, and count on someone in the audience being able to understand it. I think it complements the other graphs nicely, but the information is more compactly presented.

    The audience is always heterogeneous, in my experience.

  27. JC responds: @ClimateOfGavin I’ve decided to leave out that figure, I haven’t seen a figure that I am confident of using at this point

Personally, I do not see any superiority of Gavin Schmidt’s plot over John Christy’s plot. I do think that there should be at least one time series plot before the histogram, to illustrate the data summarized in the histogram (i.e., the slopes of the linear approximations). And I think it is better to display all the model runs as individual lines, instead of as a grey area.

Well, until it gets fixed, RSS has nothing useful to say about surface warming, so neither graph has value. UAH will die with their retirement. That’s a suitable end for that mess.

      • JCH: Well, until it gets fixed, RSS has nothing useful to say about surface warming, so neither graph has value.

        The CO2 theory makes lots of predictions, not just about surface temperature.

        But I would like to see more focus on the surface.

Well, you reveal your true colors with that comment. Mess? Only because it has been ground-truthed with radiosondes at nearly all latitudes and altitudes – something factual that warmunists like you evidently cannot stand. Bring more BS on.

28. Arguing about the decimals takes attention away from the elephant (or a herd of them) of other model-observation differences. How about showing the Mauritzen plot? According to some state-of-the-art models we crossed the catastrophic 2C line centuries ago, while some of them predict that we’ve been living in little-ice-age-like conditions all along. Is it meaningful to argue about the trends of obviously bogus temperatures?

Speaking of hiding uncertainties: observational values of many other key diagnostics of the system have uncertainties an order of magnitude larger than the inferred anthropogenic imbalance, the effect of which the models are supposed to quantify. Some of these are mentioned in Figure 2.11 of WG1 AR5 ch. 2. Doesn’t that mean that if you account for all uncertainties, there is no “result”?

  29. People apply all sorts of adjustments to many of these data sets. Let the baseline be the mean of all readings. Sure, it will change. So what?
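Whichever baseline is chosen, it only shifts an anomaly series up or down; the trends this whole thread is arguing about are untouched. A minimal Python sketch on synthetic data (illustrative only, not any of the real series):

    # Sketch: changing the anomaly baseline shifts a series vertically
    # but leaves its trend untouched. Synthetic data, illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1979, 2016)
    temps = 0.015 * (years - 1979) + rng.normal(0.0, 0.1, years.size)

    def anomalies(series, mask):
        # anomalies relative to the mean over the masked baseline period
        return series - series[mask].mean()

    a_all = anomalies(temps, np.ones(years.size, dtype=bool))     # mean of all readings
    a_8090 = anomalies(temps, (years >= 1980) & (years <= 1990))  # 1980-1990 baseline

    print(np.polyfit(years, a_all, 1)[0], np.polyfit(years, a_8090, 1)[0])  # identical slopes
    print((a_all - a_8090).std())  # ~0: the two series differ only by a constant

Both fitted slopes print identically, and the two anomaly series differ only by a constant, so the baseline changes the visual offset and nothing else.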

  30. Re the old surface records:

    How do they know a maximum temp wasn’t lower just because of cloud at potential max time? How do they know a minimum temp wasn’t higher just because of overnight cloud? How do they adjust if they do know?

    I know…I know…Cloud is a total game killer, so don’t talk about it. It’s the Greenhouse Effect which dares not speak its name.

    – ATTC

    • Hi Moso. I agree with you on most things but with clouds I remain a bit unclear (jokingly). Just what are the daily temperature observations supposed to be measuring anyway? The average day or night? Or is it only meant to be measuring clear days or nights? What about dust storms, smoke from wild fires, smog, fog and the odd volcano blowing out ash? All this stuff is going on at the local level, where the measuring stations are measuring, so any artifact such as global average temperatures must surely only exist in some people’s minds!

      • “Just what are the daily temperature observations supposed to be measuring anyway? The average day or night? Or is it only meant to be measuring clear days or nights? ”

Measurements come in many varieties: by the minute, by the hour, 3 times a day, 4 times a day, twice a day.

For the most part, long daily series use tmin and tmax, and then the average of those two.

        If the station has cloud data you could do clear sky.

But for the long-term averages we just take daily tmax and tmin and compute tavg.

      • M-twin, where I live, if the clouds drift in, you can get a winter night which is ten – yes ten! – degrees warmer than it would have been without cloud. And that’s centigrade! Zero has become eight, or even ten. And often after I’ve collected firewood for a real chiller!

        And to think this sort of distortion is going on all the time, right across the world in those very places most likely to have thermometers.

        This is why I think min/max is one of those indicators we use for lack of anything convincing. The measurements can be exact, there can be a lot of them, but they indicate nothing solid and consistent. You may have measured something interesting about temps or you may have measured…well…just a bunch of cloud.

        1950, when so many Eastern Australian places had “record” high minima, was not a warm year. It was a wet year, and freakishly wet through the normally drier winter-spring months. And guess what the winter-spring maxima were like in 1950: not very recordish at all. Whatever the sun or planet had in mind, cloud decided on lots of low maxima and high minima. Yet when the numbers are processed for 1950, the greedy maw of Statistic doesn’t care what it’s digesting. It just wants its numbers to gobble.

        And don’t let’s pick just on min/max. Even a progressive measurement of temps may fail to tell you anything worth knowing about climate in many places and years. It’s just that min/max tells even less of a story.

        I’m not saying min/max is wrong or uninteresting, merely that it is bunk when interpreted and extended.

        And kindly go back to agreeing with me!

      • M-twin, where I live, if the clouds drift in, you can get a winter night which is ten – yes ten! – degrees warmer than it would have been without cloud. And that’s centigrade! Zero has become eight, or even ten. And often after I’ve collected firewood for a real chiller!

They came in a little later last night, after my first -70F clear-sky measurement (118 W/m2); this morning when I measured, the heavy overcast was 29F (292 W/m2) – a 100F swing in temperatures. This is what the surface radiates to.

        And don’t let’s pick just on min/max. Even a progressive measurement of temps may fail to tell you anything worth knowing about climate in many places and years. It’s just that min/max tells even less of a story.

Here I strongly disagree; it is the only useful measurement of temperature we have. Its value comes down to two points:
1) It’s the only measurement (station daily min/max) that you can actually compare where instrument error is as small as possible, since both measurements will have the same error.
2) It is sampled at a constant daily rate, which allows you to use it as a derivative.

Since the ratio of day to night changes in the extratropics, this is a direct measurement of the effects of solar forcing.
Then you can also use them to see the annual rate of temp change, both min and max, and they respond differently.
But the daily change in min temp should have a positive trend with an increase in CO2 forcing, right?
Here are the handful of surface stations in the US SW desert, a plot of daily average min and max temp:
        https://micro6500blog.files.wordpress.com/2015/09/us-sw-risingfalling.png

Rate of daily change for min and max for the NH as the length of day changes (warm is fall to spring, cool is spring to fall, which matches the daily change; the max rate of warming is spring in the NH).
        https://micro6500blog.files.wordpress.com/2015/04/nh.png
        The drop at the end of warm is a half year of data, and should be ignored.
        SH
        https://micro6500blog.files.wordpress.com/2015/04/sh.png
        Since I only use the actual measurements, some areas are undersampled, and can have large rates of change due to this.
        https://micro6500blog.wordpress.com/2015/11/18/evidence-against-warming-from-carbon-dioxide/

I didn’t say that I disagreed with you on clouds, Moso, just that I’m confused as to what is supposed to be measured. Lots of measurements – hell yes!

However, these measurements IMO are just measuring what there is, so that, indeed, if there is a prevalence of cloudy conditions around the world these measurements will show up as cooler during the day and warmer at night for those particular areas.

Hence any average temperature movement may well not be much different from an average calculated from clear-sky temps. E.g., (5degC + 29degC) / 2 = 17degC, as opposed to (15degC + 19degC) / 2 = 17degC in the hypothetical case of cloudy conditions, which means that the average hasn’t changed much, if at all.

Now, don’t get me wrong, I’m quite happy to accept that the world has warmed a bit overall, though maybe not as much as in previous warm phases. (I don’t really know; I just know nobody questioned Viking and Tang history till it became politic to question them.)

        And I acknowledge that in recent times many glaciers are melting and dust has been blasted out of Australia’s centre by El Nino events. But how else does the ocean get its iron to do its cooling trick if it’s not getting it from glaciers and deserts? In short, climate change is a fast and marked phenomenon which I have no interest in denying.

I just think that “fast and marked change” has likely been the rule even in the very recent geological past (hence we don’t stroll along Bass Strait to Tassie any more) and that averages are pretty average factoids, since change is both cyclical and linear. Even average rainfall doesn’t tell you enough about how the rain fell: one impossibly wet day in 1963 made a whole year in my region “wetter” than it really was. When someone averaged out “Australian” rainfall for that year, that one day here gave improved rainfall to a dry knob two thousand miles to the west. Not because any rain fell out there, but because we’re all Aussies!

        But if one is going to take an average of anything to establish anything, one has to do better than min/max, which is breathtaking in its superficiality. If you take min/max and collate it with rainfall figures, old reportage of wind direction etc and chuck in some anecdote, the superficiality becomes less breathtaking. But Big Stat just wants its hearty breakfast of numbers, doesn’t it?

    • “How do they know a maximum temp wasn’t lower just because of cloud at potential max time? How do they know a minimum temp wasn’t higher just because of overnight cloud? How do they adjust if they do know?”

Neither matters. We don’t adjust for those.

Very simply: the temperature at any location is spatially modelled as a combination of two terms:

T = C + W (Temperature = Climate + Weather)

The climate is determined by a regression:

C = f(Alt, Lat, Time)

That is, the temperature at every location can be expressed as a function of the altitude of the station, the latitude of the station, and the time or season.

This regression explains over 90% of the variance in monthly average temperature, such that you can subtract the climate from the temperature like so:

T - C = W

In other words, the residual – the part of the temperature not explained by latitude and altitude – is the WEATHER.

      What does the weather contain?

It contains all those other factors (the remaining 10%) that affect the temperature: clouds, rain, lake effects, circulation, the land class – all those things that change over time.

If a place were always cloudy, then you would see it in the weather field. If there is any persistent structure in the weather field, then there are climate factors unaccounted for in C = f(Alt, Lat, Time).

To date we haven’t found any persistent structure in the weather fields. Distance from the coast has a small effect in certain seasons, and there are some locations where inversions give the wrong answers during certain seasons, but these effects are small, isolated, and very difficult to remove. There still may be some UHI in the weather field; work continues.

      • It’s been warming for some number of centuries it would appear. I’m not sure why some people get so upset over the various data sets.

        The problem with lower trop vs surface is that trends are damped up to an altitude where they are almost flat. Above that they are inverted. A simple linear lapse rate does not account for that behavior.

      • @Steven Mosher

        T = C+ W Temperature = Climate + Weather

        and

        C = f(Alt, Lat, Time)

        A minor quibble…or meander.

Climate temperatures at a location are similar in magnitude to weather temperatures, and fluctuations in temperature (or averages of those fluctuations) are significantly smaller than either of the two. The breakdown and parameterization have been stated any number of times, but I wonder about them – not what is parsed out, but how you and others characterize the parsed components. There clearly are other factors that impact climate in a major way, e.g., adjacency to large bodies of water and land, and prevailing winds certainly impact climate (and weather). Does the coast of Washington state have the same climate as the coast of New England? (They both have roughly the same latitude and the same elevation.) Also, weather (temperatures) are not just the fluctuations around climate (temperatures), not just the residuals. It is easy enough to show that weather also depends on the parameters {Alt, Lat, Time, …}. Thus I find that accepting a “weather” that is the residual – the part of the temperature not explained by latitude and altitude – opens the door for confusion.

Also, if climate is a function of {Alt, Lat, Time}, then given the long times for changes in altitude and latitude, climate change must be occurring solely as a result of the march of time – or there are other factors, i.e., physical variables, coming into play, and that makes the proposed breakout awkward at the conceptual level. (Latitude and altitude are parameters, not variables, and are essentially constant. More on that below.)

The mathematical breakout that you give is useful in estimating changes over the long term – IMO no doubt about that – but I suspect the terms could be better named and explained than they have been to date. There is a little slop there… it has always bothered me a little.

IMO the key differences between climate and weather lie in scales – time scale and spatial scale. Climate parameters are more like renormalized or upscaled weather parameters. Again, I think it is also important to keep in mind that entities such as latitude, time, and elevation are parameters in the models and not canonical variables for the physics that quantifies both climate and weather. In particular, the variables that drive change over time impact both climate and weather, and this is very much in contrast to altitude and latitude.

Sorry if this is a little scattered… a lot of irons in the fire.

        Regards,
        mw

      • Ya mw.

I don’t disagree with anything you wrote.

      • Re – Jim2
        “It’s been warming for some number of centuries it would appear.”

        Considering the thermometer was invented during the LIA, that is no surprise :-)

Thanks, Steven Mosher, for the heads-up on temperature measurement. It seems weather stations can do little other than simply record the maxes and mins, and the other effects from cloud, rainfall etc. are encased in the averaging process.

Err, no.
If you want to, you can select stations that record by the minute, hour, etc.
And stations that record wind, clouds, etc.

But if you want to create a long estimate, then you are constrained to the lowest common denominator.

Either way, focusing on clouds won’t change the record.

It is getting warmer. Unless you want to deny climate change?

Well, do you?

I agree that it is getting warmer, Steven. The last 200 years have trended warmer, but whether it has been catastrophically so is moot.

I meant to say that weather stations generally can only be aggregated by the min, max and average readings, because the ones that record other effects such as cloud, precipitation etc. are too few and dispersed for any meaningful trends to be drawn.

    • Across the great continents
      drifting shadows brush the plains
      with fugitive mist. Distant
      mountains rim the sky that lifts,
      across latitudes, from sombre
      indigo to brilliant azurite.

      Earth is the water planet,
      viewed from space,
      like a snapshot from the gods,
      a shimmering orb
      netted in a cloud haze.

31. The plots that I prefer for models versus observations are these. It is long-term, simple surface temperature (not some satellite-derived thing), it shows most of the temperature rise, and it shows how models with no anthropogenic forcing just can’t do it.
    http://www.skepticalscience.com/pics/meehle_2004.jpg

  32. A conservation of energy equation, employing the time-integral of sunspot number anomalies (as a proxy) and a simple approximation of the net effect of all ocean cycles achieves a 97% match (R^2=0.977) with measured average global temperatures since before 1900. Including the effects of CO2 improves the match by 0.1%. http://globalclimatedrivers.blogspot.com shows a graph with superimposed measured and calculated average global temperatures.

33. The entire discussion is on the wrong track. If you pick the TMT, you are looking at solar near-IR-energized water, particularly in the tropics, where water is convectively levitated efficiently by the moist adiabat, kinetically lighting up CO2 beginning at about 5 km.

    You are NOT seeing surface OLR. That has been extinct for several kilometers of lapse. You are seeing DIFFERENT energy.

Look closely and you will see DIFFERENT energy again, which far more strongly kinetically energizes CO2 beginning at about 10 kilometers, as ozone-absorbed UV inspires the dioxide of carbon to dance.

    Both the water and the ozone resonances with CO2 are catalyzed by the following harmonics with Nitrogen:

    https://geosciencebigpicture.files.wordpress.com/2015/12/image3-credit-phil.jpg

    The fundamental misconception is that there is any kind of direct energy transfer between the surface and space in the “Q” primary bending bands of CO2; or the weak constructive and destructive rotational bands on either side of it.

    Purely and simply wrong.

    https://geosciencebigpicture.files.wordpress.com/2016/03/gordon-on-nasa.png

  34. 1. “I have to say that I think John Christy’s figure is more reliable, although some additional thought could be given to how to define the beginning reference point to eliminate any spurious influence from El Nino or whatever.”

Does that mean you should leave out the current “spurious influence from El Nino”?
No.
The data is the data.
Do not hide it or choose parts.
Press on ahead, full steam.
El Nino emphasizes how much natural variability there is and how much uncertainty there is.
Gavin does not like El Nino?
Tough. Hit him between the eyes with it until he comes up with acceptable explanations.
Besides, you are not starting in 1998, are you?

35. The only model v reality comparison that needs showing is the Lower Tropospheric Hotspot.
Models – Hotspot.
Reality – No Hotspot.
Game over.

    • Lol.

The hotspot is not restricted to GHG-forced warming. Your game-over argument must imply that if there is no hotspot then there was no warming at all after 1980. That is complete denial.

Besides: the hotspot is observed in both radiosondes and MSU. RSS4, STAR, and Po-Chedley have a higher TMT trend in the tropics than the surface. The odd man out is UAH.

You can tell there are no activist scientists at UAH, because no activist scientists would spend so much time with one side of the aisle in Congress. That is not the stripe of an activist; it’s the stripe of a nonpartisan truth teller: best data we have, etc.

Besides: the hotspot is observed in both radiosondes and MSU. RSS4, STAR, and Po-Chedley have a higher TMT trend in the tropics than the surface. The odd man out is UAH.

Well, since 1958 there is a hint of a Hot Spot in the RAOB (RATPAC) data, with the caveats that instrumentation has changed a lot since then and that 1958 was in the middle of a cooling period. And there is also a hint of a Hot Spot if one excludes the Eastern Pacific (which has had a cooling trend since 1979).

But globally since 1979, RATPAC, UAH, and RSS4 agree fairly closely that there is no Hot Spot:
        http://climatewatcher.webs.com/HotSpot2015.png

        What does that mean?

        It doesn’t mean RF is unreal or ineffective.

But it does mean that GCM model runs cannot predict the atmosphere for even a third of a century. Do we have faith that they can predict for a full century? Why?

        It may also mean that the surface warming may decrease when or if the Hot Spot does appear, since the Hot Spot reduces RF by increasing net radiance to space.

      • The models have predicted some of the trends observed:
        Cooling stratosphere, Arctic maxima, and surface warming.

Eddie of course missed the satellite data with a higher TMT trend than surface trend.

        Willfully?

Ratpac A, 850-300 hPa, tropics, after 1979: 0.22 C/dec. Clearly higher than the surface.

Nevertheless: what does Eddie think his imaginary missing hotspot means? No warming? Less negative feedback from evapotranspiration?

Eddie of course missed the satellite data with a higher TMT trend than surface trend.

        That would appear to be incorrect, certainly not globally.

(v4) RSS MT is lower than RSS LT, which is lower than Surface.
(v6) UAH MT is lower than UAH LT, which is lower than Surface.
(And all are less than the 2.0 C/century rate the IPCC AR4 promised):

        http://climatewatcher.webs.com/SatelliteEra.png

Ratpac A, 850-300 hPa, tropics, after 1979: 0.22 C/dec. Clearly higher than the surface.

        That would also appear to be incorrect.

The RATPAC-A levels aloft are clearly lower than the surface and 850 mb:
        http://climatewatcher.webs.com/SatelliteEraRatpacA.png
        http://www1.ncdc.noaa.gov/pub/data/ratpac/ratpac-a/RATPAC-A-annual-levels.txt

Nevertheless: what does Eddie think his imaginary missing hotspot means? No warming? Less negative feedback from evapotranspiration?

As I wrote above, the missing Hot Spot doesn’t controvert AGW.

        But, it may mean we’re living a lie believing that GCMs can predict climate.

The GCMs are, as I understand it, hydrostatic, which means they can’t resolve important things such as cold fronts. Well, the ITCZ, which is thought to produce a Hot Spot, is the result of very much modified cold fronts impinging on the tropics from each pole. Is it any wonder that there might be errors in trying to assess changes in convective heat transfer when we can’t resolve the processes which lead to it?

        Why should we believe we can predict climate, when it is not possible to predict the units of action (weather) which constitute climate?

      • Eddie’s tropical hotspot suddenly became a global phenomenon.

        Why?

        Desperately shifting goalposts.

Of course TMT is lower than TLT. It always has been; it is more influenced by the cooling stratosphere. So what was Eddie’s point?

The next error from Eddie is of course his Ratpac A. It still has this hotspot. That still is in the tropics.

        Misinforming crusade Eddie?

The next error from Eddie is of course his Ratpac A. It still has this hotspot. That still is in the tropics.

        Nope.
        New Plot, RATPAC-A, Middle-Left.
        For the MSU era, there is no Hot Spot.

        http://climatewatcher.webs.com/HotSpot2015A.png

  36. I would use two graphs in order to represent both sides of the debate. You have some inane comments from twitter that you can quote for some of the background and I’m sure you can provide some solid reasons for the differences between the two views. Good luck!

37. There’s also the mathematical fact that “trend” is a technical term tied to the choice of basis for curve fitting; it does not come from the data but from the person doing the fitting.

In particular, the data does not say it’s a trend as opposed to a piece of a 500-year cycle.

    Trend should be read as “trend at the moment.”

38. The first two graphs are more alike than they look:
https://roskasaitti.wordpress.com/2016/04/06/ilmastonmuutos-ja-ilmastomallien-osuvuus/

If you put them on the same scale, there is a lot more that is the same. Sorry for my English.

    • You’ve tamed the optical delusion.

Now I see that Turbulent Eddie has done much the same before me, above. One is more sure when two attempts get the same result, I think.
And it is indeed mostly an optical thing that makes them seem so different.
In my blog I made some further simplifications along the same lines as G.S. (-;

    • Very good.

      Just like politics, people argue about imaginary differences.

39. Replace the mean and trend with the cosine and sine of a 500-year cycle (pick any phase, but 0 at the center of the data is fine), and find that the fit looks the same.

500 years being any period well longer than the data.
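A minimal Python sketch of that substitution, on a synthetic 37-year series (illustrative only): with the phase set to 0 at the center of the data, the cosine term is nearly constant and the sine term nearly linear over so short a window, so the two fits are practically indistinguishable.

    # Sketch: fit the same short series with (a) mean + linear trend and
    # (b) cosine + sine of a 500-year cycle. Synthetic data, illustrative.
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(1979, 2016, dtype=float)
    y = 0.015 * (t - t.mean()) + rng.normal(0.0, 0.1, t.size)

    def fitted(y, basis):
        coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
        return basis @ coef

    period = 500.0  # years; any period well longer than the data works
    linear = np.column_stack([np.ones_like(t), t - t.mean()])
    cycle = np.column_stack([np.cos(2 * np.pi * (t - t.mean()) / period),
                             np.sin(2 * np.pi * (t - t.mean()) / period)])

    rms = lambda r: np.sqrt((r ** 2).mean())
    print(rms(y - fitted(y, linear)), rms(y - fitted(y, cycle)))  # essentially equal

The two residual errors print essentially the same: a few decades of data cannot distinguish a linear trend from a small arc of a much longer cycle.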

40. Judith – But what is really striking about this essay is the refreshing ‘heresy’ of it, something that has been far too rare in the community of climate scientists who are operating under a self-imposed consensus, not only about climate science but also about the policy options.

It does raise the question of what “expertise” climate scientists have as regards the policy options, and why their views on the policy options (other than the impacts of CO2 increases or reductions) should enjoy any privilege.

  41. The Christy plots demonstrate that there are huge problems with the theory that CO2 is the control knob that controls climate. The models consistently exaggerate the impacts of CO2 emissions.

  42. A model should specify how it is to be tested/validated.

  43. Reaction from a non-technical person ….
    the science doesn’t look settled
    so
    I don’t trust the ones who keep telling me it’s settled.

    • and
      I’ve read Roy Spencer’s blog plenty
      and as Judith says, he’s author of one of the main data sets
      so disparaging him is truly lame
      another seed of my distrust

44. I have a lot of respect for John Christy; he is a serious scientist trying to do his job objectively and transparently.

However, his running averages are terrible (as are most running averages). This site offers a proper low-pass filter and compares it to a similar 5-year running mean of RSS data:

    http://csens.org/index2.php?xakse=500&yakse=500&plot_to_margin=70&f2antkilder=3&antadded=3&gruppe1=satellite&kildefil1=RSS_Monthly_MSU_AMSU_Channel_TLT_Anomalies_Land_and_Ocean_v03_3.txt&region1=Global&fromval1=&toval1=&operasjon11=-&value11=&operasjon12=-&value12=&linecolor1=blue&linewidth1=2&gruppe2=satellite&kildefil2=RSS_Monthly_MSU_AMSU_Channel_TLT_Anomalies_Land_and_Ocean_v03_3.txt&region2=Global&fromval2=&toval2=&operasjon21=blackman&value21=50&operasjon22=-&value22=&linecolor2=red&linewidth2=2&gruppe3=satellite&kildefil3=RSS_Monthly_MSU_AMSU_Channel_TLT_Anomalies_Land_and_Ocean_v03_3.txt&region3=Global&fromval3=&toval3=&operasjon31=mean&value31=60&operasjon32=-&value32=&linecolor3=green&linewidth3=2

    I hope that crazy link works ;)

We see how the 5-year runny mean manages to turn the largest warming event in the whole series into a slight dip. Comparing to Christy’s graph, we also see the strange absence of the ’98 El Nino.

I also don’t like the padding of the start and end of the data, as noted by others. The Met Office were also doing this while it showed extra warming by extending the pre-2000 warming. Once it started to work in the other direction, they suddenly noticed that it was a data processing error.

Neither do I like running averages being labelled as “5-year averages”. They are not. Taking true 5-year averages leaves one data point every five years.
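The complaint about runny means can be demonstrated in a few lines. A minimal Python sketch on a synthetic 3-year oscillation (window lengths illustrative): a 60-month boxcar has negative side-lobes in its frequency response, so an oscillation in that range comes out inverted, while a tapered window of the same length (a Blackman, as the site linked above uses) merely attenuates it.

    # Sketch: a 60-month boxcar running mean inverts oscillations that fall
    # in its negative side-lobe; a Blackman-tapered window of the same
    # length only attenuates them. Synthetic data, illustrative only.
    import numpy as np

    t = np.arange(600)                     # months
    signal = np.sin(2 * np.pi * t / 36.0)  # a 3-year oscillation

    def smooth(x, window):
        w = window / window.sum()
        return np.convolve(x, w, mode="same")

    out_box = smooth(signal, np.ones(60))      # boxcar "runny mean"
    out_blk = smooth(signal, np.blackman(60))  # tapered low-pass

    core = slice(60, 540)  # ignore edge effects
    print(np.corrcoef(signal[core], out_box[core])[0, 1])  # ~ -1: inverted
    print(np.corrcoef(signal[core], out_blk[core])[0, 1])  # ~ +1: attenuated only

The boxcar output is anti-correlated with the input (a peak becomes a dip, exactly the inversion complained about above), while the Blackman output is attenuated but keeps its sign.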

45. Gavin Schmidt’s graphs are labelled “historical + RCP45”. Why?
Historical up to when, 1990?
http://www.metoffice.gov.uk/media/pdf/i/8/AVOID_WS2_D1_11_20100422.pdf

RCP4.5: stabilization without overshoot, pathway to 4.5 W/m2 at stabilization after 2100. Clarke et al. (2007) – MiniCAM.

Funny how alarmists always use RCP8.5 “business as usual” when they want to scream about the urgent need to cut emissions, but then use RCP4.5 when they want to pretend that the models are not too far off.

grog, “Funny how alarmists always use RCP8.5 “business as usual” when they want to scream about the urgent need to cut emissions, but then use RCP4.5 when they want to pretend that the models are not too far off.”

Yep. BTW, since climate is something greater than 5 years, using a simple 5-year moving average is perfectly kosher. Comparing 15-plus-year linear trends would be more relevant than anomalies with fudgeable baselines. However, since climate science has squat for standards, there will always be spats.

“Yep. BTW, since climate is something greater than 5 years, using a simple 5-year moving average is perfectly kosher.”

        No, runny means are crap and are very rarely valid or “kosher”. The above is a case in point where the one clear feature of the time series gets inverted.

        With the models it does not matter since the output is randomised garbage anyway, so distortions are irrelevant. All they produce is the CO2 driven warming trend. The rest is meaningless climatey fluff.

      • grog, “No, runny means are crap and are very rarely valid or “kosher”. The above is a case in point where the one clear feature of the time series gets inverted.”

If climate is a minimum of 30 years, any inversion due to a 5-year moving average is as irrelevant as the warmest-year-EVAH situation. I prefer seeing better filters used, but since there is no consistency in the data collected over time, a 5-year moving average is no cruder than the data. In fact, natural smoothing in paleo produces huge inversions, so when you compare paleo to observations you get a false sense of precision with more detailed filtering.

      • David Springer

If you see a dip in a 60-month mean where there is a spike in the monthly data, it means the dip was preceded in the recent past by depressed temperatures. If it’s a common feature then it becomes meaningful, as it points to some multi-year cool spell that is countered by a brief hot spell, where the two features are co-dependent in the physical world.

        A moving average is crap only if the viewer isn’t bright enough to understand it. Or in this case “grok” it.

      • David Springer

P.S. In this particular case, the dip in the 60-month mean overlaying a spike in the monthly data reflects how the deep warm pool in the Pacific built up over a number of years of stronger-than-usual easterly trade winds piling warm surface water against the east Asian continental shelf, then was released quickly to spread east in an El Nino episode.

The dip isn’t a distortion; it’s a red flag begging for a physical explanation like the one above.

        Write that down, Goodman.

    • not much difference between RCP4.5 and RCP8.5 until about 2040

      • Isn’t there a larger difference in the tropical middle troposphere because of amplification?

Thanks for making that point, but when spinning, every flick of the wrist counts. Every little tweak inches the data the way you want it to look.

The constant removal of inconvenient measurements and the constant adjustments all add up. It’s a “take care of the pennies…” approach.
https://climategrog.files.wordpress.com/2016/04/ar5_rcps.png

Gav knows that models don’t work, but they look a little better if we cool them by using RCP4.5.

Then he puts a uniform grey band across all model runs to hide the fact that there are very few runs that even approach the obs data.

By using a uniform block instead of a cloud of individual runs, he tries to make it look as if the results are evenly spread over this range, whereas in reality the vast majority are much higher, as we see in Christy’s graph. There the eye can judge where the models are running.

This is what Gav wants to hide from the viewer.

This, plus not using Christy’s graph.

        +100

  46. Reblogged this on Climate Collections.

47. In terms of making science understandable to the public at large, I had an interesting, revealing experience today. I bought 6 screens ($240 after a 25% discount) with a $50 down payment. It took the cashier 45 minutes to get it right, because she didn’t understand percentages; all she could do was enter numbers into the computer, of which she had no understanding. The owner of the window company was there but wouldn’t help – I am assuming he was hoping this interchange would be a useful learning experience for the employee. Very frustrating and time-wasting for me.

When I tried to get her to enter the original prices, take the 25% discount, and then subtract $50, it was far beyond her capacity.

    JD
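(For the record, the arithmetic in question is two steps: a 25% discount leaves 75% of the original price, so the original price was $240 / 0.75 = $320; subtracting the $50 down payment from the discounted $240 leaves $190 owing. This assumes the $240 is the post-discount total.)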

  48. Steve McIntyre

    Gavin Schmidt’s own diagrams are frequently designed to do exactly what he accused you of. His diagrams disguise and understate the very real differences between observations and models.

    Here is a diagram that shows relevant detail not shown by Schmidt – the comparison between models with multiple runs to observations. I’ve grouped singletons into one group. Nic Lewis has used a version of this graphic. I’ve done this for TRP, but will try to do GLB some time in the next day or two. For nearly all models, there is a striking discrepancy between models and observations.

  49. Steve McIntyre

    The diagram is at http://www.climateaudit.info/images/models/cmip5/boxplot_TRP_tlt_1979-2016.png. My attempt to embed the link didn’t work.

  50. Berényi Péter

    Comparing global average temperature anomaly data sets to computational climate model projections is pointless.

    Compare regional stuff instead.

The broadest possible regional time series are interhemispheric differences of various parameters (like reflected shortwave or emitted longwave radiation at the ToA). Computational climate models fail miserably in this respect.

Differences are preferable because shared systematic errors at least cancel this way.
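The cancellation is easy to see: if both hemispheric measurements share a common systematic bias b, then (NH + b) − (SH + b) = NH − SH, so the shared bias drops out of the difference; only a bias that differs between the hemispheres survives.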

51. The use of tweets to discuss differences in opinion on data, results and data treatment is so ludicrous and childish. And Gavin’s use of internet abbreviations/acronyms is the juvenile icing on the cake. Gavin said @curryja: “The only place Christy has ‘published’ his figure is in his testimony to Congress. You think that isn’t political?” … and … “Use of Christy’s misleading graph instead is the sign of a partisan, not a scientist. YMMV.”

How cute is that? YMMV, “your mileage may vary”, meaning what? Judith, your take on the world may be different than his? Or… Judith, your world view is different than mine… or Judith, your worldview is stupid wacko wrong… idiotic. I don’t know him, but a polite interpretation would be that he has a closed mind and doesn’t accept anyone else’s opinions. An impolite interpretation is… _______ (I’ll tell you that in private).

And my question to Gavin is: if John Christy tried to publish this in Science or Nature, what would reviewer Gavin comment to the author?

52. In defence of Christy’s plot I refer you to the very new paper Fyfe et al., Nature Climate Change, vol. 6, March 2016, p. 224. John Fyfe at the University of Victoria, BC, achieved the most extraordinary feat of scientific diplomacy, gathering together a list of mainstream climate change scientists as coauthors and analysing a collation of both satellite data sets and four surface data sets. He reached a sober, objective conclusion that there has been a slowing in the rate of warming relative to models (argue the semantics as to whether the slowing is a pause or hiatus, if you like).

Two points of outstanding significance to me:
1) Fyfe fig. 1 shows plots of four surface temp data sets versus an average of CMIP5 models. The difference in trends between the two is very clear. While Fyfe does not spell it out, the difference in the slopes (say HADCRUT versus CMIP5) is as near as makes no difference to the difference in the slopes between satellite data and models in Christy’s plot. As Christy says in his written congressional testimony at [ https://science.house.gov/legislation/hearings/full-committee-hearing-paris-climate-promise-bad-deal-america ], it is the difference in trends which is significant (my emphasis – it is not an issue of choice of baselines!).

    My suggestion to Judith is to use both Christy AND Fyfe fig 1 – the trend comparisons in the two graphs make the same point.

As a further thought (and an ideal backup slide for dealing with intelligent questions), refer to Fyfe fig. 2f. It is not for the faint-hearted, or the casual partisan looking for a simple point. See the orange line (UAH data, 15-year averages) and the black line (CMIP5 models, 15-year averages). See year 2008: CMIP5 trend (slope) ~0.25 deg/decade; UAH slope ~0.1 deg/decade. Thus Fyfe et al., with a highly credentialled team in a top peer-reviewed journal, have ended up with the same trend difference as did Christy [see his written testimony fig. 3], although they don’t enter into explicit discussion of the point.

As a further thought, look at Fyfe fig. 2a, black line, HADCRUT 15-year averages. The trend in GMST at 2008 is ~0.08 deg/decade. Compare fig. 2d: the equivalent trend in the CMIP5 models is ~0.23 deg/decade. My reading of Fyfe’s plots thus shows the same discrepancy in global temperature warming trends over the past 15 years for surface HADCRUT data and UAH data. And that discrepancy is essentially the same as the discrepancy shown in Christy’s plot and in his quantitative trend analysis (although as a matter of detail I note that the time spans used in Fyfe’s averaging windows and Christy’s averaging are different).

The detail of such comparisons cannot easily be elucidated in a set of slides for a non-specialist audience, but they certainly should be in backup slides for when the chorus of political criticisms emerges. John Fyfe may or may not suffer over the next few months the sort of criticisms that have been levelled at Christy. However, my hope is that both Christy and Fyfe will be accorded the respect due to scientists who objectively analyse data – I wish them both well.

    Michael Asten, Monash University, Melbourne Australia

    • Michael:
      Thanks for the link. I found this editorial comment from Nature while looking for the article you reference but did not link to, along with a highly disingenuous input (limited quotation marks) from Gavin Schmidt which will take me until next Tuesday to fully parse:
      “Gavin Schmidt, director of NASA’s Goddard Institute for Space Studies in New York, is tired of the entire discussion, which he says comes down to definitions and academic bickering. There is no evidence for a change in the long-term warming trend, he says, and there are always a host of reasons why a short-term trend might diverge — and why the climate models might not capture that divergence.

      “A little bit of turf-protecting and self-promotion I think is the most parsimonious explanation,” Schmidt says. “Not that there’s anything wrong with that.” ”

      http://www.nature.com/news/global-warming-hiatus-debate-flares-up-again-1.19414

    • Fyfe would be one of the last to defend Christy’s plot. Fyfe and pause buster Karl have no real disagreements.

• I have no idea what Fyfe’s views on Christy might be, but as I note in my entry above, Fyfe’s computed trends (his fig. 2f) for UAH data and CMIP5 models are surprisingly close to those computed by Christy. It therefore seems possible that Fyfe might defend Christy’s results if asked.

      • Of Karl and Sherwood, Fyfe et al had this to say:

        Recent research that has identified and corrected the errors and inhomogeneities in the surface air temperature record is of high scientific value. Investigations have also identified non-climatic artefacts in tropospheric temperatures inferred from radiosondes and satellites, and important errors in ocean heat uptake estimates. Newly identified observational errors do not, however, negate the existence of a real reduction in the surface warming rate in the early twenty-first century relative to the 1970s–1990s. …

        So, he thinks Karl’s work is of high scientific value, and he thinks UAH/RSS, until repaired, are BS.

53. I would stay off Twitter as a serious professional. Schmidt was trying to drag you into a fracas on Twitter. It’s an out-of-context forum and nothing but a revolving door.

54. Hmm! I wonder how many times Gav has publicized graphs without error bars or shading around them. Now that one comes along that makes the models look really bad, we have the climate science jihadis all of a sudden starting to talk about how it is important to always note the sect of the jihadi.

55. I know this is a technical thread, but I’ve (I’m a retired lawyer) read through the comments and find them helpful, particularly in making it obvious that this is a vastly complex subject, and that makes it so subject to manipulation. It presents an almost hopeless problem to the layman, who almost has to pick sides based on how the various advocates act. This is a huge oversimplification, but I chose Curry over Schmidt on these criteria. And she has not been a paid advocate, while Gavin certainly is and has been.

But will someone give me a plain-English explanation of the histogram that Gavin was so enamored with? It is gibberish on its face. If it were presented to me without explanation I would throw up my hands.

    • I agree. I’m the son of a Fed Judge and find JC’s explanation more than reasonable and thorough. Please continue to comment. I’d love to read them.

      • Chip: Lucia said: “In a t-test, the uncertainty for the underlying mean of the observation would be that from “the weather” and the “measurement error”. In Santer, that spread estimated using “AR1” on the time series. In this analysis, the spread is obtained using the models spread (plus I’ve added measurement error). These are different methods to estimate that uncertainty– but you never double count. You can use one method or the other. (We can argue over which to use and so on.)”

        I’m arguing for a t-test of the difference in means of trends in observations and model projections.

Lucia’s discussion of “measurement error” is confusing to me. Some systematic errors in measurement disappear when the trend is calculated (or anomalies are compared). Otherwise, statistical analysis of the mean and spread in two populations can never account for systematic errors in measurement. If something in the extensive UAH or RSS processing of MSU data introduces a changing systematic error in temperature, IMO statistics can’t take that into account.
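A minimal sketch of the kind of test franktoo is arguing for, with illustrative numbers only (not the actual CMIP5 or satellite trend values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative trend populations in K/decade, NOT real data: ~102
# model-run trends centered near 0.2, and a few observational
# dataset trends centered near 0.1.
model_trends = rng.normal(loc=0.20, scale=0.05, size=102)
obs_trends = np.array([0.09, 0.11, 0.12, 0.10])

# Welch's t-test: the null hypothesis is that the two means are equal.
t, p = stats.ttest_ind(model_trends, obs_trends, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```

With only a handful of quasi-independent observational datasets this is crude, but it makes the no-double-counting point concrete: the weather spread enters once, through the populations of trends, not a second time through per-series error bars.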

    • Sure!

The height of each bar is the number of model runs (out of 102 total) that have a trend (from 1979–2015) in the global average temperature of the lower atmosphere (the mid-troposphere according to John Christy’s definition) falling within the bins indicated by the x-axis.

Basically it charts the frequency of the trends expected to have occurred by the collection of climate models/climate model runs.

Similar to a frequency diagram of the height of all the men in your office.

Gavin’s histogram shows that most model runs produce a trend of around 0.02 C/yr for the period 1979–2015, with the frequency dropping off for values farther from that value.

This distribution largely reflects the effects of random weather (the models are not expected to get the timing right of random events like El Niño, PDO, AMO, etc.). It also reflects differences between the various models in their response to historical climate forcing, as well as some differences in the forcing evolution itself across models.

Gavin also included several datasets (the colored dots and lines) which were developed to reflect what really happened in the mid-troposphere during the same 1979–2015 period. These observed datasets don’t agree with each other precisely, but they do all fall in the lower reaches of the model distribution, indicating that, generally, the climate models expected more warming to occur during that period than was actually observed.

Note: It is my strong opinion that Gavin should not have included the “whiskers” (the horizontal lines reflecting the uncertainties) on the observed values in his chart. His whiskers represent the uncertainty due to weather, an uncertainty that already acts to spread the model histogram. By including them on the observations, he basically double-counts the influence of weather, making it seem like there is more overlap between the observations and the model distribution than there actually is. This is misleading. For more details, you can see this Twitter conversation, if you can make heads or tails of it (https://twitter.com/ClimateOfGavin/status/716722521345769472).

      I hope this helps,

      -Chip
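For anyone who wants to see how such a chart is put together, here is a minimal sketch with made-up stand-in numbers (not Gavin’s actual trends):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Stand-ins: 102 model-run trends (C/decade) for 1979-2015, plus a few
# observational trends drawn as vertical lines. All values illustrative.
model_trends = rng.normal(loc=0.20, scale=0.06, size=102)
obs = {"UAH": 0.10, "RSS": 0.12, "RATPAC": 0.14}

fig, ax = plt.subplots()
ax.hist(model_trends, bins=15, color="0.7", edgecolor="k")
for (name, trend), c in zip(obs.items(), ["C0", "C1", "C2"]):
    ax.axvline(trend, color=c, label=name)
ax.set_xlabel("1979-2015 trend (C/decade)")
ax.set_ylabel("number of model runs")
ax.legend()
plt.show()
```

Reading it is then exactly as Chip describes: the bars are the model distribution, the vertical lines are the observations, and the question is where the lines fall within the bars.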

      • Implicit in this is the assumption that the most frequently forecast result among model runs is the most likely.

        Is that a dubious assumption?

      • ” His whiskers represent the uncertainty due to weather, and uncertainty that already acts to spread the model histogram. ”

You forget the large structural uncertainty in the observations.
People keep pretending that observations are not the results of models.
In short, for both RSS and UAH there are many ways to model the temperature they “observe”. The structural uncertainty swamps the weather uncertainty.

When you compare GCM output with “observations” you are really comparing the outputs of two models. So you have to consider the structural uncertainty.

      • Chip: Doesn’t the confidence interval for observed warming also include some of the error in our ability to determine the observed warming trend?

        We know the precise observed trend between September 13, 1979 and April 5, 2015 (for example), but unforced variability means that we get very different trends for different periods. The confidence interval in the observed warming trend reflects this problem.

I think the fundamental question here is: what is the probability that the difference between the observed and calculated mean warming trends could be zero, given the scatter in the data?

The classical way to answer the question “Do men and women have different heights?” is through the null hypothesis: what is the probability the difference could be zero, given the scatter in the data? We don’t compare the mean +/- a confidence interval for the men to the mean for the women.

      • Steve,

        I completely agree that error bars representing structural uncertainty are appropriate to be included with the observations when comparing them to the model trends. But including weather noise (which is what is reflected in Gavin’s whiskers, i.e., the OLS CIs) is not.

        -Chip

      • franktoo,

That unforced variability is assumed to occur in the climate models as well, so the histogram of their output is shaped by that variability. So you don’t need to include it when showing the observed trend. See the comment thread here (http://rankexploits.com/musings/2013/ar5-trend-comparison/) for a more thorough discussion.

        -Chip

      • Thanks very much. Most helpful. By whiskers I assume you mean the horizontal lines that, when combined with the y-axis vertical lines, form a box. I’m not sure I understand your objection to it, but I’m happy now because I think I get the overall message of the histogram.

Chip: Thanks for the link to the discussion at Lucia’s, which I admittedly haven’t read thoroughly. However, even Lucia says: “The small colored horizontal hashes indicate a conservative estimate of the additional standard deviation we would expect if measurement noise had been added to the time series for surface temperature in the model run.” If I am reading the graph correctly, that amounts to almost 0.05 K/decade, which is about the width of the bars on Gavin’s plot. (Precisely what Lucia means by measurement noise isn’t clear to me right now.)

        There is a pdf (with some width) describing our knowledge of observed warming trends. There is a second pdf describing the predictions of AOGCM’s. Given those pdf’s, what is the probability that the difference between observations and projections could be zero or greater and what is the best estimate for that difference? IMO, this is the relevant question. (I’ll study the post at Lucia’s further.)

  56. I second scraft1’s motion. An explanation accessible to a layman would be appreciated. The same goes for Steve McIntyre’s diagram.

57. Histograms vs. time series plots, ‘uncertainty bands’*, boxplots, on and on. And so it goes around in the discussion here. I always found that organizing a talk around what I wanted to convey drove the figures used. For sure, there are some figures that are compelling and scream, ‘this is what should be said at this point,’ but if there is indecision on a graphic, i.e., it is not inherently compelling, then I always have to back off and re-ask: what do I want to convey? Am I straying in the draft?
———————-
* Damn how I dislike that term in this context. I want to know exactly what interval someone is using as the basis for that kind of interval… bounding a mean, bounding a slope, bounding future observations. If you don’t tell me that much, you haven’t said anything useful.

  58. What I find funny.
    Nobody who does these charts posts the data or code used to create them.

• Sorry about the 5 million number, Steven; I trusted someone else’s work.

    • Suddenly Skeptic™

      Andrew

• been asking for code and data since day one in 2007.

  Don’t forget who coined the lukewarmer motto.

  Open the data, free the code.

  That said, only skeptics like Scafetta and Monckton and Evans have refused

      • “been asking for code and data”

        You do sometimes. Sometimes you don’t.

        Hence my comment.

        Andrew

      • “You do sometimes. Sometimes you don’t.”

        My signature to every comment used to include it.

        I doubt you will find a single person who can claim that I am not interested in all data and all code.

Free the code, open the data

      • “used to”

        Andrew

      • Yes Andrew.

After a while, when you get recognized for asking for code and data, repeating it becomes unnecessary.

      • “when you get recognized for asking for code and data, repeating it becomes un necessary”

        Oh I get it. Just like when you go to a restaurant people just know you want the thing you used to want by ESP. That’s why they always ask.

        Andrew

      • Mosh: Open the data, free the code.

        followed by

        Bad Andrew: You do sometimes. Sometimes you don’t.

Andrew, bad boy ;o), you’ve been around long enough to associate that mantra with Mosh… and he is spot on, and has been for years. Your wordgame quip is just that. Work done on contentious topics has to be open. This is one place where some academics and some non-academics with policy agendas still seem to have a bone in their heads (1,2). The funny thing is there are more benefits than drawbacks to being open and accessible.
————————–
1) Once they howled for sure over at aTTP’s when I broached the topic [sort-of, with QA], though I have little doubt that if the same topic were approached here it would hardly look different.
2) Despite the insistence on the part of many, peer-reviewed journal publications are not up to the standards required in the regulatory arena for much less important risks. [Yeah, this is where I am told about cutting-edge expertise and/or the intrusion of government :O)]. This of course is due to different goals, space limitations, etc.

      • “you’ve been around long enough to associate that mantra with Mosh”

        mwgrant,

        I have no problem associating that mantra with him. He has said it many times. But he hasn’t said it universally for all charts. In this case, he has whipped it out for a chart he doesn’t like. Bravo, I guess.

        Andrew

    • As far as the data goes… Steven, how was it all transferred, copied, collated, indexed and manifested? I have never heard the complete story.

Go to our SVN.
Get the code.
It shows you how.
Even better, run the code and check for yourself.
Code trumps whatever words I can write, because it SHOWS exactly
what is done.
Go look.
You won’t.

      • Mosh said;

        ‘I doubt you will find a single person who can claim that I am not interested in all data and all code.’

        Yes, Mosh has consistently for years asked for data and code and supplied it in his own work.

        tonyb

      • “Mosh has consistently for years asked for data and code”

        …except for the times he hasn’t asked for it.

        Andrew

    • I meant the analog human transference at the genesis point for AGW. Not the computer run part.

• You know, how many people were employed to put it all together? How many years did it take to build a uniform database? Stuff like that. The beginning of the AGW program.

• I believe that so far we have spent more on AGW than we ever did on building The Bomb. We can even tell you what Oppy was doing on any given Friday night back in the forties. There must be a link for “AGW: The Beginning”. Film and video taken in the control room, kinda stuff.

      • I think you mean the B-29.

    • Steve McIntyre

      Mosher says:
      “What I find funny. Nobody who does these charts posts the data or code used to create them.”

      C’mon. As you well know, right from my first days in this field, I’ve regularly posted data and code, especially for controversial posts and have always provided data and code when requested. Prior to me doing so, I’m unaware of any scientist in the field who did so. I also developed a style of script which went and got the data for third parties, making it particularly easy to see what I’d done. I don’t know why you would not mention this in your generalized complaint.

      In the case of the present dispute, I agree that it’s a waste of time for the originating academics – both Schmidt and Christy – not to post executing code for their figures. However, I’ve been able to substantially replicate Christy’s diagram fairly easily, while I haven’t been able to replicate Schmidt’s diagram thus far.
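In the spirit of the “go and get the data” scripts Steve describes, here is a minimal sketch of such a script; the UAH v6.0 file location and column layout are assumptions (they have moved before and may move again):

```python
import urllib.request
import numpy as np

# Assumed location/format of the UAH v6.0 TLT monthly file (may have changed).
URL = "https://www.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt"

raw = urllib.request.urlopen(URL).read().decode("ascii", "replace")

# Keep only data rows ('Year Mo Globe ...'); the header and the trailer
# summary lines at the bottom of the file are skipped.
rows = []
for p in (line.split() for line in raw.splitlines()):
    if len(p) > 3 and p[0].isdigit() and p[1].isdigit() and int(p[0]) >= 1978:
        rows.append((int(p[0]) + (int(p[1]) - 0.5) / 12.0, float(p[2])))

t, y = np.array(rows).T
mask = (t >= 1979) & (t < 2016)
slope = np.polyfit(t[mask], y[mask], 1)[0]          # K/year
print(f"UAH TLT global trend 1979-2015: {slope * 10:+.3f} K/decade")
```

The point of the style is that anyone can run the script and see exactly which file, which column, and which period produced the number.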

      • Steve,

        “I’ve been able to substantially replicate Christy’s diagram fairly easily, while I haven’t been able to replicate Schmidt’s diagram thus far.”

        I did here: http://fs5.directupload.net/images/160408/yssxzjn6.png

        Differences are marginal

      • Steve McIntyre

        there are several Schmidt diagrams – I should have been more clear. I haven’t been able to replicate the grey envelope of runs. I get a narrower envelope.

        Also, RSS does not have a version 4.0 online for their lower troposphere series (TLT) which is the comparandum for the runs in the Christy diagram. There is a version 4.0 for TMT but that’s for a different level. I don’t know what the effect is.

        Gavin’s use of envelopes and histograms also disguises the distributions and the improbabilities. It’s very cheeky of him to make accusations about Judy and others, when his own diagrams are so subject to the faults that he accuses others of.

      • Steve,

        You wrote:
Ok, that is another point; don’t look at that just for the mean. Hmm, his figure said he is looking at TMT, not TLT. Since TLT is calculated from TMT, it makes even less sense to me, because it would cause stronger warming in it; you see this in the RSS TTT layer, which is calculated from TMT and the lower stratosphere, where the warming is stronger than in TMT alone.

• Geewhillikers, a lot of people around here take things too literally. Perhaps one should only engage in ‘hedge-speak’ :O).

    • Hedgehog speak (in the sense of Isaiah Berlin) mwg? I think all the foxes who read this will agree!

      • Hey, Peter…actually thinking nary an uncaveated word and lots of exits, but I think you are on to something. There’s Gould in them thar hills. ;o)

      • +10. Step functioning seems more natural than linearity in non-equilibrium systems such as weather.

59. Judith: FWIW, I think reporting trends is far more accurate than Dr. Christy’s spaghetti graph. If you think about temperature data as a series of straight lines on a graph, we are interested in the slope, but not the y-intercept. There is no unambiguous way to align the output of a chaotic process through a particular starting date or range of starting dates.

Discussing whether one realization of a chaotic system (our climate observed for the last 35 years) is inconsistent with N realizations of another chaotic system (the output of a single climate model) is difficult enough, but here you have N realizations of M climate models, and M*N is about 100. I think Gavin’s figure explains this best.

The IPCC tells us to pay more attention to the multi-model mean than to any individual model. Should the mean and a confidence interval for that mean appear as a point and horizontal line on this graph? It would be perfectly acceptable to replace the histogram with such a line. Without the histogram, it is traditional to use vertical lines, which convey a sense of warming/change more clearly.

The MOST IMPORTANT question is: what is the probability that the difference between the model data and observed data could be ZERO given the scatter in both (the null hypothesis)? What does the pdf for the difference look like? The central estimate for this difference is about 1 K/century. Now you have the ability to claim that the models are LIKELY, VERY LIKELY, or EXTREMELY LIKELY (in “IPCC-speak”) wrong. Then say that these are the same models that say it is VERY LIKELY that man is responsible for most 20th-century warming and that future warming will be X-Y under RCP 8.5. (X-Y is the “very likely” range of model output, but downgraded to “likely” using expert judgment.)

Analyzing the data this way assumes that the unforced variability of our climate is the same as in the models. If this assumption is wrong, the models may be wrong, but for a different reason.
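franktoo’s “MOST IMPORTANT question” can be made concrete with two assumed normal pdfs; the numbers below are placeholders chosen to match his ~1 K/century central difference, not actual trend estimates:

```python
from scipy.stats import norm

# Placeholder pdfs for the 1979-2015 trend in K/decade, NOT real estimates:
mu_obs, sd_obs = 0.11, 0.03    # observed trend and its uncertainty
mu_mod, sd_mod = 0.21, 0.05    # model-mean trend and model spread

# If the two pdfs are independent normals, their difference is normal too.
mu_d = mu_mod - mu_obs
sd_d = (sd_obs**2 + sd_mod**2) ** 0.5

# Probability that the true difference is zero or less.
p = norm.cdf(0.0, loc=mu_d, scale=sd_d)
print(f"central difference: {mu_d:.2f} K/decade, P(diff <= 0) = {p:.3f}")
```

That probability is what would let you attach a LIKELY / VERY LIKELY / EXTREMELY LIKELY label to the statement that the models run warm.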

60. Within the overall theme of this article, there is a new study in Nature, “Northern Hemisphere hydroclimate variability over the past twelve centuries” (http://www.nature.com/nature/journal/v532/n7597/full/nature17418.html). Science Daily characterizes the study as “1200 years of water balance data challenge climate models”.

  61. For what it’s worth, I want to thank Dr. Curry and the others on this blog for creating an atmosphere where people can civilly disagree and where a layman can ask a question and have it cheerfully answered. The tone of some blogs has gotten so toxic that some sponsors have given up and closed commenting – though I suppose they’ll relent at some point.

    Cheers and thanks for the help.

62. I see very little use for these data sets and even less use for the models. You see the same arguments in economics. And what if you could accurately determine whether the climate was warming or cooling? No one knows what is actually causing it. How people can claim it is CO2 without any empirical evidence or even experimentation to prove it is beyond me. Why was it obviously warmer in Medieval times than now? Was it due to CO2 levels? If not, it seems to me climate science needs to go back to square one. Until someone can find the answer to the warmer and cooler periods of Earth’s climate history, it is just politics. No need to “bleed” the patient just yet.

    • This guy thinks:

      The reason that the data sets agree is due to collusion, not independent research as they claim. It is the biggest scientific fraud in history.

      https://stevengoddard.wordpress.com/2016/04/08/global-temperature-record-is-a-smoking-gun-of-collusion-and-fraud/

      • in response to novasportpilot:
        “I see very little use for these data sets and even less use for the models. You see the same arguments in Economics. What if you could accurately determine if the climate was warming or cooling. No one knows what is actually causing it. How people can claim it is CO2 without any empirical evidence […]”
        This guy thinks:
        The reason that the data sets agree is due to collusion, not independent research as they claim. It is the biggest scientific fraud in history.

I don’t think there is any sign that night-time cooling rates have been affected by the change in CO2, and the big changes in temperature have been regional. The step from the ’97 El Niño was a regional change (NH).

  63. 102 model runs provide 102 differing outputs.

    At least 101 are therefore incorrect.

    Averaging 101 incorrect answers may not provide one correct answer.

    If one run is correct, why bother with the others? Is it possible that each and every model is inherently defective? Is trying to predict the outcome of extremely complex interacting non linear dynamical systems – (probably chaotic) – doomed to failure?

    Have any of the models actually provided anything of use?

    Cheers.

  64. The 98% of the atmosphere that is not composed of greenhouse gases is also heated by conduction, convection etc. Does this portion of the atmosphere also emit infrared? If so, does this infrared also back radiate to the Earth’s surface?

65. NOAA’s March 2016 PDO is 1.57. While not yet at its 2014-2015 highs (1.92), it is up strongly from November 2015’s 0.13.

    If there is such a thing as a PDO cycle, then a persistently positive PDO could put the chill on the vigor of the next La Nina, and signal we’re in for some EL Nino dominance.

66. This is why climate science has such a black eye. Gavin’s chart shows the average predicted increase being 0.5 degrees, less than half of the 1.2 Christy shows. How is it even possible to evaluate a claim that we can’t agree on?

This is one reason why I keep saying the models and observations need to use absolute temperatures; otherwise we spend half our time arguing over stupid things like baselines. Anyone should be able to go look at the global temperature (which is what actually matters, physically) and see whether, e.g., Hansen 1988’s predicted temperature has arrived on schedule.

In any case, Gavin’s charts seem absurd on their face. The error bars suggest a scenario with no warming at all can barely be excluded. Is he trying to disprove global warming?
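The baseline complaint is easy to demonstrate with synthetic stand-ins: rebaselining shifts the vertical offset between two anomaly series without changing either trend.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1979, 2016)

# Synthetic stand-ins, K with an arbitrary zero: a 'model' series warming
# faster than an 'obs' series, plus weather noise. Values are illustrative.
model = 0.025 * (years - 1979) + rng.normal(0, 0.05, years.size)
obs = 0.012 * (years - 1979) + rng.normal(0, 0.05, years.size)

def anomalies(series, y0, y1):
    """Anomalies relative to the series mean over the years [y0, y1]."""
    return series - series[(years >= y0) & (years <= y1)].mean()

# A short early baseline versus a long modern one.
for y0, y1 in [(1979, 1983), (1981, 2010)]:
    gap = anomalies(model, y0, y1)[-1] - anomalies(obs, y0, y1)[-1]
    print(f"baseline {y0}-{y1}: final model-minus-obs gap = {gap:+.2f} K")

# The trend difference is the same no matter the baseline.
d = np.polyfit(years, model, 1)[0] - np.polyfit(years, obs, 1)[0]
print(f"trend difference: {d:.4f} K/yr (baseline-independent)")
```

The visual gap depends on the baseline; the trend difference does not, which is why several commenters keep insisting the trends are the real point.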

  67. Pingback: Gavin Schmidt and Reference Period “Trickery” « Climate Audit

  68. Pingback: Gavin Schmidt and Reference Period “Trickery” - Principia Scientific International

  69. Pingback: More Climate 'Science.' Moving the Goalposts - Principia Scientific International

  70. Pingback: CCG

  71. Pingback: Weekly Climate and Energy News Roundup #225 | Watts Up With That?