Climate models versus climate reality

by Pat Michaels and Chip Knappenberger

Perhaps the most frank example of the growing disconnect between forecast and observed climate change was presented by the University of Alabama's John Christy to the U.S. Senate Subcommittee on Space, Science, and Competitiveness on December 8.

[Figure 1. John Christy's comparison of climate-model-projected and observed (weather balloon and satellite) global mid-tropospheric temperatures, five-year running means.]

It isn’t the usual comparison between global average surface temperature and the current family of general circulation climate models. Instead, it’s the forecast and observed temperatures for the middle troposphere.

The troposphere is the earth's active weather zone, extending from the surface to around 40,000 feet. It's deeper where the atmosphere is warm, as in the tropics, and shallower at higher latitudes. All significant storms, from massive winter cyclones to gully-washing summer thunderstorms, occur in the troposphere.

All of the data in this plot are smoothed with five-year running means, which filter out year-to-year variability and emphasize more systematic, long-term behavior.
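
For readers who want to reproduce the smoothing, a five-year running mean is just an unweighted average over a sliding five-year window, recomputed as the window slides forward one year at a time. A minimal sketch in Python (the anomaly values below are invented, purely for illustration):

```python
import numpy as np

def running_mean(x, window=5):
    """Centered running mean; NaN where the window is incomplete."""
    x = np.asarray(x, dtype=float)
    out = np.full(x.shape, np.nan)
    half = window // 2
    for i in range(half, len(x) - half):
        out[i] = x[i - half:i + half + 1].mean()
    return out

# Invented annual anomalies (deg C), not real data.
anoms = [0.10, 0.05, 0.12, 0.20, 0.15, 0.22, 0.18, 0.25, 0.21, 0.28]
print(running_mean(anoms))  # year-to-year wiggles damped, slow signal retained
```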

Twice a day, weather balloons are launched simultaneously around the planet in order to get a snapshot of the physical properties of today's atmosphere. The temperature, humidity, barometric pressure and wind data provide the basis for the next iteration of global weather forecasting models. The instrumentation is largely standardized and calibrated for accuracy.

There are four different analyses of these datasets, and the blue dots are their running mean average.

The temperature of the mid-troposphere can also be sensed from above, by orbiting satellites that measure microwave emissions from diatomic oxygen, which turn out to make a much more accurate thermometer than, say, a standard mercury-in-glass instrument. There are two global analyses of these data, one by Christy's crew and another from Remote Sensing Systems, a California consultancy. The green squares are the average of these two datasets.

Note that the satellite and balloon-sensed temperatures are independent measurements.

The red line is the five-year running mean of the 102 computer models that can generate temperatures in this layer, found in the latest (2013) scientific assessment of the UN’s Intergovernmental Panel on Climate Change.

All of the data have been scaled the same in the vertical dimension, with a maximum weighting around 12,000 feet above the surface. The sensing technique in the satellite picks off a bit of data above the troposphere, in the placid stratosphere, and the balloon and computer model data were scaled in the same fashion. So this is a true apples-to-apples-to-apples test.

What’s the advantage of looking at these temperatures versus those at the surface?

Rain and snow are largely dependent upon the temperature difference between the surface and the mid-troposphere. When there’s little difference, air in the lower atmosphere does not rise, meaning that the vertical motion required to form a cloud is absent. When the difference is large, moisture-laden surface air is very buoyant and can result in intense rain events.

Getting the difference systematically wrong in a climate model means getting the rainfall wrong, which pretty much invalidates regional temperature forecasts. A dry surface (think: desert) warms (and cools) much more rapidly than a wet one. If the computer models are somehow getting surface temperatures right, that can only be a fortuitous result given that the mid-tropospheric temperatures are as far off as Christy's data show.

Indeed, the models have this temperature differential dead wrong. Over the period of study, they say it should remain essentially unchanged. In fact, it is growing in the real world, at nine times the rate predicted by the models over this period.
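
For concreteness, the quantity at issue is just the trend of the surface series minus the mid-tropospheric series: if the models were right, that difference would have essentially no trend. A toy calculation (both series below are synthetic stand-ins, not the data behind Christy's figure):

```python
import numpy as np

years = np.arange(1979, 2016)
# Synthetic anomaly series (deg C), for illustration only.
surface  = 0.015 * (years - 1979) + 0.05 * np.sin(years / 3.0)
mid_trop = 0.005 * (years - 1979) + 0.05 * np.sin(years / 3.0)

diff = surface - mid_trop                     # surface minus mid-troposphere
slope = np.polyfit(years, diff, 1)[0] * 10.0  # trend in deg C per decade
print(f"Trend of the surface-minus-mid-troposphere difference: {slope:+.3f} C/decade")
```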

Which brings us to those surface temperatures.

They’re a bit slipperier than the mid-tropospheric ones. The laboratories responsible for the three principal histories keep changing history, much more frequently than the satellite or balloon records are reconfigured.

At Cato's Center for the Study of Science, our investigations have led us to the hypothesis that the anthropogenic influence on the earth's climate—specifically through emissions of greenhouse gases—is near the low end of the "mainstream" (e.g., IPCC) assessed range of influence, and further, that models developed to simulate the behavior of the earth's climate have generally overestimated the influence of anthropogenic greenhouse gas emissions. Our new book, Lukewarming: The New Science That Changes Everything, details the latest scientific findings supporting a complex, yet modest, human impact on the earth's climate.

At last December's Fall Meeting of the American Geophysical Union (AGU), we summarized our thinking on the issue in a presentation titled "Quantifying the Lack of Consistency between Climate Model Projections and Observations of the Evolution of the Earth's Average Surface Temperature since the Mid-20th Century." It reflected the state (at that time) of our continual updates to work originally presented to Congress in 2009, expanded upon at the Third Santa Fe Conference on Global and Regional Climate Change in 2011, written up into a paper, presented at the AGU's Science Policy Conference in 2013, and regularly updated in comments on national and international climate change assessments and proposed federal regulations designed to restrict greenhouse gas emissions.

The work is a straightforward demonstration that climate models project a greater rise in the global average temperature than has been experienced, one that avoids the pitfalls of other types of comparisons and is immune from claims of cherry-picking, as it includes all time periods since 1950 ending in the present. Accompanying this demonstration of model infidelity, we present a case that a major part of the reason climate models run too hot is that the earth's equilibrium climate sensitivity is substantially less than the climate models portray. We will revisit this at the end of this article.

Everyone by now is familiar with the "pause" or "slowdown" in the rate of global warming over the past 20 years or so, but few realize that the observed warming rate has been beneath the model mean expectation for periods extending back to the mid-20th century—60+ years.

We demonstrate this fact by comparing the observed warming rate to the range of climate model-predicted warming rates for all periods beginning in 1951 and ending with the most recent available data. In our AGU presentation, we included the observations of the global average surface temperature compiled by the UK's Hadley Centre, the compilation long preferred by the IPCC.
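
Mechanically, the comparison is nothing more than a set of ordinary least-squares trends, each ending in the most recent year but starting in successive years from 1951 onward. A minimal sketch of the observational side (the synthetic series below merely stands in for the annual global anomalies, which would normally be read from the Hadley Centre file):

```python
import numpy as np

def trailing_trends(years, anoms, first_start=1951, min_len=10):
    """OLS trend (deg C per decade) for every period [start, last year]."""
    last = int(years[-1])
    trends = {}
    for start in range(first_start, last - min_len + 2):
        mask = years >= start
        trends[start] = np.polyfit(years[mask], anoms[mask], 1)[0] * 10.0
    return trends

# Synthetic stand-in for observed annual global surface anomalies.
years = np.arange(1951, 2016)
anoms = 0.010 * (years - 1951) + 0.10 * np.random.randn(years.size)

for start, tr in list(sorted(trailing_trends(years, anoms).items()))[:5]:
    print(f"{start}-2015: {tr:+.3f} C/decade")
```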

And while the Hadley Centre's surface temperature compilation is not the only one, its recent behavior is more consistent with the low rates of warming being revealed in the mid-tropospheric compilations, in which a substantial amount of the overall data in fact lies below approximately 12,000 feet. Here, we add the other two major compilations, from NASA and the Department of Commerce's National Oceanic and Atmospheric Administration. We've also included two less prominent surface temperature compilations, from Cowtan and Way (2013) and Berkeley Earth (2013), inclusions which do little more than demonstrate their overall similarity (Figure 2). We have also updated our AGU presentation with our best guess for 2015 average temperatures. Thanks, in part, to a strong El Niño, 2015 is going to be the warmest year in any of the surface temperature compilations. You can see from Figure 1, however, that this warmth does very little to narrow the disparity between the predicted and observed mid-tropospheric temperatures.


Figure 2. The annual average global surface temperatures from 108 individual CMIP5 climate model runs forced with historical (+ RCP4.5 since 2006) forcings were obtained from the KNMI Climate Explorer website. Linear trends were computed through the global temperatures from each run, ending in 2015 and beginning each year from 1951 through 2006. The trends for each period (ranging in length from 10 to 65 years) were averaged across all model runs (black line). The range containing 95 percent of the trends from the 108 model runs is indicated by the dotted black lines. The observed linear trends for the same periods were calculated from the annual average global surface temperature records compiled by the several different agencies described in the legend (colored lines); the value for 2015 was estimated from the January through October average.
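
The model side of Figure 2 can be built the same way: compute the trend for every start year in every run, then take the across-run mean and the 2.5th and 97.5th percentiles for each period length. A rough sketch, assuming the 108 runs have already been collected into a runs-by-years array (the array here is synthetic):

```python
import numpy as np

years  = np.arange(1951, 2016)
n_runs = 108
# Synthetic stand-in for CMIP5 historical + RCP4.5 annual global means.
runs = 0.015 * (years - 1951) + 0.12 * np.random.randn(n_runs, years.size)

starts = np.arange(1951, 2007)
trends = np.empty((n_runs, starts.size))
for j, start in enumerate(starts):
    mask = years >= start
    for i in range(n_runs):
        trends[i, j] = np.polyfit(years[mask], runs[i, mask], 1)[0] * 10.0

mean_trend = trends.mean(axis=0)                     # solid black line
lo, hi = np.percentile(trends, [2.5, 97.5], axis=0)  # dotted 95 percent envelope
```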

During all periods from 10 years (2006-2015) to 65 years (1951-2015) in length, the observed temperature trend lies in the lower half of the collection of climate model simulations, and for several periods it lies very close to (or even below) the 2.5th percentile of all the model runs. Over shorter periods, such as the last two decades, a plethora of mechanisms have been put forth to explain the observed/modeled divergence, but none does so completely, and many of the explanations are inconsistent with one another.

One concern that has recently been raised—some nine months after our AGU presentation—by Cowtan et al. (2015) is that the vast majority of extant comparisons (for example, the IPCC Assessment Reports, or our own work) between climate model projections and observations of the earth's surface temperature are not precisely apples-to-apples, for two reasons: 1) observed temperature compilations include regions of missing data (i.e., incomplete geographic coverage), while climate models cover the entire surface, and 2) observed compilations combine air temperature measurements over the land with sea surface temperatures into a global average, while climate model compilations use air temperatures over both land and oceans. The combination of these factors is shown to lead to a slight warming bias in the models when compared with the observations. A more appropriate model dataset has been developed and made available for researchers to compare the models with the UK Hadley Centre data through 2014. We've used these data to see how this concern impacts our analysis. The results are shown in Figure 3. While this adjustment brings the observed trends closer to the multi-model mean, it remains clear that the observed trends lie near, and in some cases continue to fall beneath, the 2.5th percentile of the distribution of model projections.

(Because the 100+ model results are distributed very nearly normally, the 2.5th percentile is analogous to the 0.05 confidence limit for a two-tailed test (above or below the model average).)
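
Written out, with the 100+ trends treated as draws from an approximately normal distribution with mean μ and standard deviation σ, an observed trend below the 2.5th percentile is equivalent to rejection at the 0.05 level in a two-tailed sense:

\[
P\big(T < \mu - 1.96\,\sigma\big) \approx 0.025,
\qquad
P\big(\,|T - \mu| > 1.96\,\sigma\,\big) \approx 0.05 .
\]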


Figure 3. The annual average global surface temperatures, derived using a methodology similar to that used by the UK's Hadley Centre in compiling temperature observations, from 109 individual CMIP5 climate model runs forced with historical (+ RCP4.5 since 2006) radiative changes. These were obtained from the University of York website (http://www-users.york.ac.uk/~kdc3/papers/robust2015/index.html; see Cowtan et al., 2015 for more details). Linear trends were computed through the global temperatures from each run, ending in 2014 and beginning each year from 1951 through 2005. The trends for each period (ranging in length from 10 to 64 years) were averaged across all model runs (black line). The range containing 95 percent of the trends from the 109 model runs is indicated by the dotted black lines. The observed linear trends for the same periods were calculated from the annual average global surface temperature record compiled by the UK's Hadley Centre (red line).

If this is not strong enough evidence that the climate models predict too much warming, there is an additional comparison that can be made, one which is largely free from the sampling issues raised above—an examination of climate model behavior in the mid-troposphere. It is in this portion of the free atmosphere that the climate models project temperatures (globally) should warm most rapidly as the concentration of carbon dioxide grows. Performing the same analysis as we did in Figure 3, we compare the collection of climate model trends with the collection of trends observed from both satellites and weather balloons. The climate model output and the weather balloon observations have been weighted to simulate the observations from the satellites, so the comparison is directly apples-to-apples-to-apples, as in Figure 1. Figure 4 displays our results.
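
The weighting step amounts to collapsing each vertical temperature profile (model or radiosonde) with the same layer-weighting function that the satellite channel effectively applies, heaviest in the mid-troposphere with a small stratospheric tail. A schematic sketch only; the weights below are invented and are not the actual MSU/AMSU channel weighting function:

```python
import numpy as np

# Pressure levels (hPa) and temperatures (deg C) for one profile; values are invented.
levels = np.array([1000, 850, 700, 500, 300, 200, 100, 50])
temps  = np.array([15.0, 8.0, 1.0, -15.0, -40.0, -55.0, -60.0, -58.0])

# Illustrative weighting function: heaviest in the mid-troposphere, small tail aloft.
weights = np.array([0.05, 0.12, 0.20, 0.28, 0.20, 0.08, 0.05, 0.02])
weights = weights / weights.sum()

layer_avg = float(np.dot(weights, temps))  # one "mid-troposphere" value per profile
print(f"Weighted layer temperature: {layer_avg:.1f} C")
```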


Figure 4. The annual average global mid-tropospheric temperatures derived from 102 individual CMIP5 climate model runs forced with historical (+ RCP4.5 since 2006) forcings were obtained from John Christy (personal communication). Linear trends were computed through the global temperatures from each run, ending in 2015 and beginning each year from 1975 through 2006. The trends for each period (ranging in length from 10 to 41 years) were averaged across all model runs (black line). The range containing 95 percent of the trends (dotted black lines) and the minimum trend (dashed black line) from the 102 model runs are indicated. The observed linear trends for the same periods were calculated from the annual average global mid-tropospheric temperature records (including compilations derived from satellite observations as well as weather balloon observations) compiled by the several different agencies described in the legend (colored lines); the value for 2015 was estimated from the January through October average.

This is a devastating indictment of climate model performance. For periods longer than about 20 years, the observed trends from all data sources fall beneath the lower bound containing 95 percent of all model trends, and in the majority of cases they fall beneath even the absolute smallest trend found in any of the 102 climate model runs.

One other very encouraging result, using the satellite and balloon data, is that the observed trends are very flat, meaning that they are roughly constant, neither increasing nor decreasing with the length of record. Greenhouse physics actually predicts this, so what we are seeing may very well be the greenhouse-gas-generated response, not random noise. It is simply that the rate of warming is far beneath what has been forecast.

The amount of that overprediction comports well with a growing body of scientific findings that the sensitivity of the earth's surface temperature to rising atmospheric greenhouse gas levels—as determined directly from observations—lies toward (yet within) the low end of the mainstream (IPCC AR5) assessed likely range.

Since 2011, at least 14 studies published in the peer-reviewed scientific literature provide strong evidence that the equilibrium climate sensitivity (ECS)—how much the earth's average surface temperature will ultimately rise under a doubling of the atmospheric carbon dioxide concentration—lies near the low end of the IPCC estimates (Figure 5). This recent research includes investigations of the earth's thermal response to changes in climate forcings that have taken place over the past century, the past millennium, and glacial periods.
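
Most of the observational estimates in Figure 5 are, at bottom, energy-budget calculations: the observed surface warming is scaled by the ratio of the forcing from a CO2 doubling to the observed change in forcing net of ocean heat uptake. A simplified sketch with round, illustrative numbers (not the values from any particular study):

```python
# Energy-budget style sensitivity estimate (illustrative numbers only).
F_2x    = 3.7   # W/m^2, forcing from a doubling of atmospheric CO2
delta_T = 0.85  # K, observed change in global mean surface temperature
delta_F = 2.3   # W/m^2, change in total radiative forcing over the same period
delta_Q = 0.6   # W/m^2, change in planetary (mostly ocean) heat uptake

ecs = F_2x * delta_T / (delta_F - delta_Q)  # equilibrium climate sensitivity
tcr = F_2x * delta_T / delta_F              # transient response ignores heat uptake
print(f"ECS ~ {ecs:.2f} K, TCR ~ {tcr:.2f} K")
```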


Figure 5. Equilibrium climate sensitivity (ECS) estimates from new research beginning in 2011 (colored), compared with the assessed range given in the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) and the collection of climate models used in the IPCC AR5. The “likely” (greater than a 66% likelihood of occurrence) range in the IPCC Assessment is indicated by the gray bar. The arrows indicate the 5 to 95 percent confidence bounds for each estimate along with the best estimate (median of each probability density function; or the mean of multiple estimates; colored vertical line). Ring et al. (2012) present four estimates of the climate sensitivity and the red box encompasses those estimates. The right-hand side of the IPCC AR5 range is actually the 90% upper bound (the IPCC does not actually state the value for the upper 95 percent confidence bound of their estimate). Spencer and Braswell (2013) produce a single ECS value best-matched to ocean heat content observations and internal radiative forcing.

Several of these research findings were published subsequent to the 2013 release of the IPCC's Fifth Assessment Report (AR5) and thus were not included in that Assessment. Others were considered in the IPCC AR5, and still others were ignored. And while the IPCC AR5 did reflect some influence of these new low ECS estimates—by expanding its "likely" range of ECS estimates downward to include 1.5°C (the low end was 2.0°C in the 2007 IPCC Fourth Assessment Report) and omitting a "best estimate" value (which had previously been given as 3.0°C in the 2007 report)—it still doggedly held on to its high-end "likely" estimate of 4.5°C. This was a disservice to the latest science, but it was a necessary step to preserve the IPCC's reliance on climate projections made by models with an ECS averaging 3.2°C and ranging from 2.1°C to 4.7°C—the same models recently evaluated by Christy and in our AGU presentation. Had the IPCC fully embraced an ECS near 2.0°C—the value that the recent literature suggests—it would have had to throw out much of the rest of the report. We explained the IPCC's conundrum in this post on Cato's blog. A more detailed and extremely compelling report on how the IPCC should have handled the new ECS findings was put together by the Global Warming Policy Foundation. Any serious examination of the extant ECS literature would be remiss not to carefully consider the content of the GWPF report (which convincingly argues for an ECS of 1.75°C or even a bit lower).

One may argue that ECS estimates based upon one or two centuries of observations may not fully capture very long-term climate responses, and that such ECS estimates are therefore likely too low. While the magnitude (or even the existence) of the underestimate is difficult to assess, what is certain is that whatever the influence may be, it is only fully manifest on timescales far beyond even multiple human generations. In other words, when attempting to assess the coming climate changes over the next century or so, observationally based ECS estimates—estimates derived directly from the extant temperature histories of both the surface and oceanic heat content—are very appropriate. This is even more so for estimates of the "transient" climate sensitivity—the temperature rise at the time of a doubling of the atmospheric CO2 concentration, as that doubling is likely to occur sometime in the second half of this century, before the ECS is realized. Again, the recent estimates from the real-world behavior of the atmosphere and ocean are far beneath climate model expectations; see the GWPF report for a recent round-up.

That the actual ECS (at least as assessed over century time scales) is likely much lower than the average value of the climate models incorporated in the IPCC's AR5 is an efficient explanation for why climate models tend to overpredict the amount of global warming that has taken place—a finding with huge significance in assessing the utility of climate model projections of future climate change.

Based upon these and other lines of evidence (laid out in our numerous scientific publications, books, blog articles, and social media posts; see the publications listed here and here, for example), we conclude that future global warming will occur at a pace substantially lower than that upon which U.S. federal and international actions to restrict greenhouse gas emissions are founded.

It is high time to rethink those efforts.

References:

Cowtan, K., et al., 2015. Robust comparison of climate models with observations using blended land air and ocean sea surface temperatures. Geophysical Research Letters, 42, 6526-6534, doi:10.1002/2015GL064888.

Michaels, P.J., and P.C. Knappenberger, 2014. Quantifying the Lack of Consistency between Climate Model Projections and Observations of the Evolution of the Earth’s Average Surface Temperature since the Mid-20th Century. American Geophysical Union Fall Meeting, San Francisco, CA, Dec. 15-19, Paper A41A-3008.

Note:  As with all guest posts, please keep your comments relevant and civil.

244 responses to “Climate models versus climate reality”

  1. Pingback: Climate models versus climate reality | Enjeux énergies et environnement

  2. Re Christy’s graph:

    Which 4 radiosondes? What layers? Weighting of the layers?

    Which 2 MSU/AMSU data sets? Version number.

    How did he get the modeled TMT?

    Michaels and Knappenberger … they don’t know.

    • They repeat their own version of the Christy graph at the end of the post with much of the info you request on it, see fig5.

      • fig4 my mistake

      • That does not answer the question re fig 1. Besides: why don’t they supply data and source code for their own graphs?

      • David Springer

        ehak, you are Kyle Hilburn who recently left RSS after 13 years with them, right?

        https://www.linkedin.com/in/kyle-hilburn-a1472a18

      • Yep it does answer some of your questions. They reproduce the fig1 graph with the info you ask for, ergo they must have some idea what data the Christy graph is based on, including data sets and their versions.

        The reason there's no code is that this is a blog post, not a publication, and they do point to the data coming from Climate Explorer, which anybody, including you, could get the source data from if you have a problem with what they present.

    • The internet is great – the data is available to all.
      And when I look at it, I get this:
      http://climatewatcher.webs.com/HotSpot.png

      1. The upper air data sets vary ( as they should, given the details, resolutions, sampling density, etc. etc. ) but they vary much more with the model than they do with one another.

      2. The observations tend to corroborate the modeled stratospheric cooling.

      3. The observations tend to corroborate the North Polar maxima.

      4. The model tends to indicate an increase in lapse rate ( less warming with increasing height ) in the polar regions. The observations tend to corroborate this.

      5. The main feature, and point of large maximum warming in this model and most others is the ‘Hot Spot’ centered over the equator at about 300mb. The Hot Spot is probably larger than the oval I used to indicate it. The Hot Spot and a modeled decrease in lapse rate ( more warming with increasing height above surface ) occurs from roughly 60S to 60N.

      So, why no Hot Spot?
      No one knows.
      It could be a ‘natural variation’ that will reverse and exhibit the Hot Spot in the future.
      Or, it could be a systemic error in the models.
      Or something else.

      The explanation of the Hot Spot is heating from convection which shares the surface warming with the upper atmosphere. The convection of the tropics is dominated by the ITCZ. But the convection of the tropics is because of conditional instability, meaning it is dependent on the chaotic general circulation to produce convergence of polar air masses from each pole.

      One can imagine in the models that the increase in lapse rate reduces the static stability of the tropics, but the conditional instability still produces thunderstorms. A slight infidelity in the convection schemes of the models could erroneously accumulate over time.

      Further, convection and radiance for a given atmospheric profile are functions of one another. Convection warms and humidifies the upper atmosphere which increases stability. But a warmer, more humid upper atmosphere cools at a greater rate which decreases stability. Such conditions are inherently unstable and unpredictable.

      • “So, why no Hot Spot? No one knows.”

        Actually, Dr David Evans has a very plausible theory as to why there is no hotspot. The expectation that there should be one is fundamental to the simple climate model, which he thinks has a simple oversight related to the dependent variables in the feedbacks. Put as simply as possible, the simple model holds the other feedbacks to a forcing constant when they should be treated as dependent on one another.

        TBH his posts over at Jo Nova's are way wordier and more complex than they need to be. He draws a very cogent and reasonable argument, supported with a lot of math, that I think could be condensed to a simple fundamental point about which he may or may not be right. But it explains why no hotspot is found even though it should be there no matter what caused the warming we all agree occurred.

      • “So, why no Hot Spot?”

        In a word, convection. As soon as a “hot spot” starts to form, it becomes lighter (less dense) than the surrounding air and starts to rise. This is how clouds form, too. Cooler surrounding air moves in to replace it. Once the warm air has risen enough, it will start to cool. There are gigantic convection currents moving hot air from the tropics north and south across the planet. This is why there are large bands of dry, arid, even desert land where they come down, now without moisture that they dropped on their journey.

        In other words, with a proper understanding of atmospheric physics, you can’t make a mistake like the current generation of climate modellers has. That’s just mistake #1. Mistake #2 was to modify the historical data record to conceal mistake #1.

      • Convection does not ‘humidify’ the upper atmosphere. Rain produced during convective lifting removes moisture from the atmospheric column. Convection also induces compensating subsidence, pushing drier air aloft downward. Only relatively shallow non-precipitating convection moistens the convective cloudy layer.

    • Did you look at the publications and the websites they linked to?

    • There were ‘scientists’ from the same bunch as these, who claimed that nicotine wasn’t addictive and smoking did not cause lung disease. Industry funded spin.

      • Yes,Ceist Alles, tobacco-funded spin saying smoking is safe, is exactly like government-funded man-made global warming spin. It’s all about vested interest of the funder.

  3. Brian G Valentine

    Good work Pat and Chip. I believe that the "models" used by the IPCC not only provide a "global" picture that is wrong, they also predict phenomena at the "local" level showing that GHG "forcing" can't be correct – at least as it is modeled. Such as the predicted rainfall patterns in the Tropics, which are all wrong. Rainfall is heat transfer, and if that can't be accounted for correctly not much else will be either. So the results of these models provide evidence that the assumptions behind them are incorrect.

  4. Now that we are close to spending vast amounts to mitigate the effects of and defend ourselves from ultra-high global warming, it is high time to determine exactly WHY we are willing to do so. It has not been shown that atmospheric CO2 is to blame (nor is it the major factor of an assortment of causes).

    All money spent to counter atmospheric CO2 has been (and will be) “wasted” – in that it will not curb the expected temperature increase. And now the US will extend its spending on existing programs for another five years. Last time I checked, atmospheric CO2 keeps rising – so far all the money spent has not altered the atmospheric CO2 curve.

    So, let’s spend our time, not in “disproving” the AGW theory, but rather to offer the best solutions to solving the problem. We have lost the battle, now that the agreement has been signed at the end of COP21. Our effort should be directed at minimizing the funds being diverted to “solve” the problem. We have more important ways to spend the money.

    • David Springer

      Hansen got it right this time. Nothing will come of the COP21 agreement due to it being non-binding. The vast majority of people who would be burdened under it don’t even know about it. Obama won’t ask congress if they intend to write any checks to cover the promises because he knows the answer is no. Better for him to let the alarmists fantasize about the decision going their way. Typical do-nothing politics. Talk a good game but don’t actually suit and play.

      • Actually, Congress did, essentially, write the check Obama requested – it is in the just passed spending bill.

        “In a victory for the Obama administration, the spending package released by congressional leaders on Wednesday won’t block American financial contributions to an international climate fund for poorer nations.

        “The bill, greens and Democrats say, doesn’t explicitly appropriate funding for President Obama’s pledged contribution to the Green Climate Fund (GCF). But since the legislation doesn’t formally block money for the GCF either, Obama is expected to be able to use current discretionary funding streams to send American money to it.

        ““Based on what we have reviewed so far, there are no restrictions on our ability to make good on the president’s pledge to contribute to the Green Climate Fund,” White House press secretary Josh Earnest said on Wednesday. ”

        http://thehill.com/policy/energy-environment/263447-spending-bill-wont-stop-funds-for-obama-climate-deal

      • @ David Springer – You don't fully appreciate the insanity of the supporters of COP21. According to the Canadian Green Party leader, it's a 'legally binding voluntary agreement', whatever that means.

  5. Pat Michaels

    I spoke to you after a meeting in Sydney, gave you a book and explained briefly why it is the physics which is wrong. Perhaps you haven’t digested this yet, so let me give you a few indications as to why it’s wrong.

    1) The AGW “physics” has complete disregard for the entropy maximization process which occurs in all natural processes.

    2) The ocean surface only absorbs solar radiation, that having a mean of about 160 to 170 watts per square meter. A blackbody at minus 40C would radiate that much, so they have not explained surface temperatures, and never will be able to do so with radiation calculations.

    3) Everyone knows that the “moist adiabatic lapse rate” is less steep than the dry. But the IPCC et al claim that water vapor (just over 1% of the troposphere on average) raises the surface temperature by most of 33 degrees. How could it do that and at the same time make the temperature gradient less steep, thus raising the region 10Km above the surface by about 60 degrees or more. How could radiative balance with the Sun be achieved? For a rain forest region double or triple those figures.

    What is in that book is the correct physics. You will never defeat them with articles like this: we need to attack the root cause of the problem, namely the false physics on which it is based. Otherwise, the world will just start 30 years of warming after about 2030 and they’ll be celebrating. We have to get the message across well before 2025.

  6. Nice! Clearly we also need to rethink the US Social Cost of Carbon modeling, which is based on these hot models and used to justify numerous Federal actions.

  7. ATTP has a good post up on this subject with a rebuttal study based on different climate sensitivities for different components contributing to climate sensitivity.
    All sounds a bit precious.
    This article is a bit too verbose and hence has some difficulties in getting the ideas across. The anomalies graph is the money graph but a graph of actual middle tropospheric temperatures would be helpful to the readers. They are a lot lower than surface temperatures and it might prevent people from making accidental mistakes when talking about the actual temperatures.
    The idea that the models do exist, like for like, for this area of the earth’s atmosphere is never brought up, people just make the usual surface and lower troposphere comparisons and comments.
    I fully understand why Mosher and Zeke have never mentioned them before. It just busts their arguments so completely.
    Finally thanks for using Cowtan and Way as references. These boys results are so wonderful they will be used in future textbooks on how to confuse politics (or religion) with maths.
    Note I make no aspersions on their maths. They are both obviously good at Maths to put up their programme and have it so completely accepted by the General climate science community.

    Where they proselytised was in foisting their belief system in global warming onto their temperature modelling programme.
    They have the classic horse racing programme: input of known data, so when they run models in hindcast mode it puts out the exact past known conditions/temperatures around the globe, so it is (past) accurate. Which is what I want and expect from a good model.
    Excellent work boys.
    The problem consists in predicting which nag (temperature) is going to get up where.
    When they put the form in high climate sensitivity is rated very strongly and distance from real observation sites is ranked even higher.
    Sites that give low temperature readings are banned from use as these would make the horses race slower.
    The races are staged and results given at virtual tracks all around the Arctic. Any corroboration from satellite data does not seem to be mentioned.
    The best thing is that having put in high performance to moderate indicators any drop in Arctic temperatures is magnifying the drop in temperatures in their model. Give another 2 years and they will be able to proclaim their model showed the true start of the return to normal temperatures before anyone else.
    Nobel prize, gentlemen?
    I so hope so.

    • October – 1.06C, a record
      November – 1.05C, a record
      December – ?, but likely a record

      In absolute terms, warmest month is July 2015, and the coldest January 1861.

      • Are these numbers from a surface statistical model or the satellites?

      • David, Need you ask?

      • Yes, the rising Confederacy had both submarines and satellites. When it comes to 2 meters above the land and the SSTs, I follow the advice of the chief scientist at RSS and ignore the politicized junk coming from up above.

      • Preferring statistical junk, JCH? But then if none of the data is any good we have nothing to explain. Works for me. Global temperature is not measurable.

      • “In absolute terms, warmest month is July 2015, and the coldest January 1861”

        Not merely comparing apples with oranges, but comparing apples with chocolate teapots…

      • JCH,

        And the atmospheric thermometer record covers what percentage of the earth’s surface?

      • Skepticgonewild, it covers no fraction of the earth. It is a convenience sample of data points. Even worse, on an area basis most of it is a proxy sample using water temperatures, because there is more water than land. Then it is massaged and adjusted using either area or field averaging techniques. This is a crude estimating method at best, in fact a well known one, not a statistical method. But JCH prefers it to the satellite instruments that were actually designed to measure global temperatures, because he likes the results. AGW is based on this pseudo-statistical junk.

        Although average temperatures vary quite a bit within very small regions, the anomalies in average temperature (change over time) are very highly correlated over large distances, often exceeding 1000 km. For that reason you do not need many observation points to provide good estimates of the global temperature anomaly. You need both land and sea records, however.

        Cowtan and Way’s contribution is purely to fill in the missing grid points in various temperature data sets. They offer either a kriging version (which GISS uses anyway), or a version based on the use of satellite snapshots (which does not use the satellite trends or anomalies, but purely the temperature around a large region at one instant in time).

        Quite a few of the missing points Cowtan and Way fill in are around the Arctic and Antarctic, which have warmed faster than other regions. This is why their global trends are higher than HadCRUT4, for instance. Cowtan and Way do not alter existing data in the surface data set if it is provided for a particular grid point – they only estimate missing grid points.
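
        A toy illustration of that kind of infilling, using simple inverse-distance weighting rather than their actual kriging code (coordinates and anomaly values are invented):

        ```python
        import numpy as np

        def infill(lats, lons, vals, lat0, lon0, power=2):
            """Estimate a missing grid cell from nearby observed cells (inverse-distance weights)."""
            d = np.hypot(lats - lat0, lons - lon0)   # crude distance in degrees
            w = 1.0 / d**power
            return float(np.sum(w * vals) / np.sum(w))

        # Invented anomaly values (deg C) at nearby observed grid cells.
        lats = np.array([70.0, 72.5, 75.0, 70.0])
        lons = np.array([-40.0, -35.0, -30.0, -30.0])
        vals = np.array([1.2, 1.5, 1.8, 1.4])

        print(infill(lats, lons, vals, lat0=77.5, lon0=-32.5))  # estimate for an empty cell
        ```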

      • “fill in are around the Arctic and Antarctic, which have warmed faster than other regions”

        The Antarctic has warmed faster ?

      • Peter Davies: “Quite a few of the missing points Cowtan and Way fill in are around the Arctic and Antarctic, which have warmed faster than other regions.”

        The Antarctic is cooling, as can be seen from the temperature data and inferred from the increase in sea ice and – According to Zwally’s re-analysis of the assorted datasets – land ice too.

      • Roscoe Shaw,

        Yes, the Antarctic has also warmed faster than most of the rest of the world although not quite as much as the Arctic. Click on http://onlinelibrary.wiley.com/doi/10.1002/qj.2297/full , then click on figure 1 to expand it to a readable size. Even UAH (probably 5.6) shows this.

        Don't be confused by the fact that Antarctic sea ice is growing. Some of this may be because higher Antarctic temperatures increase land ice melting, releasing fresh water, which has a lower density than salty sea water. When this fresh water rises to the surface it is cooled by exposure to the air and freezes at a higher temperature than the salty sea water it displaces. Offshore winds may also help spread the ice. You can probably find more details online. So more Antarctic ice does not prove Antarctica is getting colder.

      • From what I have seen…Antarctic temps have been flat or slightly colder, sea ice has been high, overall ice volume flat to higher. Changes from study to study but overall, I would describe it as “quasi-steady state”

      • Don’t be confused by the fact that Antarctic sea ice is growing. Some of this may be that the higher Antarctic temperatures increase land ice melting, releasing fresh water which has a lower density than salty sea water.

        Don’t be confused by an incoherent explanation for why there is more sea ice.

        https://notalotofpeopleknowthat.files.wordpress.com/2014/05/14southernssta_thumb.png

        It is unusual and surprising to claim “antarctic melting” is driving the sea ice increase when the sea surface temperatures are cooling and there is no LOD evidence of melting Antarctica.

  8. Interesting post, thanks.

  9. Lukes (like Pat Michaels, Roy Spencer et al) do the “cause” a great disservice because they implicitly agree that the AGW false physics is correct, but they just got the magnitude of the assumed warming wrong.

    What they got wrong is in fact the whole paradigm. See my first comment above for more detail. They cannot explain Earth’s surface temperature with radiation calculations and it is totally incorrect of them to add the back radiation to the solar radiation and then put the total into Stefan Boltzmann calculations. For example, if an electric bar radiator warms an object to 350K and you apply 16 times as much radiation with 16 identical radiators, then Stefan Boltzmann calculations give you double the temperature (because 2^4 = 16) and obviously the object does not get heated to 700K.

    The correct paradigm lies within a totally different ball park which is at the forefront of 21st century atmospheric physics research. I think you know where you’ll find it and it will blow your mind when you gain a correct understanding of what is really happening in all planets. Little wonder Hansen didn’t hit on it.

    • There’s always a problem when you take molecular behaviour (i.e. a CO2 molecule will radiate heat) and deduce macro-behaviour from it. Statistical Mechanics does not map directly onto macro fluid and heat flow – there’s only an approximation.

      Add that to the approximation inherent in all statistics and unreliability of time-series statistics, and floating point computer arithmetic, and …

  10. Michaels and Knappenberger say: “Note that the satellite and balloon-sensed temperatures are independent measurements.”

    Is this true? I thought someone (Happer, Titely ???) implied during the recent Senate hearings that the balloon data was used to calibrate the satellite data.

    • TMT weighting functions include part of the stratosphere, which IS cooling, as well as upper parts of the troposphere, which are warming. The question is how well the Christy graph at the top accounts for this in calculating the CMIP5 mid-troposphere anomaly.

      Also, note that Gavin drew the measurements with uncertainty ranges.

      • Brian G Valentine

        We are confronted with a "possible" paradox pointed out by Gerlich and Tscheuschner and some others, of an interpretation of this phenomenon as heat transfer from a cold reservoir to a warmer reservoir with no work apparently expended. Last time I heard, that just might be outside the realm of thermodynamics.

        There are probably lots of ways to address this issue, the most common method being “laughter” followed by hurling “denier” epithets

      • See comment below. Same weighting function was applied to all three datasets.

      • That's all well and good but it still leaves a huge hole where the Hot Spot would be for all the different RAOB and MSU analyses.

      • TMT weighting functions include part of the stratosphere which IS cooling as well as upper parts of the troposphere which are warming.

        Real but probably doesn’t matter much.

        The Fu treatment subtracts the stratospheric signal ( which Spencer says is wrong because that too introduces contamination ).

        But neither RSS, Fu RSS, UAH, Fu UAH, RATPAC A or B, v1 or v2 indicate a Hot Spot.

        http://climatewatcher.webs.com/HotSpot.png?update=HotSpot.png

      • The Sun does work.

      • Brian G Valentine

        You don’t understand the process. It is NOT a process that happens when there is already a state of thermodynamic equilibrium as you appear to assume when you state no work is done: rather it is a process which is increasing entropy (as per the Second Law) for the very reason that new solar energy absorbed at the top has created unbalanced energy potentials and of course that means entropy can and will increase. The “work” (energy) which you think is missing did of course come from new solar radiation in the morning. Perhaps you are not qualified in physics and thus this is all out of your depth.

      • Brian G Valentine

        I don’t know about “qualified,” I’ll let somebody else judge that. But the issue is not as simple as you think, because you have to prove that the heat (as an engine) is at least as large as the amount of work required – the absolute lower bound given by the Carnot efficiency

      • BGV: The quantification has been checked. The temperature gradient has had the life of the planet to evolve under the force of gravity. Maintaining temperatures by convection, diffusion and conduction does not require energy flux of the magnitude required to do so by radiation. What happens between the hot water in a bath tub and the wall of the tub just below the surface of the water? Read the comment about the planet Uranus which proves my point.

    • I thought someone (Happer, Titely ???) implied during the recent Senate hearings that the balloon data was used to calibrate the satellite data.

      I’m not intimately familiar with the details of the algorithms, but estimates of the absolute characteristics of atmosphere are necessary to make sense of the energies received.

      So climatologies are used ( or for RSS a model which is also based on observed input ).

      However, this month’s MSU anomalies are independent of this month’s RAOB anomalies.

      • The satellite data is really a series of 10-15 different satellites and their instruments joined together over time, each needing recalibration, so it is not as clean as you might think. It’s like slightly changing your thermometer location every couple of years. Most surface sites would not do that or count it as an acceptable long-term record. Calibration against balloons needs to be ongoing and would be essential to keep them on track and agreeing with each other. Adding in drift, it is just very messy. The disagreement between the two independent satellite records attests to that, and is why Mears of RSS says this dataset has a lot of “structural uncertainty”, so he thinks the surface trend is more robust.

      • Sure, the satellite temps are messy and complicated. But they are calibrated to the balloons which are consistent over a long period of time. Also, satellite measures a much more comprehensive area in a consistent way…not a hodgepodge of different measuring systems of a sparse, mostly land-based scatter.. The two satellite data sets, in their current version, show shocking agreement, but they have undergone many adjustments as well. Also, satellite data since the divergence from near-sfc temps 15 years ago, track more closely to the hourly model initializations and balloons. Mears of RSS hates his own data because it is the coldest one. Christy and Spencer love their UAH because it’s cold.

        Every new version of GISS or Hadcrut has increased the warming trend. Every single one. Skeptics told me this and I didn’t believe it, but no one (Stokes and Mosher?) has ever produced any contrary evidence when I have asked. This “adjustment warming”, if the skeptics are indeed correct, is beginning to get as statistically unlikely as Bernie Madoff’s incredibly consistent investment returns. I’m not sure what to believe.

        Everybody loves the data that proves their point and hates the other one. The most robust is clearly Wood For Trees. Everybody hates that.

      • What’s the problem with the little disagreement between the satellite series, yimmy? Just average them. It doesn’t seem to bother you that the spaghetti ball of 102 climate models are all over the freaking map.

      • “no one (Stokes and Mosher?) has ever produced any contrary evidence when I have asked”

        Well, you could look for evidence yourself. But I gave some downthread. At least for the last ten years, in the satellite period, GISS adjustments have no marked trend effect. UAH, OTOH, is adjusted downward by a lot.

      • Specifics…which version of which data set showed a new overall version that had a smaller warming trend?

        Or, where can I find the data?

      • The graph shows the difference between current GISS and versions of 2011 and 2005. The data is here. You can find wayback versions.

      • Thanks. I will investigate when I get a chance and let my skeptic friends chew on it and comment.

      • I gathered the 2005, 2011, and 2015 versions of GISS

        Each newer version shows a stronger warming trend.

        The 2015 version has a 0.20 deg per century stronger warming. The past was cooled and the recent years warmed.

        http://postimg.org/image/f5dwynf8v/

        The changes between 2005 and 2011 are pretty small but 2011 to 2015 have been substantial.

        I still haven’t ever seen a new version of GISS or Hadcrut that reduced the warming trend.

        I must be missing something or making data errors. Anybody have one?
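
        For anyone who wants to reproduce the check: it is just an ordinary least-squares trend on each archived version and a difference of the slopes. A rough sketch (file names and column layout are hypothetical):

        ```python
        import numpy as np

        def trend_per_century(years, anoms):
            """OLS slope converted to deg C per century."""
            return np.polyfit(years, anoms, 1)[0] * 100.0

        # Hypothetical two-column files: year, annual global anomaly (deg C).
        v2005 = np.loadtxt("giss_2005_version.txt")
        v2015 = np.loadtxt("giss_2015_version.txt")

        t_old = trend_per_century(v2005[:, 0], v2005[:, 1])
        t_new = trend_per_century(v2015[:, 0], v2015[:, 1])
        print(f"2005 version: {t_old:.2f}  2015 version: {t_new:.2f}  "
              f"difference: {t_new - t_old:+.2f} C/century")
        ```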

      • My comparison was in the satellite period. You have gone back to 1880. Comparing apples to apples – GISS from 1979, on the same anomaly base (1981-2010), you get the results I showed.

      • I ran the numbers from 1880, 1930, and 1945. All showed progressive warming of the GISS trend line from 2005-2015 adjustments.

        I still haven't seen what I am looking for: a new version of GISS or HADCRUT that shows a lower overall warming trend than the previous one.

  11. Maybe the authors of this post can explain why the figure presented by Gavin Schmidt comparing the TMT from the models and the satellite measurements appears quite different to what is presented in the first figure here.

  12. For some context on the HadCRUT4 comparison I’ve plotted what the obs. trend would be if 2015-to-date were included. I’ve also made a conservative estimate for the 2016 HadCRUT4 anomaly of 0.8ºC and plotted how that will change the appearance.

    What a difference two years makes.

    • That’s why you stop “in” 2015. So redo it to make it correct.

      • Wait a few weeks until 2015 is actually over and then include it.

      • 2016… what a great year it’s going to be in the advancement of surface air temperature improvement:

        The reconstruction methods have also been applied to the HadCRUT4 data set, yielding a range of estimates of global near-surface temperature change since the mid 19th century. Results show relatively increased warming in the global average over the 21st century owing to reconstruction of temperatures in high northern latitudes, supporting the findings of Cowtan & Way (2014) and Karl et al. (2015). While there is broad agreement between estimates of global and hemispheric changes throughout much of the 20th and 21st century, agreement is reduced in the 19th and early 20th century. This finding is supported by the climate model trials that highlight uncertainty in reconstructing data sparse regions, most notably in the Southern Hemisphere in the 19th century. These results underline the importance of continued data rescue activities, such as those of the International Surface Temperature Initiative and ACRE.

        The results of this study will form an addition to the HadCRUT4 global near-surface temperature data set.

      • What “reconstruction methods”?

      • Meteorological winter is DJF, so yes, we don’t even have to wait.

    • How can you make a ‘conservative estimate’ for 2016 if it isn’t even 2016 yet? What is your estimate based upon?

  13. Maybe someone has already asked this, but why in the first figure are the observational datasets averaged? Why not simply plot the individual datasets?

    • It was for Congress. They are not that bright so you need to keep it simple. Including all of the individuals along with the mean has been shown many, many times before by both “sides” and we all know what it looks like. A few of the model runs do fall closer to the observations. Showing the confidence intervals gives one the idea just as well as showing the spaghetti graphs. What would be interesting is to see if the models that give rise to those more realistic projections have lower sensitivities. It has been shown that there are models that run very hot, are unrealistic and are responsible for the higher end range of the IPCC projections and those models should probably be thrown out (I believe) in the next round of IPCC. And those that give projections closer to the observations should be looked at to see what they get right.

    • ATTP – There are dozens of ways this much data can be sliced and diced. Your questions are not criticisms. What do you see wrong with the way they actually did it? Be specific.

      • Okay, I think that it would be better to simply plot the different datasets, rather than averaging them. On the other hand, maybe one of the authors could explain why averaging them is likely to give a better representation of reality than any individual dataset.

      • If you are talking about the model output, I would agree. The runs of each distinct model with a distinct parameter set could be averaged and each distinct model/parameter set plotted separately.

      • I was talking about the observational ones, but you could indeed do the same for the models. On the other hand, the models are all aiming to represent the same system (rather than measure it) so there is an argument for using the model outputs to determine some range, as has been done in the figure of Gavin’s. However, there is no real reason not to do both.

        @RogerAPielkeSr Again – a more reasonable presentation shows far less divergence. @Tucano238 @DrShepherd2013 pic.twitter.com/eJYi15VP96— Gavin Schmidt (@ClimateOfGavin) December 11, 2015


      • I guess the point of this post is to highlight the difference between obs and models. It seems like visually that idea can be best expressed with a graph with as few lines as possible.

    • keep it simple for congress was my first thought. Fig4 does separate them out

    • That only shows that in an El Niño year temps go to the upper part or above the 75% interval… except this one, as they’re below the 50% mark, judging by eyeball.

      Of course, it’s what happens thereafter that matters.

      • 2016 may show that an El Nino year does that. The 1997-1998 El Nino is almost never called the 1997 El Nino, just as the 2015-2016 El Nino may never be called the 2015 El Nino.

        What could come later is additional warming. Will that matter?

  14. I like the apples-to-apples-to-apples comparisons.

    I do not like the apples-and-oranges/fruit salad metrics that combine metrics of dissimilar things which have dissimilar causes from different dynamic systems.

  15. The weighting functions by level that are in Christy’s TMT were also applied by Christy to the radiosonde data and to the model data. So it is a true intercomparison that does not suffer from any “Cowtan effects”.

    • My understanding of remote sensing (perhaps very old school and out of date) is that some form of “ground truth” is necessary in order to calibrate the indirect (satellite based) measurement with a direct measurement of the parameter being measured. My original question was whether the balloon measurements of temperature (direct measurement) is being used to “ground truth” the satellite based (remote) measurement of temperature.

      This is a different question than whether the same weighting functions were used in the inter-comparison.

      • I used to build hot-air balloons. Lots of them. But no, I did not blow them up. Each one had a big thermometer hanging inside at the very top.

      • The MSU anomaly of this month is independent of the RAOB anomaly of this month.

        Some calibration is necessary, but the fact that RSS uses a model and UAH uses a climatology, but both are quite similar should put perspective on the extent of this.

      • MS, I know for sure that UAH is radiosonde ground truthed. Not only that, but by latitude since troposphere height varies with latitude. I dunno about RSS, but recall Mosher saying it is calibrated to model temperatures (one would presume from YE 2005 initialization for CMIP5).

  16. Well, we know from actual measurement that the surface layer (up to 450 meters) is warming at decade levels at about 2/3rds the IPCC standard (64% of the CO2 alone immediate forcing).

    The higher layers where clouds form don’t seem to be warming at all.

    In previous thread it was noted that the models don’t do clouds very well.

    Is this temperature divergence a cloud modeling problem? Or some other fundamental flaw in the models?

    It is possible that more CO2 increases the acceleration of gravity and that is cooling the upper atmosphere. The acceleration of gravity is a model parameter so they could just tweak that.

    • PA, for the long version of what else is wrong, see my models post at WUWT earlier this year. The short version: model grid cells are forced to be large, because anything smaller is computationally intractable. So sub-grid-scale processes (e.g. thunderstorm convection cells) have to be parameterized. There are two ways of doing so, but both require making attribution assumptions. The parameterizations are tuned to best hindcast, in the CMIP5 case from 2005 back to 1975. And the IPCC clearly states that the underlying attribution for that period's warming is mainly anthropogenic CO2. By dismissing natural variation, the models now run too hot in a phase that increasingly resembles 1950-1975, when there was a slight cooling. No amount of quibbling can hide the divergence. Worse, it was said in 2008 that 15 years of divergence would invalidate the models, and in 2011, 17 years. Well, this month we hit 19 years. Models are busted.

      • Worse than that, the only reason we care about temperature is because of how we expect it to affect the weather and ecology and long term engineering projects.

        When I was in school, back in the late 90s, the financial engineering industry was gearing up to sell weather/climate based derivatives and all kinds of planning tools (it was going to be my bread and butter, since it would require new risk management tools). Any day we'd see global warming effects on weather. If the effects were big enough to make any kind of decisions, there should have been an obvious signal in the early 2000s.

        Know what happened? Financial Engineering News disappeared because global warming couldn’t support any legitimate marketable products, and all the cool stuff the industry could do was pretty much already done.

        15 years of “warmer than ever” and most “record temp years” later, a weather signal is still nowhere to be found.

        Even with natural variability perfectly offsetting additional CO2 warming, we should have seen adverse cost effects due to the persistently high temps after the ’98 El Niño step change.

        It’s absurd that we’re worried about warming 15 years after costs should have shown themselves. We can’t produce GHGs fast enough to be a problem, not without some Dr. Evil type release of methane using high explosives.

      • “The parameterizations are tuned to best hindcast, in the CMIP5 case from 2005 back to 1975.”

        Thank you for this information. Is this the main reason CMIP5 models are running hot? Why did they choose 1975 as a starting date and not something earlier (say, 1950)?

      • Euler identity, I don’t know. The last version of the CMIP5 protocol required the first submission to be a 25-year hindcast from initialization, YE 2005, backwards to 1975. Now that year just happens to be about when the previous ~25 years of warming started. That probably provides powerful motivation to all the modelling groups who want their next modelling grant.

      • aaron, a lot of money has been and will be made around financial weather derivatives. Climate change…………….not so much.

      • Warren Buffett made a bundle off of climate change. The huge hurricanes of 2005 and dire warnings of “bigger and stronger” scared everybody, driving up the cost of “risk”. Buffett went in with his re-insurance business and bought up all the risk… or more aptly…he “shorted risk”. The next decade was very very quiet for hurricanes… and Buffett cleaned up, since the risk was overpriced AND he got lucky with under-active hurricanes.

  17. I realize these models can be pretty stinky, but they seem to be what’s available.

    When I was running dynamic models I would eventually home in on the versions with the best history match. If time & budget allowed it I preferred to run a series of versions which gave a fairly wide range of outcomes.

    Why not select the 5 models which yield the better matches when they’re run with actual concentrations to 2014? Or is climatology so devoid of computer horsepower that a new ensemble can’t be generated?

    I suggest RCP8.5 be tossed into the dustbin of history. A 2015-2065 forecast can be run with concentrations extrapolated using both RCP6 & an intelligent extrapolation of the “COP21” pledges. The “gain” from the Paris pledges can be guessed from model results. This way you will see whether Paris makes any sense at all. I’m suggesting a 50 year run to save time steps, & because I know predicting the future beyond 50 years is pretty useless.

    Does anybody have the tools to do this? How much do they charge for it?
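
    A minimal sketch of the kind of history-match down-selection described above (everything here is hypothetical: the arrays, the run names and the five-member cut-off; real CMIP5 series would come from an archive such as the KNMI Climate Explorer): rank each run by its RMS error against an observed series over the common period and keep the best few.

      import numpy as np

      def rank_models(model_runs, observed):
          """Rank model runs by RMS error against observations over a common period.

          model_runs: dict mapping run name -> 1-D array of annual anomalies
          observed:   1-D array of annual anomalies on the same years
          Returns a list of (name, rmse) pairs, best match first.
          """
          scores = []
          for name, series in model_runs.items():
              rmse = np.sqrt(np.mean((np.asarray(series) - observed) ** 2))
              scores.append((name, rmse))
          return sorted(scores, key=lambda item: item[1])

      # Hypothetical usage: keep the 5 best-matching runs for re-projection.
      # best_five = rank_models(model_runs, observed)[:5]

    As replies below note, a good hindcast fit can be spurious, so any such subset would still need out-of-sample checking before being trusted for projection.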

    • “I realize these models can be pretty stinky, but they seem to be what’s available.”

      Isn’t that pretty much like “I realize the emperor is naked, but those are all the clothes that were available?”

    • FL
      I too wonder why we have to keep 102 IPCC models, unless it is for political reasons, so as not to hurt anyone’s feelings.

      Would it be reasonable to down-select to the models that most effectively match observations, say a subset of 20, and rerun the comparison?

      PCMDI at LLNL tracks model performance, and selecting a realistic subset could start with their evaluations.
      Scott

      • It keeps a bunch of warmists employed.

      • I suspect that if you keep just the models that “match to reality” “in” 2015, then you could easily be throwing away the very best models and keeping the very worst models.

      • JCH: who wrote in 2015? A history match uses history, meaning whatever is available from a point in time in the past to as late in time as possible. I assume the 2014 data are collected and available, so that becomes the end point. I would use 1880 to 2014…but I’m not sure how they initialize a worldwide grid using 1880 data (it must be pretty stinky).

      • You do not want to just pick the models that best fit your favored observed temperature series because the correlation could well be spurious, particularly if there is conscious or subconscious tweaking of assumptions to generate fits after the data have been seen. Even without tweaking, the one that fit past data best is not very likely to fit future data as well if, say, it was a good predictor when one of the natural cycles was in one phase but is a poor predictor now that that cycle is in a different phase.

  18. I found this on the NWS website:

    “Twice a day, every day of the year, weather balloons are released simultaneously from almost 900 locations worldwide! This includes 92 released by the National Weather Service in the US and its territories. The balloon flights last for around 2 hours, can drift as far as 125 miles away, and rise up to over 100,000 ft. (about 20 miles) in the atmosphere!”

    It seems to me that this is an extraordinary data set that appears to be largely ignored in the arguments between those that advocate surface thermometers and those that advocate satellite data.

    How far back does the balloon data set go?

  19. The main problem I have with highlighting TMT is that the tropospheric influence on that measurement is essentially determined by tropical sea surface temperature changes. It’s therefore quite redundant, and over-complicated to look directly at TMT without checking SSTs.

    Here is a plot of 1979-2015 trends in CMIP5 models (104 realisations, historical+rcp45) for tropical average SSTs. They’re plotted against the eventual change at end of century relative to pre-industrial. You can see observed trends are below all but one model run, indicating very low tropical SST warming over 1979-2015. That in turn means we should expect observed 1979-2015 TMT trends well below CMIP5 models.

    However, also plotted are trends from 1951-2015, which agree well with observations. This suggests the 1979-2015 observational period is strongly affected by cooling internal variability centred on the tropics and not a representative sample for a robust comparison.

  20. The first graph shows a drop in model temp forecasts from 1990-1995 which mimics reality. I see this in all the forecast verification articles.

    But this is not realistic. These “forecasts” were generated using hindsight and include the Pinatubo volcanic eruption forcing. This is equivalent to adjusting your stock market forecast in 2015 to account for the Great Recession in 2008.

    A real forecast does not have knowledge of the future built in.

    The scientific reason for this, I assume, is because they want to consider only the response to “known forcings”, not unknowns like volcanoes.

    But the real future doesn’t work that way. The real forecasts are worse than this graph indicates. We don’t know future volcanic eruptions, or how much GHG will remain in the atmosphere, or what future emissions will be, or many other variables.

    I have developed forecast models for 35 years. The error bars are almost always larger than you would think. Error ranges are easy to pin down in weather because of the volume of cases. In climate modelling, the verification sample is essentially zero in real time, and the distant future involved makes the unknowables huge.

    • Yeah, it looks like the pre-2000 part of the red line is a hindcast that benefits from being informed by actual observations and the post-2000 red line is a forecast that ain’t showing any skill. Feed the models decent data from observations and they can do a pretty good hindcast. How much is this costing us?

    • Volcanoes would have a negative effect, and less solar output, too. So maybe we should be hearing about which “sporadic” events would push things positive and which negative, along with the hype about those driving them catastrophically positive; it seems most “sporadic” ones would have a negative impact. And, of course, there is the cloud-cover variable, which modelers do not seem to have much of a handle on globally, but which is certainly a major factor locally. Playing tennis in the SW USA is nice even at 40F when there are no clouds blocking the sun (it feels like 60F; some players even wear shorts and T’s!), but it is cold for playing when clouds cover the rays. So when is a locale’s temperature taken at all of those sampling points? How about a cloud-cover factor (percentage and density) for each station to “normalize” its data? At least it would start to give some handle on the “cloud” knob. Two-thirds cloud cover of the earth must have a significant effect. Weather forecasting is dominated by “fronts” (and their associated clouds and sunshine) moving in and out of areas. What will be the effect when cloud cover is 100% and dense? Surely even ocean buoys can be adjusted to read the solar/cloud factor as well as the water temperature. We should all be concerned that there is such a lack of knowledge about the effect of clouds!

  21. 1. The comparison between surface temps and model predictions hasn’t been done properly.
    A) observations are SAT and SST
    B) they compared SAT and MAT

    2. Satellites do not measure temperature. The changes in UAH and RSS over time indicate substantial structural uncertainty, that is, changes due to the analysts’ choices made with regard to specific adjustments.

    • We should put up 102 satellites and take the multi-satellite mean. Or just save a lot of money and go with the balloon data.

      • Thanks for stopping in to school Steven, Chip.

        When were the model runs done that provided the data to construct the red line?

    • Hey Steve,

      Re: 1) See our Figure 3 (and description)
      Re: 2) Wx-balloon obs are included in our MT comparisons

      -Chip

    • Steven Mosher | December 17, 2015 at 12:37 pm | Reply

      2. Satellites do not measure temperature. The changes in UAH and RSS over time indicate substantial structural uncertainty, that is, changes due to the analysts’ choices made with regard to specific adjustments.

      http://www.climate4you.com/images/MSU%20UAH%20MaturityDiagramSince20080508.gif

      GISS apparently doesn’t measure temperature either. The change in GISS over time indicates substantial structural uncertainty, that is, changes due to the analysts’ choices made with regard to specific adjustments.

      http://www.climate4you.com/images/GISS%20MaturityDiagramSince20080517.gif

      • Try this comparison:

        http://www.moyhu.org.s3.amazonaws.com/2015/12/uahadj1.png

        The changes in UAH are not because of data being added, either. That kind of change in UAH demonstrates BIG structural uncertainty.

      • ehak (@ehak1) | December 17, 2015 at 3:17 pm |

        The changes in UAH are not because of data being added, either. That kind of change in UAH demonstrates BIG structural uncertainty.

        This statement is simply wrong. I assume you misplotted the GISS adjustments by accident. The RMS value of the GISS adjustments to post 1980 data since 2008 averages about 0.1°C. The UAH adjustments average about 0.05 °C.

        The post 1980 GISS adjustments since 2008 are shown below:
        https://i.imgur.com/iMNTxqV.png

        The changes in GISS are sort of a mystery since data shouldn’t be getting added in massive quantities to 1940 to cool it off. It is presumed most of the change is due to zombie stations (virtual data) or waterboarding of existing data (to make it confess). That kind of changes in GISS (twice the UAH adjustments) demonstrates BIG structural uncertainty.

        Further, the variability in the TLT is about 50% greater than the surface measurement so more adjustment would be expected to UAH than GISS.

      • Look at your own plots. Barely any change in trend for GISTEMP after 1979. Big change for UAH.

      • “I assume you misplotted the GISS adjustments by accident.”
        Try plotting them yourself. Actual GISS data, to the same anomaly base as UAH (1981-2010).

      • You guys seem fairly sure of yourselves.

        On the other hand Climate4you (Ole Humlum) has been tracking the changes since 2008. I wandered on to his site soon after he started tracking data maturity and have watched the data mutate.

        GISS is equally interested in cooling the past (30s and 40s) and warming the present. However I’ll look for some archived versions of the two data sets and see what I can draw (graphs and conclusions).

    • Thermometers don’t measure temperature either.

      • Well, they measure degrees Fahrenheit and Celsius.

      • Some “thermometers” measure the volume expansion of a fluid, some millivolts generated by two dissimilar metals, and some resistance. None measure the velocity of individual air molecules and use that data to calculate temperature.

      • What do degrees F and C look like, nicky? How does a thermometer know which is which? Is this really you, nicky? You usually aren’t this silly.

      • Roscoe is correct. The universe is an unknowable enigma. Thermometers are just social constructs that are part of a massive capitalist cabal using mind control processes to tell us what types of clothes we need to buy. This is why Maoists always wear pajamas: they refuse to buy into the deceit of thermometers.

      • “Roscoe is correct. The future is an unknowable enigma”

        I hope not! I develop forecast models for a living and get paid in proportion to their success.

        My point is that forecasting is tricky, especially when it involves the future. I certainly know that it can be done, but with climate, we have one case with ~27 years of verification. With weather we have millions of cases every year and the error bars are precisely known. Even if the climate models had verified perfectly, it could easily have been luck. I would argue that they have shown considerable skill: the warming has been less than forecast, but the sign (positive) has been correct. But it’s only one case, and the future has all kinds of crazy unknowns, from physics to humans to “unknown unknowns”.

  22. “Climate models versus climate reality”

    Climate models are known knowns
    Climate reality is a known unknown

  23. Chip Knappenberger just sent a new first figure from Christy (now in the post), which clarifies the balloons/satellites question.

    • That does not clarify the balloons/satellites questions. “Bulk atmosphere Surface- 50K F” means all the layers in RATPAC-A, for example. A simple average of those will produce a flat or negative trend. To get a positive trend there must be some weighting. Where is Christy’s description of that?

      Apparently Christy has used Climate Explorer. Which variable(s)?

      Which radiosondes?

      And how is it possible to produce a 5-year average centered on 1979 for the satellites and at the same time produce a 5-year average centered on 2015 for both radiosondes and satellites? That is impossible. It is also very misleading. A 5-year centered average for 1979 would require data from 1977, and a 5-year centered average for 2015 would require data from 2017.

      Why this obscurity concerning methods?
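
      For what it is worth, the arithmetic of a centered running mean is easy to check; a minimal pandas sketch (the series here is random and purely illustrative) shows why a 5-year value centered on 2015 cannot exist without 2016-2017 data:

        import numpy as np
        import pandas as pd

        years = np.arange(1975, 2016)
        annual = pd.Series(np.random.randn(years.size), index=years)  # hypothetical anomalies

        smoothed = annual.rolling(window=5, center=True).mean()
        # smoothed[1979] uses 1977-1981; smoothed[2015] would need 2013-2017,
        # so the last two values are NaN unless the window is truncated or padded.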

  24. The projection charts are supposed to show in a clear visual way how projections miss observations. But the visual is artificially accurate because of the backfill of the volcanic activity in the ’80s and ’90s. It would be a more accurate depiction of the accuracy of the projections if the volcanic forcing were removed from both the projections and the observations.

  25. “Which brings us to those surface temperatures.
    They’re a bit slipperier than the mid-tropospheric ones. The laboratories responsible for the three principal histories keep changing history, much more frequently than the satellite or balloon records are reconfigured.”

    Maybe more frequently, but to far less effect. Here (from here) is the plot of differences between GISS as published zero, four and ten years ago, compared with the difference made to UAH in just one “history change” this year.

    http://www.moyhu.org.s3.amazonaws.com/2015/12/uahadj1.png

    Which is slipperier? Here are the actual time series, with RSS as well:

    http://www.moyhu.org.s3.amazonaws.com/2015/12/uahadj2.png

    • It does seem that every method has to make adjustments that somebody else will complain about. But the title of this post is “Climate models versus climate reality” so what about the divergence of model projections from any/all observation methods? Is that gap important or should we not be concerned about it?

      • The version difference is important. The authors say of Fig 4 that “This is a devastating indictment of climate model performance.” It shows UAH v6.0 at 0.11 C/dec (well, it should) and the MMM at about 0.2. But S&C show the trends from 1979 in their files: for v6.0 it is 0.11, but for v5.6 it is 0.14. That seems more an indicator of the wobbliness of satellite measures. Are the climate models “devastated” because UAH changed its story?

        Or to put it another way, Fig 4 shows error bars on the MMM, but none on the comparing measures. But if two UAH versions can have such different results, what would the error bars be?

    • They are all being adjusted and all have errors. Since 2000, the satellite and near-surface readings have diverged. Balloons and model initialization histories track the satellite temps more closely since 2000…which IMO provides validation. The Wood For Trees Index (WTI) may not be perfect, but it takes away cherry-picked data by making everyone unhappy.

    • Note that the ordinate is scaled to milli-hairs.

    • Anyway, the real issue WRT adjustments is whether they made the result more or less accurate. Tough to judge that in some cases; also, there are radiosonde data that can be used for comparison with satellite temps.

    • Curious George

      Nick – how do you measure an “anomaly”? All your graphs show anomalies. I presume that the anomaly you show is a global average temperature anomaly. As a global average temperature does not change much, would you consider showing an actual global average temperature? I don’t like the additional degree of freedom implicit in an “anomaly”. Are you sure that the base for anomalies is the same for all your sources?

      • “As a global average temperature does not change much, would you consider showing an actual global average temperature?”

        No. Here’s why. I’ve written about it in many posts, e.g. here and here. The basic reason is that temperatures are very inhomogeneous. Some are taken in winter, some in summer, some in hot places, some in cold. You can average inhomogeneous data, but correct sampling is critical. You have to have exactly the right proportions of each kind. And with temp data, you have little freedom to do that. Someone else decided when and where to measure, and you can’t do it again.

        But the source of inhomogeneity is in the expected values, arctic, winter etc. If you subtract the expected value (anomaly), then the result is pretty homogeneous. And the big sampling issues fade away.

        Yes, I do always check the anomaly base, as everyone should. But even better, you can put things you want to compare on the same base. Because, while it is important to have a base, it doesn’t matter much when it is, for most purposes. So if you want to put everything on, say, 1981-2010, just calculate a mean (of the anomalies of other base) for that time for each month, and subtract.
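
        A minimal sketch of that re-baselining step (the series here is synthetic; a real series would be monthly anomalies from GISS, UAH, etc. on their native base periods): compute, month by month, the mean of the existing anomalies over 1981-2010 and subtract it.

          import numpy as np
          import pandas as pd

          # Hypothetical monthly anomaly series on some other base period
          idx = pd.date_range("1950-01-01", "2015-12-01", freq="MS")
          anoms = pd.Series(np.random.randn(idx.size), index=idx)

          # Monthly means of the existing anomalies over the new base period
          base_means = anoms["1981":"2010"].groupby(lambda ts: ts.month).mean()

          # Subtract the matching monthly mean from every value
          rebased = anoms - base_means.loc[anoms.index.month].to_numpy()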

      • Curious George

        Nick, in your second graph I can pick an approximate point [1985, -0.3]. Can you associate a temperature – not an anomaly – with that point? If not, what is the point of creating a graph which has nothing to do with temperature?

      • Curious George,

        Nick seems to be parroting Warmist nonsense. From his first link –

        “For the global mean, the most trusted models produce a value of roughly 14°C, i.e. 57.2°F, but it may easily be anywhere between 56 and 58°F and regionally, let alone locally, the situation is even worse.”

        Notice, ” . . . the most trusted models produce . . . ”

        Why worry about inconvenient fact, when you have a convenient model to hand? Warmists have no idea, but use made up sciencey terms in an attempt to deny, divert, and obscure.

        They are just trying to avoid facing reality. At least their individual delusions appear to be taking them along separate paths. Some now call for bans on coal, some call for bans on all fossil fuels, some are now merely anti pollution. After having to accept the Earth has cooled for four and a half billion years, the notion that it has started to warm up again is becoming a bit harder to swallow.

        Any fool can produce meaningless and misleading graphs, and Warmists frequently do. Even Warmists are unable to provide examples of any benefit emanating from the brightly coloured pieces of nonsense they produce.

        I’d suggest adult colouring-in books would be more calming, but I’m not sure the average Warmist could cope with the level of technical expertise required.

        You might upset Warmists if you keep pursuing them about actual temperatures, ie facts. They cannot cope with reality, which is why they keep denying it.

        Cheers.

  26. It looks like NOAA caved to the committee request:

    Agency turns over subpoenaed climate documents
    By Devin Henry – 12/17/15 02:55 PM EST

    http://thehill.com/policy/energy-environment/263623-noaa-turns-over-subpoenaed-climate-documents-to-house-committee

    • Who caved?

      “He amended his request in December, specifying he first wanted to see communications and documents from NOAA’s political, policy and non-scientific staff rather than its scientists.”

      • You are not very bright, nicky.

        Smith got tactical:

        “He amended his request in December, specifying he first wanted to see communications and documents from NOAA’s political, policy and non-scientific staff rather than its scientists.

        “I am encouraged by NOAA’s acknowledgment of its obligation to produce documents and communications in response to the committee’s lawfully-issued subpoena,” Smith said in a Wednesday statement.

        “I am also glad to see that NOAA has committed to produce additional items as they are identified. We will carefully review these documents and expect additional productions from NOAA.”

        How is NOAA now going to say that some of their staff’s documents are fair game for a lawful Congressional oversight investigation, but others are exempt? Use your head, nicky.

  27. “…we conclude that future global warming will occur at a pace substantially lower…”

    In fact there is no right to conclude a thing. There is no modifying dead-wrong, only dismissal of dead-wrong.

    Trends trend, trends bend, trends end. Of the hour of the end, no man knoweth. (Because the climate is a fantastically complex flux and you don’t even know how pork bellies will perform a few months out, duh.)

    Don’t be luke-warmie men.

  28. Pat Michaels and Chip Knappenberger, thank you for the essay

  29. Brian G Valentine

    Models aren’t working out so well – so what.

    Tear down the system anyway. Make poor people poorer. Tell ’em it’s for their own good based on things they wouldn’t understand.

    Make up some numbers about how many smart people know it’s for their own good. Encourage the most strident to humiliate anyone who disagrees. Encourage school children to do likewise

  30. Thank you, Pat Michaels and Chip Knappenberger, for this information. After WWII, granting of research funds was designed to produce consensus for “Standard Models of the Atom, Climate, Cosmos, Nucleus, Sun,” etc., etc. Immediately after WWII, the internal composition of the Sun and the definition of nuclear stability were changed.

  31. El Niño Observations

    A significant El Niño has developed in the Pacific Ocean. Interest in this event is high because impacts across the U.S. and the world will likely be substantial. The images below show key spaceborne observations of the ocean and atmosphere as they are currently changing with El Niño. The images are produced by NASA’s Jet Propulsion Laboratory and partner agencies and compare the current conditions with the largest El Nino on record in 1997-1998, or with a ‘normal’ year for measurements that do not extend back to 1997.

    Many of the images are designed to show a data “anomaly”, revealing when data is outside of normal measurement ranges. For example, sea surface temperature (SST) data that stray from the normal range of variation are shown as SST anomalies. The color bar indicates how far from normal the measurements are.

    http://spaceref.com/earth/nasa-examines-global-impacts-of-the-2015-el-nino.html

  32. There are actually two related big picture issues. One, the main point of this post, is the degree of divergence between models and observations. The CMIP5 model archive at KNMI cannot be argued; what constitutes observation can be, a bit. Surface, radiosonde, satellite. We know that satellite and radiosonde tend to agree and that they increasingly diverge from surface estimates. But in the big picture, all three ‘observational’ groups increasingly diverge from models. That is a model problem.
    The second is the duration of the divergence, because in 2008 NOAA said 15 years would mean the models were invalidated. In 2011 Santer published that this would take 17 years. Well, we are beyond that duration for satellite and radiosonde, and at or approaching it for all the surface datasets. That is a bigger model problem, which this post draws less attention to.
    This year’s El Niño is not (yet) warming the atmosphere like 1998’s did, or like warmunists hoped. The shift to negative PDO and AMO may be why. And if this continues, followed by the inevitable La Niña, then by AR6 in 2019 the AGW meme will be on the rocks. No amount of IPCC fudging or data Karlization will be able to cover up the core model failure.
    And without those models and their 3.2 ECS, there is no AGW problem. As the post points out, all the recent observational effective sensitivity studies are coming in at the very low end of the AR5 range. ECS 1.5-1.7 means no CAGW.

      • In the thousand-year cycle (Roman Warm, cold, Medieval Warm, Little Ice Age, Modern Warm), which 17-year period would you use? No measurement is outside the bounds of the past.

        The most recent 17-year record has no temperatures that are out of the bounds of the past two thousand years, nor out of the bounds of the past ten thousand years.

      • Humans may have an influence, but the thermostats for earth, in the north and in the south, are the temperatures at which Polar Sea Ice thaws, and that always increases snowfall and stops warming due to all causes. When cooling occurs, and it does after all warm times, the cooling allows the Polar Oceans to freeze, and that always decreases snowfall and stops cooling due to all causes. Human influence cannot change this powerful natural bounding. Orbit cycles and solar cycles and CO2 cannot change this powerful natural bounding.

      • Look at actual data. Temperature is bounded in a narrow range. Cooling is always turned on and off. Polar oceans thaw and freeze to turn snowfall on and off. The data looks just like the temperature in my house where the thermostat turns cooling on and off. There is more ice on earth in cold times and less ice on earth in warm times. It always snows more in warm times and it always snows less in cold times. The data does show that. That is the reason that it always gets cold after a warm time and it always gets warm after a cold time.

    • Thanks for the comment. While the first figure in our post draws attention to the growing divergence (your point 1), the remainder of our post looks at the duration of the divergence (your point 2). The horizontal axis of Figures 2, 3, and 4 is “trend length”, and the charts are designed to show that the obs/mods divergence is not confined to only the past decade or two.

      -Chip

      • Chip, you are correct. I stand corrected. Way too long between reading your post and commenting. Regards to an excellent post on both dimensions.

    • Curious George

      Is El Niño weather or climate? The Economist defines it as “climatic weather”, which indicates that they don’t know enough. For me, El Niño is a repeatedly observed pattern starting with an abnormally warm ocean surface in Polynesia, followed by a drought in Venezuela and plenty of rain in California. No convincing physical explanation yet.

      Do all three parts occur always together, or can there be an El Niño with only two of them? How do models handle El Niño? If they do, are they CMIP5 models?

      • It would be interesting to hear a real scientist’s view on that.

        The real question is how active were ENSO, AMO, PDO, etc. during the Maunder minimum?

        If these cycles weren’t very active then, they are climate phenomena, related to whether the earth is warming or cooling.

      • I’m living in El Niño central here on the mid-coast of NSW. The double El Niño of 1993-95 produced “classical” results in terms of heat, drought and fire, as did the weak (in SOI terms) event of 2002-03. What’s interesting is that the so-called super El Niño of ’97-8 was fairly benign in Australia generally. The other super, that of ’82-3, was very severe in much of the continent, but not for long and not in my region.

        This El Nino has not gone to script at all – so far! I still believe ENSO is a handy indicator. Living on the land, I’m glad to have the info. While in 2009 something else was operating to override El Nino conditions in my region, there was plenty of nasty heat and fire action elsewhere. (Of course, except for the mid-’70s, Australia always has and always will deliver drought and fire at some point.)

        Australia’s driest known year was 1902, part of a “weak” El Nino, and it would have been the most fiery except there was nothing left to burn toward the end of the Federation Drought, which started with the scary global conditions of 1895-6. Incendiary 1939 was our year-from-Hell – but that was part of a La Nina flanked by neutral years!

        I guess the conclusion is that one should just keep observing. But we’re not really living in a good era for observation. People want their consoles, their mechanisms, their theories…and above all they want to publish.

        Anyway, got my fingers crossed for California. Improvise for Oz, but follow the script for SoCal, Nino!

  33. The climate change activists are good at limiting the discussion to surface temperature – something they can adjust – and keeping it away from fossil fuel emissions. The greater divergence between models and empirical data is in measuring the impact of emissions. See, for example:
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2654191

    more at
    http://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=2220942

    • 4. CONCLUSIONS
      “The IPCC carbon budget contains uncertainties that are not taken into consideration in the balance computations. Since an exact account balance cannot be computed in the presence of uncertainty, we propose a stochastic method. The data are used to test the null hypothesis that the account is in balance. Only when this hypothesis can be rejected do the data provide evidence that the account is not in balance. We use a Monte Carlo simulation model to carry out uncertain flow accounting of carbon transfers to and from the atmosphere described in Figure 6.1 of the IPCC -AR5 report and find that the known rate of carbon dioxide accumulation in the atmosphere can be explained without including emissions from fossil fuel combustion. We conclude that the IPCC carbon balance is not sensitive to anthropogenic emissions in the context of uncertainties in natural flows. Natural flows are not known with sufficient precision to determine the sources of carbon that have caused atmospheric CO2 to rise since 1750. This conclusion is consistent with the findings of a previous paper that year to year changes in atmospheric CO2 are unrelated to the rate of anthropogenic emissions (Munshi, Responsiveness of Atmospheric CO2 to Anthropogenic Emissions: A Note, 2015).”

      “We conclude that the IPCC carbon balance is not sensitive to anthropogenic emissions in the context of uncertainties in natural flows. Natural flows are not known with sufficient precision to determine the sources of carbon that have caused atmospheric CO2 to rise since 1750.”

      Jamal, so the rise in atmospheric CO2 is mostly natural, presumably?

  34. Has anyone noticed that the modeled temperatures (uppermost red line) in Fig. 1 are similar to the ‘Keeling Curve’ of CO2 measurements at Mauna Loa, HI?

    It raises the question of whether the Keeling CO2 curve was input into the models and weighted too strongly, giving spurious (IPCC) results.

    George Devries Klein, PhD, PG, FGSA

    • Brian G Valentine

      The red curve is actually the increase (in thousands) of new cases of paranoid schizophrenia caused by climate change

      Or worried about it, anyway

  35. I would ask a question I have never seen addressed: how do we get from “here” to there? Whether you accept the satellite/balloon data or the new NASA/NOAA data, most of the IPCC scenarios are significantly above the observational record. Yet the scenarios don’t start out anew each year while retaining their 2100 endpoints. None of the scenarios – none – show the rapid uptick required to get to those endpoints from 2015.

    So, possible conclusions? One, that the scenarios self-destructed years ago and deserve elimination from the group. Two, that the natural variability that is denied is, in fact, an assumption of every one of the scenarios, one that needs to inform the error bar of each and announce to one and all that natural variability is greater than the proposed radiative forcing of some of Earth’s GHGs. Three….. the cognoscenti disbelieve any data, such as that from satellites, balloons or Argo floats, not “corrected” by NASA/NOAA/Obama, that does not support the scenarios, because its threat to their self-esteem, careers and ideology is overwhelming and must not be considered.

    The last possible conclusion is the one I personally favour, however we do live in a world in which the appearance of checking in with reality still counts. So I ask again: how do the modelers and adherents say we get from the global temperatures of end 2015 to the projected temperatures of 2100?

  36. Geoff Sherrington

    A recent paper gives plausible reasons why added CO2 over the Antarctic land mass leads to atmospheric cooling, that is, a negative ECS. Models that by design exclude an ECS that is negative will, of course, run too hot.
    http://onlinelibrary.wiley.com/doi/10.1002/2015GL066749/abstract

    Has the thrust of this GRL paper of 14 Dec 2015 already been discussed and either refuted or gained acceptance? I have not, with my limited reading, seen prior reference to this proposed physical effect.
    Geoff.

    • Brian G Valentine

      Yes, I have heard of it; it arises from the possibility of the surface being colder than the stratosphere in Antarctica.

      Since graduate school I have been struggling to reconcile this with the idea that the heat flux from the surface to the upper atmosphere, q = σ[εT(surf)^4 − εT(atm)^4], is positive no matter what. The paper you cite (and others) use a “two-layer” model to give us the possibility of a “greenhouse” effect. I don’t see the physical basis for that.

      I’m totally stupid, in denial, probably a fool

      • Brian G Valentine – go back to this comment and wipe the smirk off your face, because this is a very serious matter with huge ramifications, in that it proves the whole radiative GH conjecture to be false. It will be common knowledge within 10 years, I predict.

      • Brian G Valentine

        OK, I’ll quit horsing around.
        But if you think this will be common knowledge in 10 years, then I think you are horsing around.

  37. By United Nations scientific method – the UNscientific method – anything is possible. United Nations eats logical, and scientific, fallacies for breakfast!

    The models are panel beaten – to death – to match the observations:
    «When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years. Biases can be largely removed using empirical techniques a posteriori …»
    (Ref: Contribution from Working Group I to the fifth assessment report by IPCC; 11.2.3 Prediction Quality; 11.2.3.1 Decadal Prediction Experiments )

    Further, IPCC used circular reasoning to exclude natural variability. IPCC relied on climate models (CMIP5), the hypotheses under test if you will, to exclude natural variability:
    “Observed Global Mean Surface Temperature anomalies relative to 1880–1919 in recent years lie well outside the range of Global Mean Surface Temperature anomalies in CMIP5 simulations with natural forcing only, but are consistent with the ensemble of CMIP5 simulations including both anthropogenic and natural forcing … ”
    (Ref.: WGI; AR5; TS.4.2.)

    Scientific fallacies like adjusting the theory/theory output to match the results? United Nations is happy with Scientific fallacies!

    Logical fallacies like circular reasoning?
    United Nations is happy with logical fallacies as well!

    By United Nations scientific method – the UNscientific method – anything is possible. Bend over and welcome idiocracy!

  38. I have been able to overcome the inclination to use the mean of the CMIP5 historical climate model temperature series means. The problem with the mean is that it requires averaging that which is not at all alike, i.e. the individual models. It is better, in my mind, to compare the observed temperature series and trends with those of individual models where the model has multiple runs. A model that has listed only a single run is not a model result to take seriously with regard to temperature trends. All models with multiple or single runs can be compared to the observed, or within the models, by other metrics (and to statistical significance) such as the autocorrelation of trend residuals, white-noise variances and the ratio of southern and northern hemisphere warming trends. I have carried out this exercise and found none of the CMIP5 models passing all the comparison tests.

    Such a method with multiple metrics accounts for the detailed analysis showing that a model run with a temperature trend near or even below that of the observed gets there by having a lower NH/SH warming trend. It also fights off the tendency to group the climate models for obtaining means where that which is being averaged is apples and oranges with regard to the ratio noted here, the autocorrelation of trend residuals and white noise.

    This problem of thinking a mean of all individual model runs is required comes from having only a single observed realization of the earth’s climate and often too few multiple runs of individual models to precisely predict where the single earth realization falls within the distribution of multiple runs of an individual model. That problem could be mitigated, and should be if we are truly serious about climate modeling and comparing the results to the observed, by requiring at least 10 to 15 runs of an individual model.
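
    A minimal sketch of the kind of multi-metric, run-by-run comparison described above (the arrays and the metric list are illustrative only, not the commenter’s actual code): for each run compute the trend, the lag-1 autocorrelation of the residuals about that trend, and the NH/SH trend ratio, and compare each against the observed values.

      import numpy as np

      def trend(series, years):
          """Least-squares linear trend, in units per year."""
          return np.polyfit(years, series, 1)[0]

      def lag1_autocorr(series, years):
          """Lag-1 autocorrelation of the residuals about the linear trend."""
          fit = np.polyval(np.polyfit(years, series, 1), years)
          resid = series - fit
          return np.corrcoef(resid[:-1], resid[1:])[0, 1]

      def run_metrics(global_t, nh_t, sh_t, years):
          return {
              "trend": trend(global_t, years),
              "resid_ar1": lag1_autocorr(global_t, years),
              "nh_sh_ratio": trend(nh_t, years) / trend(sh_t, years),
          }

      # Hypothetical usage: compute run_metrics(...) for every run of a model and
      # check whether the observed values fall inside that model's distribution.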

  39. All the above obvious confusion about the GH effect, the Second Law and the required heat flows needed to explain the ocean surface temperature need be no more, for all is explained here with correct physics. Solar radiation reaching the ocean surface does not explain the surface temperature. Back radiation does not penetrate oceans by more than a few nanometers, so it does not explain the temperature either.

    I have, and I predict it will be common knowledge within 10 years.

    • I’ve read your link and don’t agree with it. It doesn’t seem to distinguish at all between the incoming sunlight, which is high-energy radiation (light), and the low-energy infra-red radiation exchanged during thermal processes.

      The surface is warmed directly by some of the high-energy radiation (sunlight) passing through the atmosphere. Thereafter there are exchanges of low-energy radiation between the surface and the greenhouse gases in the atmosphere. The greenhouse gases are readily identified because they are those with more than two atoms in their molecules (CO2, CH4, O3), which gives them the necessary vibrational capability to absorb and re-emit the thermal radiation emitted by surfaces at around 20 degrees C.

      Your description of greenhouse gases as emitting infra-red radiation only upwards is not correct. The infra-red radiation is first emitted by the surface as a result of direct high-energy sunlight warming it. This upwards-travelling radiation is absorbed by greenhouse gas molecules, which then re-emit it in every direction (so half up and half down), and it is this process which returns infra-red energy to the surface, causing it to warm more at the expense of higher layers, which get less than their fair share of the low-energy infra-red radiation.
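
      A single-slab “gray atmosphere” toy calculation (idealized textbook bookkeeping, not any particular model) shows how the half-up, half-down re-emission raises the surface temperature above the no-atmosphere value:

        SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
        S0 = 1361.0       # solar constant, W m^-2
        ALBEDO = 0.3

        absorbed = (1 - ALBEDO) * S0 / 4        # ~238 W m^-2 absorbed per unit area
        T_e = (absorbed / SIGMA) ** 0.25        # effective emission temperature, ~255 K

        # One opaque slab absorbs all surface IR and emits sigma*T_a^4 both up and
        # down ("half up, half down"). Top-of-atmosphere balance gives T_a = T_e,
        # and the surface balance (sunlight plus back radiation) then gives:
        T_s = 2 ** 0.25 * T_e                   # ~303 K in this idealized limit

        print(round(T_e, 1), round(T_s, 1))

      The real atmosphere is neither a single opaque slab nor transparent, so the observed surface temperature (~288 K) sits between the no-atmosphere (~255 K) and one-slab (~303 K) limits.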

      • Peter Davies,

        Just one minor query.

        When the surface emits infrared radiation, I assume its temperature falls. It can’t go up, because it is losing, not gaining energy.
        It can’t remain the same, because it is losing energy.

        I hope you agree, because it’s true.

        Now, as you say, even if the GHG molecules absorb all the energy emitted from the surface, then the amount returned to the surface is only half of that emitted.

        This only compensates for half the drop in temperature when the surface emitted its infrared radiation.

        Would you agree that this mechanism explains why the surface cools at night, in the absence of sunlight?

        Cheers.

  40. Pingback: Global Warming Update | thePOOG

  41. Land temperatures have warmed much faster. The 65-year trend is 0.18 C/decade and the 30-year trend is 0.27 C/decade. That the global model trends agree so well with the observed land trend could indicate that their problem is warming the ocean surface too quickly. The ocean trend since 1980 has been only about half the land trend, and that dominates the global average. I would like to see a comparison between the model and observed land surface trends, which I think are much closer.

    • Brian G Valentine

      Probably hard to get that, because GCMs are modeling the whole thing. I’d like to see the diurnal daytime high and nighttime low modeled, which is probably awful, because AGW would overestimate the nighttime low.

      • I am quite sure they can get it. We have seen GCM maps of temperature differences in the IPCC reports.

    • Curious George

      “the global model trends agree so well with the observed land trend”

      Congratulations, and happy holidays.

  42. nobodysknowledge

    I am not sure what is meant by mid-troposphere, exactly, but a definition could be the altitudes around 5 km. One question I have is how much of the air is in this part of the atmosphere: is it 10, 20 or 50%? The answer to this will tell us something about the energy budget (a rough estimate is sketched below).

    There is a close correspondence between the mid-troposphere and the height of emission. From SoD: “Likewise for the OLR, the typical height is around 5km in low latitudes and 4km near the poles.” So the Christy figure should also reflect the change in outgoing longwave radiation, and how badly it is calculated by climate models.
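
    A rough, back-of-the-envelope answer to the mass question (the 5 km pressure value is the standard-atmosphere figure): because surface pressure measures the weight of the air above, the fraction of atmospheric mass below a level z is simply 1 - p(z)/p(0).

      p0 = 1013.25    # mean sea-level pressure, hPa
      p_5km = 540.0   # standard-atmosphere pressure near 5 km, hPa

      frac_above = p_5km / p0
      frac_below = 1 - frac_above
      print(round(frac_below, 2))   # ~0.47: roughly half the atmosphere's mass lies below ~5 km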

  43. “That the actual ECS (at least as assessed over century times scales) is likely much lower than the average value of the climate models incorporated in the IPCC’s AR5 is an efficient explanation for why climate models tend to overpredict the amount of global warming which has taken place—which has huge significance in assessing the utility of climate model projections for future climate change.”

    “The refusal to acknowledge that the model simulations are affected by the (partially overestimated) forcing in CMIP5 as well as model responses is a telling omission.”
    – Gavin Schmidt; Unforced Variations: Nov 2015; 17

    The conclusions by United Nations IPCC are heavily based on models:
    “The climate change projections in this report are based on ensembles of climate models. The ensemble mean is a useful quantity to characterize the average response to external forcings, but does not convey any information on the robustness of this response across models, its uncertainty and/or likelihood or its magnitude relative to unforced climate variability.”
    IPCC;WGI;AR5;Box 12.1 | Methods to Quantify Model Agreement in Maps

    See a more thorough analysis of the reasoning by IPCC here:
    IPCC FIND THE AVERAGE OF CLIMATE MODELS USEFUL – EVEN THOUGH IT IS USELESS!

    The idea that United Nations has based their ideology on robust theory is wrong – it has been falsified.

    United Nations based their dogma on the UNscientific method. The only responsible thing to do by governments will be to suspend judgement and suspend actions based on United Nations climate theory.

    • Since Science or Fiction has decided to cherry-pick one comment by Gavin Schmidt on Figure 1 above, here is the full set:

      “The use of single year (1979) or four year (1979-1983) baselines is wrong and misleading. The use of the ensemble means as the sole comparison to the satellite data is wrong and misleading. The absence of a proper acknowledgement of the structural uncertainty in the satellite data is wrong and misleading. The absence of NOAA STAR or the Po-Chedley et al reprocessing of satellite data is… curious. The averaging of the different balloon datasets, again without showing the structural uncertainty is wrong and misleading. The refusal to acknowledge that the model simulations are affected by the (partially overestimated) forcing in CMIP5 as well as model responses is a telling omission. The pretence that they are just interested in trends when they don’t show the actual trend histogram and the uncertainties is also curious, don’t you think? Just a few of the reasons that their figures never seem to make their way into an actual peer-reviewed publication perhaps… – gavin”

      See http://www.realclimate.org/index.php/archives/2015/11/unforced-variations-nov-2015/ response to post #17. Note realclimate has resurrected itself following some domain name ownership issues!

  44. Pingback: Messungen und Klimamodelle – eine überraschende Darstellung der Diskrepanz | RL-Rational

  45. Pingback: Weekly Climate and Energy News Roundup #210 | Watts Up With That?

  46. The change from the 2011 version to the 2015 version increased the GISS warming trend during the fossil fuel era.

    1930-2011 period
    2011 GISS trend +0.76 deg / Century
    2015 GISS trend +0.96 deg / Century

    1945-2011 period
    2011 GISS trend +0.96 deg / Century
    2015 GISS trend +1.17 deg / Century

    So the adjustments tacked on about 0.20 deg / Century
    Looking for other data… will report back.
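
    For anyone wanting to reproduce this kind of comparison, a minimal sketch of how a period trend in deg/century is typically computed (the arrays are placeholders; the real exercise would use the archived 2011 and 2015 GISS series):

      import numpy as np

      def trend_per_century(years, anomalies):
          """Ordinary least-squares slope, converted from deg/year to deg/century."""
          slope_per_year = np.polyfit(years, anomalies, 1)[0]
          return 100.0 * slope_per_year

      # Hypothetical usage with archived annual anomaly arrays covering 1930-2011:
      # years = np.arange(1930, 2012)
      # print(trend_per_century(years, giss_2011), trend_per_century(years, giss_2015))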

  47. Pingback: Government-Funded Scientists: Never Hide Anything from the Public - Hit & Run : Reason.com

  48. Pingback: Government-Funded Scientists: Never Hide Anything from the Public | Michigan Standard

  49. Pingback: Why Calling 2015 the Warmest Year on Record Is Problematic

  50. Pingback: NOAA Cooking the “Climate Change” Data Again … Because “Science”! | The Universal Spectator

  51. Pingback: Why Calling 2015 The Warmest Year On Record Is Problematic | PA Pundits - International