Hiatus controversy: show me the data

by Judith Curry

The scientific and political controversies surrounding the hiatus have continued to heat up. Let’s take a look at ALL the global temperature data sets.

So, what is the ‘hiatus’ or ‘pause’ or ‘slowdown’, and why does it matter? Here are three criteria for the hiatus to matter:

1) the rate of warming over a particular period of at least 10 years is not statistically distinguishable from zero (in the context of a nominal 0.1C uncertainty). Note the IPCC AR5 cited: “As one example, the rate of warming over the past 15 years (1998–2012; 0.05 [–0.05 to +0.15] °C per decade) is smaller than the rate calculated since 1951 (1951–2012; 0.12 [0.08 to 0.14] °C per decade)”

2) the rate of warming over a particular period of at least 10 years is less than the warming projected by the IPCC AR5: “The global mean surface temperature change for the period 2016–2035 relative to 1986–2005 will likely be in the range of 0.3°C to 0.7°C (medium confidence).” Since the midpoints of the two periods (2025.5 and 1995.5) are three decades apart, this translates to 0.1C to 0.233C/decade. (Note the AR4 cited a warming rate of 0.2C per decade in the early 21st century).

3) Periods meeting the criterion of either 1) or 2) are particularly significant if they exceed 17 years, which is the threshold for a very low probability of natural variability dominating over the greenhouse warming trend.
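As a concrete illustration of criteria 1) and 2), a least-squares trend and a rough uncertainty can be computed on any temperature series. The sketch below uses synthetic data (the series, noise level, and helper name are my own, not from any of the datasets discussed here), and it deliberately ignores the autocorrelation correction a real climate analysis would need:

```python
import numpy as np

def decadal_trend(years, temps):
    """OLS trend in deg C per decade, with a rough 2-sigma uncertainty.
    Ignores autocorrelation, which a real analysis must correct for
    (doing so widens the interval considerably)."""
    x = np.asarray(years, dtype=float)
    y = np.asarray(temps, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt(resid @ resid / (len(x) - 2) / np.sum((x - x.mean()) ** 2))
    return slope * 10, 2 * se * 10  # convert per-year to per-decade

# Illustrative synthetic anomalies: a 0.05 C/decade trend plus noise
rng = np.random.default_rng(0)
years = np.arange(1998, 2013)
temps = 0.005 * (years - 1998) + rng.normal(0, 0.08, years.size)

trend, unc = decadal_trend(years, temps)
meets_criterion_1 = abs(trend) < unc   # trend indistinguishable from zero
meets_criterion_2 = trend < 0.1        # below the AR5 projected range
```

On realistic data the uncertainty term dominates over short periods, which is why criterion 1) is comparatively easy to satisfy over 10–15 year windows.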

Conventional surface temperature data sets

A comparison of HadCRU, NASA GISS, NOAA/NCDC, Cowtan and Way, and Berkeley Earth global temperatures through 2014 was provided to me by Steve Mosher for my April House Testimony:

[Figure: Slide7]

The various data sets show pretty close agreement on the interannual variations and the magnitude of the trends over this period. While the trends for each data set vary slightly, all of them have decadal trends sufficiently small to satisfy hiatus criteria 1) and 2).

Note: this figure was produced PRIOR to the new NOAA temperature dataset of Karl et al. A new document by Berkeley Earth [link] clarifies the changes of the new NOAA data set relative to HadCRU, NASA GISS, and Berkeley Earth:

[Figure: Slide4]

The NOAA curve (red) is lower than the others in the earlier part of this record, and warmer in the most recent years of this record. Specifically with regards to hiatus significance, the SOM in the Karl et al. paper cites global trends of 0.106C/decade for the period 1998-2014, and 0.116C/decade for the period 2000-2014.

Trends exceeding 0.1C fail to pass criteria 1) and 2) above, hence the new data set does not satisfy either criterion for a hiatus. Actually, the criterion for a hiatus is only barely missed, with the trend greater than zero at the 90% confidence level.

JC note: The Karl paper was inappropriately criticized for ‘cherry picking’ periods (and hiatus proponents are also criticized for the same). Given criterion 3) above, cherry picking is a non-issue – ANY period approaching 17 years is fair game for challenging the climate models.

Regarding the comparison among the different data sets, the Berkeley Earth report states:

The differences we see between the various approaches comes down to two factors: Differences in datasets and differences in methods. While all four records fall within the uncertainty bands, it appears as if NCDC does have an excursion outside this region; and if we look towards years end, it appears that their record shows more warmth than others.

The source of the difference between NCDC and the other data sets lies in its analysis of the ocean data, which will be the subject of a follow-on post.

Reanalysis data

Reanalysis was discussed in a previous post; see also reanalyses.org.

Reanalysis is a climate or weather model simulation of the past that includes data assimilation of historical observations. The rationale for climate reanalysis is given by reanalyses.org:

Reanalysis is a scientific method for developing a comprehensive record of how weather and climate are changing over time. In it, observations and a numerical model that simulates one or more aspects of the Earth system are combined objectively to generate a synthesized estimate of the state of the system. A reanalysis typically extends over several decades or longer, and covers the entire globe from the Earth’s surface to well above the stratosphere.

Data Assimilation merges observations & model predictions to provide a superior state estimate. It provides a dynamically-consistent estimate of the state of the system using the best blend of past, current, and perhaps future observations.

[Using a weather prediction model]  The observations are used to correct errors in the short forecast from the previous analysis time. Every 12 hours ECMWF assimilates 7 – 9,000,000 observations to correct the 80,000,000 variables that define the model’s virtual atmosphere. This is done by a careful 4-dimensional interpolation in space and time of the available observations; this operation takes as much computer power as the 10-day forecast.

Operational four dimensional data assimilation continually changes as methods and assimilating models improve, creating huge discontinuities in the implied climate record. Reanalysis is the retrospective analysis onto global grids using a multivariate physically consistent approach with a constant analysis system. 
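The correction step described above can be illustrated with a minimal scalar analysis update, a one-variable stand-in for the multimillion-variable 4D-Var that ECMWF actually runs (the function name and the numbers here are illustrative only, not ECMWF’s):

```python
def analysis_update(forecast, obs, var_f, var_o):
    """Blend a model forecast with an observation, weighting each by the
    inverse of its error variance (the scalar optimal-interpolation form)."""
    gain = var_f / (var_f + var_o)   # trust the obs more when var_o is small
    analysis = forecast + gain * (obs - forecast)
    var_a = (1.0 - gain) * var_f     # analysis variance < either input's
    return analysis, var_a

# A forecast of 15.0 C (error variance 1.0) corrected by an observation
# of 14.2 C (error variance 0.25): the analysis lands nearer the obs.
a, va = analysis_update(15.0, 14.2, 1.0, 0.25)
# a = 14.36, va = 0.2
```

Real 4D-Var solves the same blending problem jointly over space and a 12-hour time window, which is why the quoted ECMWF passage describes it as a 4-dimensional interpolation in space and time.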

In the 1990s, we were cautioned not to use reanalyses for long-term trends, owing to discontinuities in the satellite observing system. However, the situation has improved in recent decades, and looking at trends during the most recent 20 years is reasonable.

Here is the figure on the global average surface temperature anomalies from the ECMWF reanalysis (ERAi), published by ECMWF earlier this year [link]:

[Figure: Slide5] Note: the lighter, broader bars denote averages that exclude the polar regions, whereas the narrower, darker bars are global and include the polar regions.

ECMWF’s analysis generally agrees with the year-to-year variability seen in the conventional surface temperature datasets. Note that in their analysis, 2014 was not the warmest year, and the amplitude of 1998 is somewhat smaller. I have not done any formal trend analysis on this data set, but eyeballing the graph it appears that since 1998 the trend would exceed 0.1C/decade, although the trend since 2002 or 2003 appears to be less than 0.1C/decade.

The real significance of the ECMWF analysis is their global analysis that includes the polar regions. Their method is vastly preferable to the kriging and extrapolation used by NASA GISS and Cowtan and Way. Interestingly, including the polar regions does not always produce a warmer global average temperature; notably in 2013 and 2014 it did not, largely owing to the cooling in Antarctica.

US/NOAA also produces a reanalysis, the CFSR. I have not seen a figure from NOAA plotting this data, but Ryan Maue of WeatherBell provides this plot (I’ve combined two of the plots into one) [link]:

[Figure: Slide1]

The main features from the conventional temperature analyses are barely recognizable in the CFSR – 1998 is barely a blip and 2014 is nothing close to a warmest year.

I am not sure what to make of the differences between ECMWF and CFSR. ECMWF has the most comprehensive and best data assimilation system in the world, so I am inclined to pay serious attention to their global surface temperature analysis. In any event, I would like to see much more attention paid to interpreting the reanalysis products in terms of recent global temperature trends.

Atmospheric temperatures

The strongest evidence for the hiatus comes from the satellite (microwave) observations of bulk atmospheric temperature, pioneered by Christy and Spencer. Analyses of these data have shown a statistically significant hiatus for a period as long as 21 years.

The latest figure from Roy Spencer was given in his recent CATO talk:

[Figure: Slide2] Bob Tisdale has produced an interesting diagram comparing UAH, RSS and NASA LOTI:

[Figure: Slide2]

The largest discrepancies among the three datasets are in 1998 and 2014: in 1998 LOTI is lower than the other two, and in 2014 RSS is lower than the other two. In any event, all three data sets qualify for a warming hiatus since 1998.

At the CATO event, John Nielsen-Gammon showed a very interesting figure that compares UAH and RSS with ERAi (the ECMWF reanalysis):

[Figure: Slide6]

Agreement is very close, which adds to the credibility of the ECMWF reanalysis – EXCEPT for 1998, where ECMWF is substantially lower. J N-G cites the following trends (presumably since 1997): UAH: -0.01 °C/decade, RSS: -0.03 °C/decade, ERAi: 0.08 °C/decade. All three are still in hiatus territory, but the ERAi trend is much larger than the others owing to the lower value in 1998.

The low value in 1998 also appeared in the ECMWF reanalysis surface temperatures. Note: ECMWF does not directly assimilate either the UAH or RSS temperatures, but rather assimilates the microwave radiances from the satellites. Note also the relatively low value for 1998 from the NASA LOTI.

Some work clearly needs to be done to sort out the differences among the bulk atmospheric temperatures determined directly from satellites (using the 3 different methods) as well as the reanalyses.

Diverging surface thermometer and satellite temperatures

Euan Mearns has an interesting post Diverging surface thermometer and satellite temperature records.

This post is already too long so I will just point you to the site. His concluding remarks:

Satellite and surface thermometer data agree over the oceans. They used to agree better over land until HadCRUT4 supplanted HadCRUT3, ending the pause and causing land surface thermometers to diverge from the satellite data sets.

JC reflections

The uncertainties in the various global temperature data sets are substantial relative to defining the existence (or not) of a warming hiatus and in assessing whether the observed trends are significantly lower than the model projections. I have stated this several times before: I think the error bars/uncertainties on these data sets are too low, particularly given the magnitude of the adjustments that are made.

With regards to 2015, the new Berkeley Earth report cites an 85% probability of warmest year, owing to the very large ocean warming, largely in the Pacific. However, Roy Spencer’s analysis of UAH shows no sign of 2015 being a warmest year.

The Berkeley Earth report concluded with the following statement:

2015 looks like it is shaping up to be an interesting year both from perspective of “records” and from the perspective of understanding how different data and different methods can result in slightly different answers. And it’s most interesting because it may lead people to understand that interpolation or infilling can lead to both warmer records and cooler records depending on the structure of warming across the globe. 

Here are the biggest uncertainties that I see:

  • sorting out what was going on in 1998, which was a year of discrepancy among the satellite atmospheric temperature data sets, and the ECMWF reanalysis
  • interpretation of what is going on in the polar regions, and I think ECMWF has the best approach on that one
  • sorting out the sea surface temperature issues (note this will be the topic of a post next week)

The bottom line with regards to the hiatus is that all of the data sets except for the new NOAA/NCDC data set show a hiatus (with NASA LOTI being the other data set coming closest to not showing a hiatus).

The real issue of importance is comparing the climate models with recent observations. Here, even the latest NOAA/NCDC analysis still places the observations at the bottom envelope of the climate model simulations.

So it is premature to declare the hiatus dead.

 

674 responses to “Hiatus controversy: show me the data”

  1. The real issue of importance is comparing the climate models with recent observations. Here, even the latest NOAA/NCDC analysis still places the observations at the bottom envelope of the climate model simulations.

    And if this “latest NOAA/NCDC analysis” has been purposefully corrupted to support a politically predetermined outcome, as increasingly appears to have been the case?

    It’s not that government agencies are necessarily to be distrusted in all matters touching upon the priorities of our Indonesian-in-Chief, but….

    Hm. No, that proposition works out with high levels of reliability, doesn’t it?

    One of the saddest lessons of history is this: If we’ve been bamboozled long enough, we tend to reject any evidence of the bamboozle. We’re no longer interested in finding out the truth. The bamboozle has captured us. It’s simply too painful to acknowledge, even to ourselves, that we’ve been taken. Once you give a charlatan power over you, you almost never get it back.

    ― Carl Sagan, The Demon-Haunted World: Science as a Candle in the Dark (1995)

    • We’re gonna feel sorry for the alarmists someday. Their chains will shiver long into the future.
      ==================

      • Kim – I agree, but unfortunately their ideology will only truly end when they do. The best we can hope for is that over the next years (and decades) this absurd alarmism becomes more ‘fringe’, and that a more balanced environmentalism becomes the mainstream. And that slowly the world will begin to look back with bewilderment that such absurdity managed to grip our political ruling class for so long. Such insight may hopefully constantly remind us that, for all our modern technological cleverness, we still have the same human fears and follies as we have always had through the ages.

    • “And if this “latest NOAA/NCDC analysis” has been purposefully corrupted to support a politically predetermined outcome, as increasingly appears to have been the case?”

      Just to be clear, one politician is trying to intimate that before he’s received any evidence of corruption – no smoking gun yet. That doesn’t count as evidence; that’s its own wishful thinking.

      • H, not so. As posted elsewhere, there is at least one smoking gun to be found, the hidden (not shown) uncertainty in the Karl and Huang papers. They expressly used the method of Kennedy 2011 to replicate Kennedy’s result, an adjustment of 0.1C. But Kennedy got 0.1C +/- 1.7C. Useless uncertainty. Smith’s committee knows this for sure, because I wrote it providing evidence with the paper references, just in case they missed it in the original Climate Etc. comments. He may know of more, but that by itself after the ‘38% chance 2014 was the warmest ever’ kerfuffle suffices for his apparent purposes.

      • Poor quality science is not the same as “purposefully corrupted to support a politically predetermined outcome”. I got no problem with people pointing out the quality of the science, from JC’s previous post many consensus scientists seem to think its crappy science too. But thats not the same as what Tucci was suggesting.

      • Writes human1ty1st:

        Poor quality science is not the same as “purposefully corrupted to support a politically predetermined outcome”. I got no problem with people pointing out the quality of the science, from JC’s previous post many consensus scientists seem to think its crappy science too. But thats not the same as what Tucci was suggesting.

        Oh? If “Poor quality science” were the ONLY fault in this “latest NOAA/NCDC analysis” – honest, explicable (if embarrassing) human error and not an effort to concert purposeful corruption in support of a politically predetermined outcome – then why are these employees of a federal government agency (subject to Congressional scrutiny in everything they do while working ex officio) refusing to turn over “…emails and records from U.S. scientists who participated in the recent study, which found there has not been a pause or even slowing of global warming over the past decade”?

        Are the NOAA bureaucrats not proud of how their agency had turned out the published study?

        “This scandal-ridden Administration’s lack of openness is the real problem. Congress cannot do its job when agencies openly defy Congress and refuse to turn over information. When an agency decides to alter the way it has analyzed historical temperature data for the past few decades, it’s crucial to understand on what basis those decisions were made.”

        — Rep. Lamar Smith (R – Texas)

      • There is indeed a world of difference between poorly conducted science and purposely corrupted science, as humanity1st pointed out, but the salient point is we do not know whether the datasets which rely heavily on GISS are the result of poor science or malfeasance until such time as they are audited and the motivations and actions of those within the agency are scrutinized. There are trillions of dollars riding on this determination. We can’t afford to simply take someone’s word on it.

      • So Tucci you are all for organizations and institutions simply rolling over and giving in at the slightest push?
        ALL institutions, individually and especially bureaucrats, are going to defend their space, especially if the approach is perceived as a hostile attack or bullying. Some third party will step in and legislate on how far this politician can go and how far NOAA has to give in.
        You can speculate as much as you want that this points to corrupted scientists, but that is all that it is: speculation. More likely this politician dislikes what NOAA does and NOAA admin dislikes him, and yes that’s probably political, but there is no evidence of something corrupt, just dysfunctional.

        As I said in my first comment to imagine the only explanation is corruption is wishful thinking without REAL evidence.

      • Flatulates humanity1st:

        So Tucci you are all for organizations and institutions simply rolling over and giving in at the slightest push?

        You friggin’ betcha, cupcake. If said “organizations and institutions” are agencies of civil government not engaged in national security operations (and therefore with arguable justification for the preservation of operational security), they’re running under the control of the legislature – in this case, the U.S. Congress – which funds them from the public purse.

        Meaning the taxpayer is mulcted to keep said “organizations and institutions” in existence, and pays the wages and salaries and funds the benefits packages enjoyed by these EMPLOYEES sucking at the public teat in those “organizations and institutions” accorded sacred status by fascisti of your character.

        Who but a fascist could fash himself if these “bureaucrats” are approached in a fashion which they (and idiots like you) perceive “as a hostile attack or bullying”? Anything short of drawing and quartering is nothing less than what the malevolent jobholder merits, particularly these malfeasant lying weasels.

        Oh, and you conceive that “Some third party” will ” step in and legislate on how far this politician can go and how far NOAA has to give in” when the U.S. Congress – y’know, elected representatives like Mr. Smith, who comprise the federal legislature – are themselves empowered by the U.S. Constitution (i.e., the law of the land) to “legislate” in precisely the matters under discussion.

        Is there a legislative body superior to the U.S. Congress, schmucklet? Cite article and paragraph in the U.S. Constitution to support your idiocy.

        Or stuff a sock in it. Yer stinkin’ up the joint.

        The climate is changing and, yes, humans play a role. But that does not mean, as Environmental Protection Agency Administrator Gina McCarthy would have us believe, that the debate — over how much the climate is changing, how big a role humans play, and what can reasonably done about it — is over. Still less does it mean that anyone who questions her agency’s actions, particularly the confidential research it uses to justify multimillion and billion-dollar air rules, is a denier at war with science.

        The EPA’s regulatory process today is a closed loop. The agency funds the scientific research it uses to support its regulations, and it picks the supposedly independent (but usually agency-funded) scientists to review it. When the regulations are challenged, the courts defer to the agency on scientific issues. But the agency refuses to make public the scientific research it uses.

        Scientific journals in a variety of disciplines have moved toward data transparency. Ms. McCarthy sees this effort as a threat. Speaking before the National Academy of Sciences in late April, she defended her agency’s need to protect data “from those who are not qualified to analyze it.”

        The EPA essentially decides who is or is not allowed access to the scientific research they use — research that is paid for with public funds, appropriated by Congress, on behalf of American taxpayers. This is wholly improper.

        Costly environmental regulations must be based on publicly available data that independent scientists can verify.

        — Lamar Smith (R – Texas), “What Is the EPA Hiding From the Public? “ (23 June 2015)

      • republicans are fundamentally anti-science. Their ranks are filled with creationists who believe the world is 6000 years old, or pander to that view. They will attack global warming even if it is true. In other words who cares what republican politicians do, they just aren’t credible.

      • nebakhet : “republicans are fundamentally anti-science.”

        Totally ridiculous, malevolent nonsense.

        You’re really getting desperate now, aren’t you?

      • their ranks are full of creationists. It’s empirical fact that they are anti-science no-nothings who cannot be trusted to be honest about data.

      • Blurts nebakhet:

        their ranks [i.e., the Republican Party] are full of creationists. It’s empirical fact that they are anti-science no-nothings who cannot be trusted to be honest about data.

        Should you really be using big words like “empirical” without knowing what they mean, doofus? Certainly, the religious whackjobs (including the various flavors of creationists) see the Red Faction as the political alternative preferable to the National Socialist Democrat American Party (NSDAP), but to claim that by your own personal experience you’ve found the party of Nixon “full of creationists” to a significant extent implies a fund of knowledge about ’em that you’ve yet to demonstrate.

        Far be it from me to defend the bunch whom Frank Chodorov had so eloquently characterized as “Rotarian Socialists” (their motto running something like: “Hooray for free enterprise, and keep them tariffs, quotas, set-asides, subsidies, sweetheart deals, no-bid contracts, and competition-suppressing regulations a-coming!”), but the machinating motherpokers who dominate the “establishment” tend reliably to be “Acela Republicans” (monied and well-connected influence-exerters from the metropolitan centers of the populous Blue States) who despise the religiously-motivated social conservatives in their own voting base, and do everything they can to reduce the influence of those chandala to nullity.

        I’ll certainly agree that the Republicans are about as “anti-science” as one might expect in any organized cadre of grabby bastards bent upon political plunder (for “science” supports the value of individual human rights and the economic system of free-market capitalism, which Republicans have always worked against), but how are the National Socialists – their nominal opponents – any better? There’s “science” in the gaudy fraud of the anthropogenic global warming – er, “climate change” – contention? The error-checking functions of scientific method have been circumvented in the climate catastrophe caterwauling since the idea was first fastened upon by the “Liberal” fascisti back in the ’80s.

        Admittedly, the Republican Party “cannot be trusted to be honest about data;” they lie. They’ve always lied. The history of their party (as well as “empirical” evidence I’ve myself encountered through a long lifetime of rubbing elbows with Republicans) demonstrates that perfectly well.

        But in what sense are the National Socialists (who ceased being “democratic” with dramatic finality in 2010, when the NSDAP-dominated U.S. Congress enacted Obamacare over the howling enraged protests of their own core constituencies) not far, far worse when it comes to blatantly deviating from standards of probity and accuracy in their mouthing of deceit in the guise of “data”?

        I like to think of the Republicans as cholera compared with the National Socialists’ Stage 4 metastatic bowel cancer.

        Both conditions can kill you in agony, but the chances of survival with Vibrio cholera are a little bit better.

        In this world of sin and sorrow there is always something to be thankful for. As for me, I rejoice that I am not a Republican.
        — H.L. Mencken

      • Tucci for all your unnecessary bile you still havent shown any corruption, its still just a product of your clearly fertile imagination until REAL evidence comes to light.

        BTW even elected representatives have to play by the rules, so no, NOAA dont have to simply bend over and take everything from Smith. And that one arm of government is playing politics with another arm doesnt equal fascism, again another product of your fertile imagination.

      • Soaked in flop-sweat, human1ty1st squeals:

        …you still havent shown any corruption, its still just a product of your clearly fertile imagination until REAL evidence comes to light.

        And the demands uttered lawfully by Mr. Smith’s committee constitute a proper and diligent search for that “REAL evidence” which you and your sputniki are so desperate to keep in the dark. Tsk. How you squirm!

        BTW even elected representatives have to play by the rules…

        You betcha. And the rules are found – in toto – in the U.S. Constitution. Ain’t that a kick?

        … so no, NOAA dont have to simply bend over and take everything from Smith.”

        If the Republicans – a notoriously spineless “go along to get along” bunch of ward heelers – don’t pre-emptively surrender, your beloved NOAA charlatans are not only going to be bending over but they’ll also be praying for Vaseline.

        Perhaps – if they’re incarcerated in the general population of some appropriate federal facility – quite literally.

        Just as you have dread, I have hope. And however the axe falls, each such public exposure of Obozo’s “climate” craptacular is a worthy audiovisual aid in educating the increasingly skeptical American populace about the utter bilge you and your co-conspirators keep trying to peddle in lieu of “science.”

        Popcorn, anybody?

        If you tell a lie big enough and keep repeating it, people will eventually come to believe it. The lie can be maintained only for such time as the State can shield the people from the political, economic and/ or military consequences of the lie. It thus becomes vitally important for the State to use all of its powers to repress dissent, for the truth is the mortal enemy of the lie, and thus by extension, the truth is the greatest enemy of the State.

        — Joseph Goebbels, Minister of Public Enlightenment and Propaganda, Deutsches Reich

      • “You betcha. And the rules are found – in toto – in the U.S. Constitution. Ain’t that a kick?”

        Presumably explains why you dont understand the concept of innocent until proven guilty. Really its not that difficult, stop foaming at the mouth and try learning something.

  2. Judy, thanks for highlighting the use of reanalysis datasets for global temperatures.

    ECMWF uses 4-D variational data assimilation to optimally combine all sources of data, including satellites, surface stations, balloons, aircraft, ships, etc.

    A new reanalysis called ERA5 is being conducted under the auspices of the ERA-CLIM2 project. This model will be coupled (atm+ocean) and provide the best estimate yet of the historical climate system, from essentially the point of view of the best weather modeling system currently available.

    All of this controversy about NASA, NCDC etc. temperature “fixing” or adjustments will be a moot point very soon (2016-17).

    • On the contrary, Ryan, the result will be yet another data set, hence the potential for controversy will increase.

    • take a look at the surface data sets used for ingest into ECMWF.

    • This model will be coupled (atm+ocean) and provide the best estimate yet of the historical climate system

      Estimates for areas of missing data are verified by what?

      • “Estimates for areas of missing data are verified by what?”

        typically hold out analysis.

        its stats 101 TE.

        You take the entire dataset.

        You hold out a portion

        You use the rest to predict.

        You test your ability to predict the hold out.

        When I say that the average American male is 5’9″ it’s NOT because I have measured them all.

        Now suppose I had a thermometer every 10 feet.

        And thermometer 1 says 75F and thermometer 2 says 75F
        My model (statistical or physical) will say that in between the temperature is 75!

        And you will say how do we know?

        So suppose I have thermometers every 5 feet…

        you will ask the same question…

        till eternity.

        Luckily folks dont listen to you
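Mosher’s hold-out procedure can be sketched in a few lines. Everything below (the 1-D station layout, the smooth field, the use of linear interpolation as the “model”) is an invented toy for illustration, not the BEST method itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "stations": positions along a transect, with a smooth
# temperature field plus local measurement noise
x = np.sort(rng.uniform(0.0, 100.0, 40))
t = 20.0 + 0.05 * x + rng.normal(0.0, 0.3, x.size)

# Hold out every 5th station; predict it from the rest by interpolation
hold = np.arange(0, x.size, 5)
keep = np.setdiff1d(np.arange(x.size), hold)
pred = np.interp(x[hold], x[keep], t[keep])

# Skill of the infilling method = error on stations it never saw
rmse = np.sqrt(np.mean((pred - t[hold]) ** 2))
```

If the RMSE on withheld stations is comparable to the measurement noise, the infilling is doing about as well as possible; if it blows up, the interpolation is over-reaching – which is exactly Turbulent Eddie’s objection for regions with no data at all to hold out.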

      • Regarding so-called holdout analysis (and as I have said before), if two subsets of your data give the same result then something is wrong, because the chances of that are very slim, unless your variance is almost zero. And we know that the variance in temperature trends from place to place is actually quite high.

        In fact the concept of confidence intervals is based on the fact that different sample sets will give different results. Perhaps BEST’s use of a global temperature field swamps the local variance, in which case it is worse than useless.

      • On reflection, Mosher, I hope that this “hold out” analysis is based on the actual data, not the continuous field values, because the global field approach creates an enormous amount of false homogenization.

        For a simple example, suppose you have two actual measurements, at two points some distance apart. One is 10 and the other is 20, which are significantly different. If you then draw a straight line from one to the other and impute the temperature at each intermediate point based on that line, then all of these points will have a smaller variance than the real points. In many cases the variance will be very much smaller. As I understand it, this is basically what BEST does.

        Then too a lot of the temperature adjustments, in all of the statistical temperature models, seem to be a matter of imposing homogenization on the real points, where none actually exists. I have never seen two local records that were identical. Most are very different, thus making the variance large.
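David Wojick’s variance point is easy to check numerically. The “actual” intermediate readings below are invented for illustration:

```python
import numpy as np

# Two real stations reading 10 and 20, with nine infilled points on the
# straight line between them
infill = np.linspace(10.0, 20.0, 11)

# A hypothetical set of actual intermediate readings with local scatter
actual = np.array([10.0, 16.0, 8.0, 15.0, 20.0, 11.0, 19.0,
                   13.0, 22.0, 14.0, 20.0])

print(np.var(infill), np.var(actual))  # infill variance < actual variance
```

In this example the straight line’s variance (10.0) understates the scatter of the hypothetical actual readings (about 19.1), illustrating the smoothing. Whether that smoothing biases a global average, as opposed to merely understating local variability, is a separate question.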

      • David Wojick: Regarding so-called holdout analysis (and as I have said before),if two subsets of your data give the same result then something is wrong, because the chances of that are very slim, unless your variance is almost zero.

        Not to worry about that: they compute the mean square error over the held-out measures.

      • hold out analysis.

        Would seem to apply only for domains that actually have data to be held out!

        If you have areas for which there is no data, you have no clue.
        The reanalyses make a guess based on other things that may have been measured. But with chaotic fluid flow, like the atmosphere, there are an infinite number of distributions, all of them physically valid, and many which are out of phase with one another. That is why there is such variation among even reanalyses which strive to be physically consistent.

        We have century long surface data, really only for CONUS, Europe and Japan:

        Thirty year data is a little better.
        We have RAOB data from about 1958 ( before that, no clue ).
        We have Satellite data since 1979.

        But it’s foolish and hubris to pretend we know what we don’t.

      • David

        “On reflection, Mosher, I hope that this “hold out” analysis is based on the actual data, not the continuous field values, because the global field approach creates an enormous amount of false homogenization.”

        1. of course it is Actual data.
        2. Even better, it's NEW DATA recently recovered.
        3. you have ZERO idea about what homogenization does

        “For a simple example, suppose you have two actual measurements at two points some distance apart. One is 10 and the other is 20, which are significantly different. If you then draw a straight line from one to the other and impute the temperature at each intermediate point based on that line, then all of these points will have a smaller variance than the real points. In many cases the variance will be very much smaller. As I understand it, this is basically what BEST does.”

        1. ERR NO that’s not what we do.

        “Then too a lot of the temperature adjustments, in all of the statistical temperature models, seem to be a matter of imposing homogenization on the real points, where none actually exists. I have never seen two local records that were identical. Most are very different, thus making the variance large”

        Again, you don’t understand what you are talking about.
        There is no imposition of homogenization on the real points.

        Local records that are the same? What are you jabbering about?

      • Excerpt from another tour de force comment by Duke physicist Dr Robert Brown:

        “Occasionally I see a paper in climate science that does decent statistics, in particular one that openly acknowledges the enormous errors that are more typically minimized and/or openly misrepresented, especially in any material intended for “public” consumption. To be blunt, climate science in general makes claims of “confidence” across the board that cannot possibly be justified using axiomatic statistics. The worst instances of this abuse of terminology with a precise meaning in actual statistics in a context where the reader is deliberately invited to believe that that is the sense being used are in (for example) the summary for policy makers (SPM) of the various ARs from the IPCC, where the abuses are so egregious they have inspired a number of climate scientists to withdraw altogether from the process.

        It is, for example, amusing to examine the changes and differences in the temperature anomalies version to version and between two different products that are supposedly measuring the same thing. Consider this plot:

        http://www.woodfortrees.org/plot/hadcrut4gl/from:2010/to:2015/plot/gistemp/from:2010/to:2015

        This is gistemp and hadcrut4, both global temperature anomalies with a supposedly common base, plotted side by side over just the last five years. As you can see, GISS considers the anomaly to be 0.2C higher than the CRU. If one downloads the HadCRUT4 data and tallies all sources of acknowledged error, the error year to year is claimed to be 0.1 C. This error cannot be a standard deviation, as it sums three or four distinct estimated contributions to a total error, so we have to assume that it is supposed to be a confidence interval, that it is 95% certain that the actual anomaly is within 0.1 C either way of the number they publish. However, let’s be generous and assume that it really is supposed to be a standard deviation or normal equivalent.

        Either way it is amusing to note that a very simple way to interpret this graph is that it is 95% or better certain that GISStemp LOTI is wrong according to the CRU! If one plots BEST it is a lot more than 95% certain that BEST is wrong — the two differ by as much as a degree C. If one plots the BEST 95% confidence bound (which actually is available on WFT, amazingly) it is 99.99% certain that HadCRUT4 is badly, badly wrong, as is GISStemp LOTI. HadCRUT3 isn’t quite 95% certain to be wrong according to HadCRUT4, but it is close. One wonders what the error claims for HadCRUT3 were?

        To paraphrase Inigo Montoya in The Princess Bride, “I do not think this `confidence interval’ means what you think it means…”
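One way to make Brown's comparison concrete: if each dataset's 95% interval is the claimed ±0.1 C and the two error sources are independent (my assumption, not Brown's), the interval on their difference is sqrt(0.1² + 0.1²) ≈ 0.14 C, so a persistent 0.2 C offset does fall outside it:

```python
import math

ci_a = 0.1  # claimed 95% half-width, dataset A (deg C)
ci_b = 0.1  # claimed 95% half-width, dataset B (deg C)

# Half-width for the difference of two independent estimates
ci_diff = math.sqrt(ci_a ** 2 + ci_b ** 2)

offset = 0.2  # offset between the two anomaly series in the plot
print(round(ci_diff, 3))   # 0.141
print(offset > ci_diff)    # True: the offset exceeds the combined interval
```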

        https://t.co/py2MvC6qjQ

        Oooh, gotta hurt SM

      • David Springer

        Steven Mosher | November 6, 2015 at 1:10 pm |

        “Luckily folks dont listen to you”

        Judging by the global lack of significant action to reduce CO2 emission despite 30 years of hand waving and purse clutching by people far more credible (read actual scientists) than Mosher I’d have to say it’s you who folks don’t listen to.

        They say the definition of crazy is doing the same thing over and over with the same result while each time expecting something different to happen. Fitting for all the nattering nabobs of warming negativity, eh?

        HAHAHAAHAHAAHAHAHAHAHA!!!!!!!!!!!!!!11

      • Steven Mosher: “Luckily folks dont listen to you”

        Heh! You wish!

        Here is the current state of the United Nations ‘My World’ Global survey concerning worldwide causes of concern, covering 8,582,414 respondents.

        http://data.myworld2015.org/

        Note that ‘Action taken on climate change’ comes stone last, 16th of 16 categories.

        So despite over two decades of hooting, screeching, Mannipulating the temperatures and just flat out making stuff up, it seems that only a small proportion of the World’s citizens take you lot and your chicken little alarmism seriously.

        So I contend that it’s YOU and your fellow climate “scientists” that nobody is listening to – for a very good reason.

        Now, why what would that be, do you think?

    • “Near-surface and lower-tropospheric warming of the Arctic over the past 35 years is examined for several datasets. The new estimate for the near surface reported by Cowtan and Way in 2014 agrees reasonably well with the ERA-Interim reanalysis for this region. Both provide global averages with a little more warming over recent years than indicated by the widely used HadCRUT4 dataset, which has sparse coverage of the high Arctic. ERA-Interim is more sensitive than the Cowtan and Way estimate to the state of the underlying Arctic Ocean.

      Observational coverage of the Arctic varies considerably over the period. Surface air-temperature data of identified types are generally fitted well by ERA-Interim, especially data from ice stations, which appear of excellent quality. ERA-Interim nevertheless has a warm wintertime bias over sea-ice. Mean fits vary in magnitude as coverage varies, but their overall changes are much smaller than analysed temperature changes. This is also largely the case for fits to data for the free troposphere. Much of the information on trends and low-frequency variability provided by ERA-Interim comes from its background forecast, which carries forward information assimilated from a rich variety of earlier observations, rather than from its analysis of surface air-temperature observations.”

  3. We learn that a person was born and then died 80 years later and it’s all we need to know to understand that person because anything else we want to know can be provided by interpolation or infilling, all without ever leaving our couch.

  4. Wonderful data review. The pause/hiatus has two important consequences. First, its generally >17 year length casts great doubt on all CMIP3 and CMIP5 projections. Yet those projections and the associated ECS ~3 are what drives COP21. Second, it highlights the IPCC anthropogenic attribution error present ever since TAR (AR3) and Mann’s infamous hockey stick. Natural variation on decadal scales is significant and has been insufficiently considered. Partly this is because of UNFCCC and IPCC’s charter. Natural variation greatly weakens the CAGW case for mitigation.

  5. Meanwhile, over at Skeptical Science, Dana Nuccitelli shows a temp vs modeled graph that shows NOAA observations a) almost exactly the same as modeled, and b) no hiatus at all.

    It mystifies me how different people, even with different agendas or takes on uncertainties, can come up with diametrically opposed conclusions.

    I’m tempted to think that actual delusional behaviour is common in the community – not just in the minds of our political or business leaders.

  6. I don’t know how cherry picking got its semantics. If you’ve picked cherries, which involves dropping them into a bucket and getting paid for how many buckets you fill, you’re completely indifferent to quality.

    In fact it’s a firing offense to strip less than the whole treeload.

    • This statement is simply incorrect.

      I grew up on a cherry and yes you are expected to do some sorting.

      • That must have been one big honking cherry.

      • I grew up on a cherry farm and yes you are expected to do some sorting.

        There that’s better.

        The other thing about being a cherry farmer is you aren’t impressed by other men’s exploits. I’ve picked over 300 tons of cherries in a single year.

    • Prunus avium typically grows fast and high, so you can’t pick cherries from the top branches. The birds will always get those. You get only what is called the low-hanging fruit.

      Commercial cherry-growers might have a different way to see this. OTOH, cherries need some more global warming to grow here well. Commercial cherries come here now from Spain. It’s a pain how easily winter kills a cherry tree. That’s why my language does not pick cherries, it uses cream-peeling instead.

      • I had a cherry tree in central Illinois, winter lows were usually below 0 F and sometimes down to 20 below.

        Cherry trees grow fine in that climate.

      • Driving through central and eastern WA to spend a weekend in Walla Walla, I noticed how they are now shaping fruit trees into a Y shape to make harvesting more efficient by mechanical means.

      • I currently live in Southwest Lower Michigan and Winter lows are not routinely below 0F. It happens occasionally but it is not routine. I lost two cherry trees when temperature dropped to about -20F each night for about 2 weeks. The apple and pear trees were fine, but it killed the cherry trees. They can handle the occasional low spikes but not the persistently cold nights.

      • Cherry trees are killed by extreme cold during the winter, but also by less extreme cold during the spring. They also need a long growing season and a high heat sum to be ready for winter. And it is not only the cold; they can be killed by sunshine and wind in the spring. They can also be killed by pruning, for example because pruning accelerates growth, hindering winter hardiness.

        Global warming is projected to add moisture to the air and make winters milder, so it would be very good on average for cherry trees. Unluckily, one degree C is not very much.

        The best varieties are also more difficult to grow, and choosing the right rootstock can change results considerably.

    • I thought the phrase came from picking the best cherries out of the bowl on your table, not off the tree in your orchard.

      • Quite so: “cherry-picking: selectively choose (the most beneficial or profitable items, opportunities, etc.) from what is available.”

      • Planning Engineer

        Yes-I thought it was about picking out the cherries from a source containing mostly lesser fruit and misrepresenting the overall source.

      • Whenever we found a stone in one of her pies, my mother would say ‘That’s how you can tell the cherries are real’.

        It’s rare to find stones in the Climate Pie.
        ============================

  7. These graphs look completely idiotic, by the way. It’s data but in the context of gaussian random processes they tell you nothing that you need to know.

    Small cycles are on big cycles and big cycles are on bigger cycles. The bigger the cycle, the less you know about it from data.

    So, speaking as one experienced in stationary gaussian random processes, you have zero information about where the temperature is going, long term. It might be up, it might be down, 50% either way.

    I suspect climate science badly needs a course in stationary gaussian random processes.

  8. dikranmarsupial

    “1) the rate of warming over a particular period of at least 10 years is not statistically significant from zero (with the context of a nominal 0.1C uncertainty). ”

    This is a nonsensical criterion from a statistical point of view. Over a period as short as a decade the statistical power of the test is too low to expect a statistically significant trend of the expected magnitude even if it is actually present. A lack of a statistically significant trend does not mean there has been a change in the rate of warming.
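dikranmarsupial's point can be checked by simulation: generate 10 annual anomalies containing a true 0.1 C/decade trend plus independent 0.1 C noise (the nominal figures from the post; real anomalies are autocorrelated, which lowers power further) and count how often OLS declares the trend significant. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(10)   # a 10-year window
true_trend = 0.01       # 0.1 C/decade expressed per year
sigma = 0.1             # interannual noise, deg C
t_crit = 2.306          # two-sided 5% critical t value, df = 8

hits, n_sim = 0, 2000
for _ in range(n_sim):
    y = true_trend * years + rng.normal(0.0, sigma, years.size)
    slope, intercept = np.polyfit(years, y, 1)
    resid = y - (intercept + slope * years)
    # standard error of the OLS slope
    se = np.sqrt(resid.var(ddof=2) / ((years - years.mean()) ** 2).sum())
    hits += abs(slope / se) > t_crit

power = hits / n_sim
print(power)  # far below the conventional 0.8 target
```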

    • @dikranmarsupial

      Good. Finally someone brings up statistical power. Now if only the use of the term ‘uncertainty’ could be cleaned up.

      • nobody talks about power in these discussions

      • Yes, and that makes me tend to dismiss them pretty quickly. I am very glad to see it raised. Same with ‘uncertainty’ [as a statistical term] and ‘significance’. IMO when people use statistics for inference, all the details have to be presented. That can be a demanding task.

      • dikranmarsupial

        It is easier in a Bayesian setting, where the terms have meaning more in accord with their everyday usage. Frequentist statistics are useful, but mistakes happen because people tend to interpret frequentist terms as if they were Bayesian ones. The problem with frequentist tests is that you can’t assign a probability to the truth of a particular proposition or hypothesis (as it has no long run frequency), so they instead talk about significance instead. Unfortunately what we really want is a numerical indication of the relative plausibilities of H1 and H0, which no frequentist procedure can provide.

      • Steven Mosher: nobody talks about power in these discussions

        Santer and McKitrick have both published on power. It’s where they came up with recommendations of how long a series should be in order to conclude that a change had occurred. Plenty of people have commented on the low statistical power to detect a long period oscillation based on a short data series.

      • Since you guys are discussing statistical power, one aspect is that a greater sample size, all else being equal, implies greater SP. But what happens as in the case of temp measurements when the samples are widely distributed in space and time. It seems that would imply less SP.

        How is that taken into account?

    • +10 The lack of a statistically significant trend does not mean much at all, either way.

    • dikranmarsupial – What is the statistical power of HADCRUT4 global mean and UAH lower trop global mean from 2000 to 2010? How would you even be able to calculate that for something like HC4?

      • dikranmarsupial

        means don’t have statistical power, tests do.

      • So, using the entirety of HADCRUT4. H0 is a zero trend. H1 is a positive trend – not sure what magnitude of trend to use. Any suggestions?

        At any rate, how would you proceed? For this scenario, let’s consider 2 hypothetical cases. 1. The precision of all thermometers used anywhere is +/- 1 C. 2. The precision of all thermometers used anywhere is +/- 10 C.

        How would you carry the precision of the thermometers through in the test?

      • dikranmarsupial

        jim2, you could try reading the Santer paper, from which the 17 year figure originates. Or you could read the 2009 paper by Easterling and Wehner, which shows that decadal periods with little or no trend happen every now and again in both the observations and the model runs, which suggests that tests based on a decade or so will have little statistical power.

        A proper analysis of statistical power is not straightforward, which is why they tend not to be used anything like as much as they should. Unfortunately they matter most if you are trying to make an argument for the null hypothesis, which is why most scientists know to choose a H0 that they don’t want to be true. If you are arguing against H0, the need for analysis of statistical power is rather less.

      • At this point, I’m interested in something more fundamental than which horse is winning the race. I’m curious how through all the data processing, the precision of the thermometers affects both the path of the mean global temperature over time and the error bars at any particular time in the period of record.

      • dikranmarsupial

        jim2, the uncertainty in the measurements, if included in the analysis, will reduce the statistical power (meaning the lack of a statistically significant trend means even less). I suspect the reduction in power is not that great as the effects of e.g. ENSO are much larger than the measurement uncertainty in the observations. If you really want to know how to compute the power of the test, then Tamino would be a much better person to ask as he is a time-series specialist (“Understanding Statistics” is an excellent starting point for someone wanting to learn the basics of statistics).

      • dikran… lol, no doubt the right direction, but it would be more humane to send jim2 to H.

      • I already know the basics of statistics. But sparse data spread out over space and time using measuring instruments with +/- 1.5 C precision and questionable accuracy is a bit beyond “basic.”

      • JCH – I realize you are upset that Dikran is telling you the upward trend is statistically insignificant.

      • the uncertainty in the measurements, if included in the analysis, will reduce the statistical power (meaning the lack of a statistically significant trend means even less).

        And, simultaneously, the presence of a statistically significant trend would mean even less.

      • dikranmarsupial

        jim2 wrote “I already know the basics of statistics.”

        jim2 also wrote “JCH – I realize you are upset that Dikran is telling you the upward trend is statistically insignificant.”

        I’m sorry jim2, but if you understood the basics of statistics, you would know that there is no reason for JCH to be “upset” by this; the test has little statistical power, so the lack of statistical significance doesn’t mean much, it certainly doesn’t mean that the trend has not continued at the same rate as before.

        The bottom line is that if you want to argue for H0, then a non-significant result does not support your argument unless you can show that the statistical power of the test is high. That is why scientists generally argue against the null hypothesis rather than for it.

      • dikranmarsupial

        opluso wrote: “And, simultaneouly, the presence of a statistically significant trend would mean even less.”

        I don’t think that is true, I would have thought that if this uncertainty were taken into account in computing the p-value it would make it more difficult to get a low p-value (as there is more noise/uncertainty, so H0 can “explain” a wider range of observations).

        NHSTs are not symmetric (note that they only use H0, and although you are supposed to specify H1 it isn’t used in the test at all, unless you evaluate the statistical power).

      • Dikran – your persistence in attributing to me an argument about H0, H1, H2, or H anything is wearing a little thin. Other people can read, I’m not so sure about you though.

      • And on top of that, if a “scientist” favors H0 over H1 without a strong statistical test to back it up, then that person isn’t much of a scientist.

      • dikranmarsupial

        jim2 wrote “And on top of that, if a “scientist” favors H0 over H1 without a strong statistical test to back it up, then that person isn’t much of a scientist.”

        That is exactly what happens when someone argues for the existence of a hiatus based on the lack of statistical significance (favoring H0 over H1) from a test with low statistical power (which is therefore not a strong test). That is exactly what is wrong with Prof. Curry’s first criterion.

      • jim2 – I’ve always known that current warming, while red hot, is not statistically significant. That it has continued warming into the teeth of the zenith of cooling factors by natural variation… all that NV has to offer… I’ll take that in a second.

      • dikranmarsupial:

        I would be interested in your thoughts on the possibility that researchers might sift through null definitions and/or data sets until a publishable result can be produced.

        Given that a failure to reject the null simply is not acceptable in today’s publish-or-perish, er, climate, isn’t there great incentive to continue digging through data sets or trying alternative methodologies until “Eureka!”?

        It is obviously easier to score points if you are allowed to move the goalposts or bring in more favorable referees in the middle of the game.

        If such practices exist, do you have suggestions for exposing or limiting this behavior? Or would you consider it a non-issue?

      • Since the MSU sats collect 25 GB per year, that’s about 68 MB PER DAY. Of that, I’m not sure how much is the radiance data, but there has to be a lot of that in the data set. And it has to far outweigh sampling by land and ocean thermometers. Statistical power depends on sample size, so sats have surface beat hands down on that count.

        I’m not sure how to deal with the effect size, but it seems that the sat record would have a chance of detecting a pause over 10 years.

        From the article:
        How do I calculate statistical power?
        The power of any test of statistical significance will be affected by four main parameters:

        the effect size
        the sample size (N)
        the alpha significance criterion (α)
        statistical power, or the chosen or implied beta (β)

        http://effectsizefaq.com/2010/05/31/how-do-i-calculate-statistical-power/
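The four parameters listed above trade off against each other: fix any three and the fourth follows. For a simple two-sided z-test of a mean, power can even be written in closed form; a sketch using a normal approximation (the effect sizes and sample sizes here are illustrative, nothing to do with any particular temperature dataset):

```python
import math

def z_power(effect_size, n, z_alpha=1.96):
    """Approximate power of a two-sided one-sample z-test
    (ignores the tiny probability mass in the opposite tail)."""
    shift = effect_size * math.sqrt(n)  # noncentrality parameter
    return 0.5 * (1.0 - math.erf((z_alpha - shift) / math.sqrt(2.0)))

# Power rises with sample size at fixed effect size and alpha.
for n in (10, 30, 100):
    print(n, round(z_power(0.5, n), 2))
```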

      • dikranmarsupial

        I don’t think that is true, I would have thought that if this uncertainty were taken into account in computing the p-value it would make it more difficult to get a low p-value (as there is more noise/uncertainty, so H0 can “explain” a wider range of observations).

        As I’ve asked a few other times with no responses on this blog, should it make any difference if the researcher decides to use p<0.1 instead of p<0.05?

        Should it make any difference when that decision is actually made?

        You have taken Prof. Curry to task for her statements regarding one possible null hypotheses relating to the "hiatus". Hopefully you will continue to build upon these exchanges in the future.

        In the meantime, it would be helpful to hear your opinions concerning the fact that the Karl, et al., hiatus-killing paper did not achieve p<0.05 (and did not achieve p<0.1 for land or ocean for the years 1998-2012).

        From Karl, et al., at page 2:

        It is also noteworthy that the new global trends are statistically significant and positive at the 0.10 significance level for 1998–2012 (Fig. 1 and table S1) using the approach described in (25) for determining trend uncertainty. In contrast, IPCC (1), which also utilized the approach in (25), reported no statistically significant trends for 1998–2012 in any of the three primary global surface temperature datasets. Moreover, for 1998–2014, our new global trend is 0.106 ± 0.058°C dec−1, and for 2000–2014 it is 0.116 ± 0.067°C dec−1 (see table S1 for details). This is similar to the warming of the last half of the 20th century (Fig. 1). A more comprehensive approach for determining the 0.10 significance level (see supplement) that also accounts for the impact of annual errors of estimate on the trend, also shows that the 1998–2014 and 2000–2014 trends (but not 1998–2012) were positive at the 0.10 significance level.

      • dikranmarsupial

        Opluso asks: “As I’ve asked a few other times with no responses on this blog, should it make any difference if the researcher decides to use p<0.1 instead of p<0.05?”

        Yes. The use of a fixed significance level without justification is something I doubt Fisher would have approved of (cf. “The Null Ritual”). In practice the selection of alpha (the threshold) depends on the nature of the analysis, and partially fills the role taken by the prior in a Bayesian analysis. This is illustrated very nicely by this xkcd cartoon: https://xkcd.com/1132/

        In this case, the null hypothesis is that the sun has not gone nova; the prior probability of H0 is almost one, so we should require very strong evidence that H0 is unlikely to be correct before we reject it. Choosing alpha = 0.05 in those circumstances is clearly absurd, and a competent frequentist statistician would not do so. Similarly, the probability that CO2 is no longer a greenhouse gas from 1998, or that there is some new forcing that we don’t know about and haven’t measured, is somewhat unlikely. So someone claiming that global warming has suddenly stopped needs strong evidence to support that, and even if they used H0 = warming continues at the same rate, they probably ought to use an alpha less than 0.05. The Bayesian approach to statistics has the advantage of making the priors explicit in the analysis; they are still present in a frequentist analysis, but only implicitly, and are often ignored entirely.

        As to publications, it is only necessary to have a statistically significant result if those observations are the only support for the claim. This is generally not the case in science, and there is often supporting evidence from other sources that the reviewers ought to take into account. P-hacking (searching for the result that suits your argument) does go on in the journals, and it is fairly easy to spot; however, similar statistical flaws also happen in the opposite direction. Looking for the longest period without a statistically significant trend (and then looking only at the dataset that makes this as long as possible) is another example of p-hacking; it is just that the game is to make p as large as possible rather than as small as possible (or rather, to find the longest period for which p is just greater than 0.05). The real problem is that people have a tendency not to use statistics in a self-skeptical way, hence the old joke “he uses statistics as a drunk uses a lamppost – more for support than illumination”.
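The xkcd strip dikranmarsupial cites can be reduced to a two-line Bayes calculation: the detector “lies” only on double sixes (probability 1/36, comfortably past p < 0.05), yet with a tiny prior the posterior barely moves. The one-in-a-million prior below is my illustrative choice:

```python
prior = 1e-6           # prior probability the sun has gone nova (illustrative)
p_yes_if_nova = 35/36  # detector answers truthfully unless it rolls double sixes
p_yes_if_quiet = 1/36  # detector lies and says "yes" anyway

# Bayes' rule: P(nova | detector says yes)
posterior = (prior * p_yes_if_nova) / (
    prior * p_yes_if_nova + (1 - prior) * p_yes_if_quiet)
print(posterior)  # ~3.5e-5: almost certainly still no nova
```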

      • dikranmarsupial:

        Thank you for your reply.

        This is yet another example of why so many of us value Dr. Curry’s site as a source of helpful insights and useful information.

      • dikranmarsupial

        Cheers opluso, while some of the comments are undoubtedly useful, I would point out that the blog article itself is deeply misleading, none of the three criteria are reasonable as stated.

      • For those who are playing along at home, the methodology for determining trend uncertainty used in Karl, et al. (2015) was based on Santer, et al., (2008). Each paper utilizes statistical testing of a null hypothesis in examining the question of the hiatus/pause. Very interesting reading.

        Karl (2015):
        http://sciences.blogs.liberation.fr/files/noaa-science-pas-de-hiatus.pdf

        Santer (2008)
        https://www.gfdl.noaa.gov/bibliography/related_files/bds0801.pdf

        Key background papers for Karl, et al.’s construction of ERSST v4 are paywalled (at the links on NOAA’s site) — which is an extremely serious obstacle to citizen scientists and another issue worthy of multiple blog posts. Persistent searching or a respectful request to the authors often turns up non-paywalled sources, such as:

        Huang (2015)
        http://rda.ucar.edu/datasets/ds277.0/docs/ERSST.V4.P1.JCLI-D-14-00006.1.pdf

        Liu (2015)
        http://rda.ucar.edu/datasets/ds277.0/docs/ERSST.V4.P2.JCLI-D-14-00007.1.pdf

        Of course, Ross McKitrick helped kick off this public controversy and his first look at Karl (2015) may be found here:
        http://www.rossmckitrick.com/uploads/4/8/0/8/4808045/mckitrick_comments_on_karl2015.pdf

      • This is from the supplemental material for Karl 2015:

        …The factor that contributed the largest change in SST trends over this period was continuing to make corrections to ship data after 1941. These corrections are based on information derived from night marine air temperature. This correction cools the ship data a bit more in 1998-2000 than it does in the later years, which thereby adds to the warming trend. To evaluate the robustness of this correction, trends of the corrected and uncorrected ship data were compared to co-located buoy data without the offset added. As the buoy data did not include the offset the buoy data are independent of the ship data. The trend of uncorrected ship minus buoy data was -0.066°C dec-1 over the period 2000-2014, while the trend in corrected ship minus buoy data was -0.002°C dec-1.

        This close agreement in the trend of the corrected ship data indicates that these time dependent ship adjustments did indeed correct an artifact in ship data impacting the trend over this hiatus period. …
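The “trend of ship minus buoy” diagnostic quoted above is just an OLS slope fitted to a difference series. A toy version with synthetic monthly data (values invented for illustration; the real test used co-located ship and buoy SSTs over 2000-2014):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(180) / 120.0  # 15 years of monthly samples, in decades

# Hypothetical co-located anomalies sharing the same underlying trend
ship = 0.10 * t + rng.normal(0.0, 0.05, t.size)
buoy = 0.10 * t + rng.normal(0.0, 0.05, t.size)

# OLS trend of the difference series, deg C per decade
diff_trend = np.polyfit(t, ship - buoy, 1)[0]
print(round(diff_trend, 3))  # near zero when the two records agree
```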

      • JCH:

        Your quote from Karl (2015) raises another question I haven’t been able to answer.

        As I understand your quoted description, uncorrected buoy data (that is, prior to adding 0.12 C) was used to test the validity of extending the mid-century ship adjustments into recent years. It states that the buoy data was tested “without the offset added.”

        Thus, here is my question:

        If extending pre-1941 ship data adjustments into recent decades by itself produced nearly perfect agreement (-0.002 C/decade) with the uncorrected buoy data — why did they need to add an additional adjustment to the buoy data afterwards?

        Or perhaps I misunderstood their testing procedure?

  9. So it is no surprise that this is a political problem, and in fact “warmists” make sure each passing year is the “warmest” year on record … by restating the data.

  10. Hiatus or not, all of the temperature dataset trends since 1997 are quite different from their more rapidly rising trends during the 1980s and 90s, when so much momentum regarding an imminent AGW crisis developed. But the Mauna Loa yearly average atmospheric CO2 level increased 9.5% (35 ppm) in the 17 years from 1997 through 2014, while it increased only 7.4% (25 ppm) from 1980 through 1997. Hiatus or not, what does this say about the sensitivity of surface temperatures to the atmospheric CO2 level? Clearly there are important factors involved in global surface temperature variation other than atmospheric CO2 levels.

    It is time for the IPCC and its disciples to get more realistic about their CO2 climate sensitivity estimates (which our EPA uses, and inflates, to justify its proposed CO2 emissions regulations), estimates that look more ridiculous with each passing year of CO2 and global average temperature data, as well as against a growing body of CO2 climate sensitivity research focused on observational data.

    • Yep! The way this layman sees it, the crux of the matter is that, as per the satellite data, temperatures have come nowhere close to being where they were projected to be based on the amount of increase in atmospheric CO2. Thus the hypothesis has already been falsified, El Niño warming or no!

    • Writes Mr. Doiron:

      Hiatus or not, all of the temperature dataset trends since 1997 are quite different from their more rapidly rising trends during the 1980’s and 90’s when so much momentum regarding an imminent AGW crisis developed.

      It appears that “the temperature dataset trends since 1997” have been subject to comparison against instrumental thermometry less subject to artifact (inadvertent and purposefully imposed) than were the varying numbers of surface stations from which had been derived the “more rapidly rising trends during the 1980’s and 90’s”.

      Indeed, suspicions persist that those “rapidly rising trends” in the datasets had been deliberately and duplicitously exaggerated for the explicit purpose of fostering “an imminent AGW crisis,” the better to panic the great gullible majority of voting populations throughout the democratic Western polities and to facilitate the power-grabbing greed and sheer bastardliness of the governing classes in those nations.

      “Manmade Global Warming” is a collection of ideas that have been thoroughly discredited by real science for years. Yet you would never know it by observing the behavior of politicians, media personalities, and certain corrupt academics and scientists. There is not now, nor was there ever any scientifically respectable evidence for global warming. Like Lysenkoism, it is a complete and total fabrication, a hoax.

      Yet it continues to have a strictly political life because, just as Lysenkoism served Stalinism by backing up Marx’s flawed notions — Global Warming serves today’s collectivists by offering them an excuse to seize control, not merely of the means of production, but of each moment, every aspect of the lives of every individual under their thumbs.

      To be absolutely certain the opportunity isn’t missed, dissenters — meteorologists and others willing to dismiss Global Warming as the crock it happens to be — have found themselves intimidated, denied funding and tenure, even fired. Here and there you’ll even see demands that “climate change deniers” be prosecuted, imprisoned, or executed. Somewhere, the ghosts of Stalin and Lysenko are having a huge laugh together.

      — L. Neil Smith (30 August 2009)

  11. dikranmarsupial

    In statistical hypothesis tests you take H0 (the null hypothesis) to be the hypothesis that you need to nullify in order to continue with your alternate, or research hypothesis. I.e. it is normally the opposite (in some appropriate sense) of your research hypothesis. So if you want to claim that there has been a pause, then your null hypothesis should be that the rate of warming has continued unchanged. You don’t adopt the null hypothesis that there has been a hiatus (as in criterion 1) as that totally circumvents the self-skepticism that the hypothesis test is supposed to enforce.

    So, has anyone performed a test for a change in the rate of warming (with “no change” as H0)?
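    The test asked for above can be sketched directly: instead of testing the recent slope against zero, test it against the earlier warming rate (here the AR5 1951–2012 rate of 0.12C/decade, i.e. 0.012C/yr, quoted in the post). The annual series below is synthetic, and the noise level is an assumption for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic annual anomalies: warming at 0.012 C/yr plus noise
years = np.arange(1998, 2015)
temps = 0.012 * (years - 1998) + rng.normal(0, 0.09, years.size)

# OLS slope and its standard error
res = stats.linregress(years, temps)

# H0: warming continues at the 1951-2012 rate (0.012 C/yr),
# NOT H0: slope = 0.  Test statistic: (b - b0) / se(b)
b0 = 0.012
t_stat = (res.slope - b0) / res.stderr
df = years.size - 2
p = 2 * stats.t.sf(abs(t_stat), df)

print(f"slope = {res.slope:.4f} C/yr, p (vs continued warming) = {p:.3f}")
```

With real annual anomalies in place of the synthetic series, a large p-value here only means the data cannot distinguish continued warming from a slowdown over so short a record, which is exactly the asymmetry discussed in this thread.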

    • See my publication nullifying the null hypothesis
      https://judithcurry.com/2011/11/03/climate-null-hypothesis/

      In simple laboratory experiments or in clinical trials, the null hypothesis is easily formulated in the context of “no effect.” Although the null hypothesis is rarely discussed explicitly in the context of climate research, it is implied in analyses that result in a confidence interval. The null hypothesis is typically of the “no effect” type, although it need not be. The choice of the null and alternative hypotheses is determined by the question that the scientist seeks to investigate. There is no unique argument of logic to determine the appropriate null hypothesis for a particular alternative hypothesis, or to determine which hypothesis in an opposing pair should be considered as the null hypothesis.

      • dikranmarsupial

        Sorry Prof. Curry, a blog post does not change acceptable statistical practice. The standard null hypothesis test starts out from the assumption that the null hypothesis is correct, and then determines whether the observations allow you to reject it. The test in 1) starts out assuming that there is a pause (H0: no trend), so it is biased from the outset towards the conclusion that there is a pause. This provides useful self-skepticism if you are arguing that there is no pause, but none whatsoever if you are arguing for a pause. If you understand that the purpose of the NHST is to prevent the scientist from getting carried away with their research (i.e. to enforce self-skepticism), then the logic of the choice of H0 is pretty straightforward.

      • Read my published paper linked to in the blog post and the references cited in my published paper

      • dikranmarsupial

        Example: If I flip a coin four times and get a head each time, the usual statistical test for the bias of a coin would give a non-significant result. Now, if I am trying to prove that the coin is unbiased, does this result establish my case? The answer is “no, none at all” (because I am taking H0 as the thing I am trying to prove and the power of the test is too low for it to be meaningful).
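        The coin example can be checked exactly; a minimal sketch using scipy's standard exact binomial test (nothing here is specific to the climate data):

```python
from scipy import stats

# Four flips, four heads: exact two-sided test of H0 "fair coin"
p_four_heads = stats.binomtest(k=4, n=4, p=0.5).pvalue
print(p_four_heads)  # 0.125 -> cannot reject fairness at alpha = 0.05

# Smallest p-value ANY outcome of four flips can produce:
min_p = min(stats.binomtest(k, n=4, p=0.5).pvalue for k in range(5))
print(min_p)  # 0.125 -> the test can never reject at alpha = 0.05
```

Since no outcome of four flips can reject H0 at the 5% level, the power of this test is exactly zero: failing to reject says nothing at all about the coin, which is the point of the example.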

      • Judith,
        Sorry, but isn’t this obvious? If you select no trend as the null, then if we can’t reject the null, the strongest statement you can make is that we can’t rule out that there’s been no warming. We certainly can’t claim that there’s been no warming, because we also can’t reject the null of continued warming.

      • dikranmarsupial

        Prof. Curry, I took part in the discussion on that thread, where the same issues were brought up. A paper in Wires Climate Change does not override statistical practice either. A paper in a statistics journal perhaps.

        Now, as I said, if I am trying to claim that a coin is unbiased, does a failure to reject the null hypothesis in the traditional test support my claim if I only flip the coin four times? No; this illustrates why you shouldn’t take H0 to be the thing you are trying to establish.

      • dikranmarsupial

        ATTP expresses it well. The last time I looked at this, if you take H0 to be “no warming” then it can’t be rejected over these sorts of timescales, if you take H0 to be “no change in the rate of warming”, that can’t be rejected either. This basically tells you that the timescale is too short for meaningful analysis BASED SOLELY ON THIS SET OF OBSERVATIONS (if you include physics and Occam’s razor you have a bit more to go on).

      • Climate alarmists are sticklers with regards to acceptable statistical practices. (This dickrant character always shows up here with a nasty attitude.)

      • dikranmarsupial:
        Are these tests being set up as one-tailed or two? And do you think it should matter?

      • dikranmarsupial

        opluso – it depends on what question you are trying to answer with the test, but it is important to get the basic mechanics of the test right first.

      • Don Monfort

        Climate alarmists are sticklers with regards to acceptable statistical practices.

        Everyone should be. There are alternatives. If you use statistics then do it right or it is meaningless. That is just simply the nature of that beast. … and I do not think that I qualify as an alarmist.

      • dikranmarsupial:

        Don’t you have to establish what question you are trying to answer before you can set up the mechanics of the test?

      • dikranmarsupial

        opluso Yes, that is indeed another problem, if you want to define a hiatus as just a lack of a trend in GMSTs, then that is different from asking whether there has been a pause in global warming, for example. However, if you are arguing for H0, then you are forced to perform an analysis of the statistical power of the test, you can’t just use the normal quasi-Fisherian approach.

        There is a form of statistical test procedure for cases when you are arguing for H0, but I can’t remember what it’s called and hardly anybody uses it; it basically comes down to an exchange of roles between alpha (the threshold for the p-value) and beta (1 – power of the test). However, for most problems it is much easier just to transform it into a test where you are arguing against H0 and use the standard procedure.
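        The procedure alluded to above is generally known as equivalence testing, usually done via two one-sided tests (TOST). A minimal sketch on a sample mean, where the ±0.05 equivalence margin and the synthetic data are assumptions for illustration:

```python
import numpy as np
from scipy import stats

def tost_mean(x, low, high):
    """Two one-sided tests: is the mean provably inside (low, high)?
    Rejecting BOTH one-sided nulls supports equivalence, so the roles
    of H0 and H1 are swapped relative to the usual significance test."""
    n = len(x)
    m = np.mean(x)
    se = np.std(x, ddof=1) / np.sqrt(n)
    t_low = (m - low) / se     # H0: mean <= low
    t_high = (m - high) / se   # H0: mean >= high
    p_low = stats.t.sf(t_low, n - 1)
    p_high = stats.t.cdf(t_high, n - 1)
    return max(p_low, p_high)  # equivalence shown if this is < alpha

rng = np.random.default_rng(1)
x = rng.normal(0.0, 0.05, size=100)   # data centred on zero
p = tost_mean(x, -0.05, 0.05)
print(f"TOST p = {p:.2g}")  # small p: mean demonstrably within the margin
```

Note how the burden of proof is inverted: a small p-value here positively supports "no effect beyond the margin," which the ordinary test can never do.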

      • Climate alarmists are NOT sticklers with regards to acceptable statistical practices.

        I fixed it.

      • Re: Hiatus & null hypothesis 11/06/2015

        Dikranmarsupial takes issue with Dr. Curry re hypothesis testing. Neither is correct. The null hypothesis is not so easy to formulate, and the questioning of the hiatus is a perfect example. First, the problem is not a matter of accepting or rejecting the null hypothesis. The null hypothesis is the contradiction of the affirmative proposition, the removal or replacement of the Cause & Effect relationship that produced the facts at hand. The null hypothesis then provides a mathematical basis to answer the question of the statistical chances that the observed facts were due to statistical noise or other sources. This formulation leads to an objective measurement of the confidence in the affirmative proposition.

        In science confidence is the direct or residual noise in the data, while in less than scientific pursuits, confidence is doubt in the minds of observers or practitioners.

        For the hiatus, the affirmative proposition might be, for example, either (a) the slope of the recent Global Average Surface Temperature (or if one wishes, some other direct or derived temperature record) is within some range of zero (or if one wishes, less than some small positive value greater than zero) or (b) the increase in GAST, e.g., is due to or consistent with human CO2 emissions over the entire record (e.g., since satellites, since thermometers).

        Hypothesis (a) seems to be the one used by maybe everyone worrying himself with the apparent hiatus. The prevalent analysis seems to assume that data have inertia, when in fact neither temperature nor heat has inertia (inertia meaning the resistance to changes in motion, and not heat capacity, where the latter is the meaning given by IPCC and in climatology). Behind this worry seems to lie the assumption that the Cause, i.e., anthropogenic CO2, is persistent. It hides the real problem, namely that CO2 emissions continue to rise and should have caused a rise in temperature according to the AGW model, an IPCC model with established confidence levels, namely the Equilibrium Climate Sensitivity.

        A strictly objective analysis under Hypothesis (a) begins with forming piecewise continuous linear fits (hinged straight lines) to the entire temperature record of choice, and measuring the residue and the variance reduction ratio from the best fit for each number of segments. This is necessary to produce an objective analysis of the data: what is the strength of hiatuses, how frequent have they been, what are their duration, is a hiatus on-going now and how long has it existed.
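        The hinged-line procedure described above can be sketched for a single breakpoint; the synthetic data, slopes, break location, and starting guesses are all assumptions for illustration, and the full procedure would repeat this for increasing numbers of segments:

```python
import numpy as np
from scipy.optimize import curve_fit

def hinge(t, a, b1, b2, tb):
    # Continuous piecewise-linear ("hinged") fit: slope b1 before the
    # breakpoint tb, slope b2 after it, joined without a jump
    return a + b1 * t + (b2 - b1) * np.maximum(t - tb, 0.0)

rng = np.random.default_rng(5)
t = np.arange(0.0, 60.0)                      # e.g. years into a record
y = hinge(t, 0.0, 0.02, 0.005, 40.0) + rng.normal(0, 0.08, t.size)

popt, _ = curve_fit(hinge, t, y, p0=[0.0, 0.01, 0.01, 30.0])

# Variance reduction ratio of the two-segment fit vs. the raw series,
# one of the measures mentioned in the comment above
resid = y - hinge(t, *popt)
vrr = 1.0 - resid.var() / y.var()
print(f"breakpoint ~ {popt[3]:.1f}, variance reduction = {vrr:.2f}")
```

Comparing the residue and variance reduction as segments are added gives the objective inventory of hiatuses (strength, frequency, duration) that the comment calls for.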

        Hypothesis (b) precisely addresses the real problem. Now the null hypothesis is that the observed temperature is due to residual noise after removing the effects of other variables, if any, we might be able to model.

        Of course, it turns out that the Sun accounts for the entire HadCRUT3 record since thermometers, confirmed in part by the decline in solar output manifest in weakness in the latest and previous Solar Cycles, #23 and #24.

        CO2 is the Effect and GAST the Cause, not the reverse. Now the problem is to rationalize or repair the Keeling Curve. Atmospheric CO2 is a by-product of the ocean, and man’s contribution is trivial. The arrow of causation is the reverse of that of the AGW model.

      • “There is no unique argument of logic to determine the appropriate null hypothesis for a particular alternative hypothesis, or to determine which hypothesis in an opposing pair should be considered as the null hypothesis.”

        True, because null hypothesis p-value tests are nothing but a crude rule of thumb for determining if your data is “surprising” if there is “no effect”. It is no good at all if you are pretty sure there is an effect but want to know what caused it.

        To compare the strength of evidence for hypothesis H1 over hypothesis H0 under observed evidence E you need the likelihood ratio:

        P(E | H1) / P(E | H0)

        Better, take the log of that and call it “weight of evidence”. (base 2: bits, base 10: bans). It does not matter one jot which hypothesis you put on the bottom of the fraction.

        Some people are surprised to hear that statistical tests can only tell you about the hypotheses you have bothered to test. If H1 = “2C of warming per doubling of CO2 plus Gaussian noise”, and H0 = “no warming plus Gaussian noise” then you have not learned anything about H3: “decadal oscillations plus Gaussian noise”, nor any of the variants of TCS and different noise spectra, etc.

        Of course all this can be handled, including estimating the unknown parameters, but it is a bit more complicated than p-value testing. Everybody, please go and read Jaynes.

        P-values have had their day and should be banned from journals, and from the classroom.
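        The weight-of-evidence calculation described above is easy to sketch; the two trend hypotheses, the noise level, and the synthetic data are all assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Ten years of synthetic annual anomalies with a 0.02 C/yr trend
t = np.arange(10.0)
y = 0.02 * t + rng.normal(0, 0.1, t.size)

def loglik(y, mu, sigma=0.1):
    # Gaussian log-likelihood of the data under a given mean series
    return stats.norm.logpdf(y, loc=mu, scale=sigma).sum()

# H1: warming at 0.02 C/yr; H0: no warming.
# Weight of evidence = log10 of the likelihood ratio (units: bans)
w = (loglik(y, 0.02 * t) - loglik(y, np.zeros_like(t))) / np.log(10)
print(f"weight of evidence for H1 over H0: {w:.2f} bans")
```

As the comment says, swapping H0 and H1 merely flips the sign, and the calculation tells you nothing about any hypothesis you did not bother to evaluate.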

      • Jeff,

        The null hypothesis is not so easy to formulate, and the questioning of the hiatus is a perfect example.

        I presume that you mean that we can’t easily define an appropriate null, and I would agree. The point that Dikran is making is that if you select your null to be “no trend” and find that you can’t reject this null, then you cannot claim that there has been no warming. The best you can claim is that we can’t reject that there has been no warming.

      • In fact, this post makes a good point about the relevance of statistical significance.

      • dikranmarsupial can’t prove there’s been any warming. He’s upset because of it. Once again, he like other warmists attempts to split millihairs.

      • dikranmarsupial

        As ATTP suggests, the point I am making is that you can’t use the traditional quasi-Fisherian null hypothesis statistical test if you are arguing FOR the null hypothesis, at least not without properly dealing with the issue of statistical power. While the NHST has its problems (c.f. the “null ritual”), it does have its uses in enforcing a degree of self-skepticism, provided you are arguing against the null hypothesis.

        Now if someone can show that there has been a reduction in the underlying rate of warming, my response would be “good, tell me more”, however none of Prof. Curry’s three criteria are a basis for that from a statistical perspective.

      • Gareth:

        P-values have had their day and should be banned from journals, and from the classroom.

        I agree with your comments and note that banning p values has already begun (e.g., Basic and Applied Social Psychology journal). As well, the abuse of statistics in medical research is finally being openly acknowledged. http://rsos.royalsocietypublishing.org/content/1/3/140216

        There are obvious differences between climatology and, say, biomedical research. But all people have personal biases and interests/goals, regardless of the field of research. Statistical analysis is presented as a filter to protect against (or at least limit the influence of) such bias. If it is failing to do so, the current approach should be reconsidered.

      • Re: … and Then … 11/7/2015 @ 7:11 am said,

        The point that Dikran is making is that if you select your null to be “no trend” and find that you can’t reject this null, then you cannot claim that there has been no warming.

        Actually, that isn’t what dikranmarsupial (aka Dikran, presumably) said, and regardless, it’s no improvement. The mere existence of a hiatus is a matter of definitions and fact. It requires no hypothesis, null or experimental (cause & effect). Define hiatuses, choose (or cherry-pick) a data set, and then use MMSE or another criterion to measure the slope. QED.

        Some posters here have worried themselves over the meaning of cherry picking. Regardless of its etiology, it means to pick the data that supports the preconceived hypothesis. It violates principles of science.

        Statistical models are similar to scientific models, the difference lying in Cause & Effect. Statistical models silently presume that whatever C&E relationships in effect during data collection persist for the future. Scientific models postulate Cause & Effect relationships and then test whether they make predictions better than chance.

        A competent statistical hypothesis about the apparent hiatus is that it exists by chance during a time when all the C&E relationships are extant — present and continuing unchanged. To decide that, one needs the probability distribution of hiatuses and the weight to give to them. Those things are done objectively by data reduction of the data record, as described above (11/6, 2:24 pm), and science dictates using the entire data record or establishing an objective criterion for using a partial data record.

        Publish or Perish is easy; predicting the future is hard. Climatology is easy; science is hard.

      • Writes Dr. Glassman:

        Publish or Perish is easy; predicting the future is hard. Climatology is easy; science is hard.

        Pardon me for this “mirabile dictu” moment, but I’ve been dining out – so to speak – on your “Conjecture, Hypothesis, Theory, Law: The Basis of Rational Argument” (2007) for some several years now.

        Your essay provides a pikestaff-plain approach to “An understanding of the validity of science and scientific criticism,” and its value has not diminished in the interval since you’d published it.

        I’ve yet to encounter a warmista who doesn’t retreat in shock and dismay from your concluding paragraphs.

        “When in the course of human events somebody does something that puts somebody else to the trouble of adjusting the numb routine of his life, the adjustee is resentful. The richer he is and the more satisfactory he considers his life, the more resentful he is at any change, however minute. And of all the changes which offend people, changes which require them to think are most disliked.”

        — Murray Leinster, The Pirates of Ersatz, March 1959

      • Jeff,

        Actually, that isn’t what dikranmarsupial (aka Dikran, presumably) said, and regardless, it’s no improvement. The mere existence of a hiatus is a matter of definitions and fact. It requires no hypothesis, null or experimental (cause & effect). Define hiatuses, choose (or cherry pick) a data set, and then use mmse or other criterion to measure the slope. QED.

        Ummm, no.

    • Regarding your excellent question about statistical tests for rates of change of warming… A long-term plot of a global mean surface temperature anomaly such as HadCRUT4 indicates the anomaly has risen by about 0.8C since 1850, and can have a data scatter of 0.4C within a 5-year period. But without getting into the statistical weeds of various null hypotheses, your eyeballs can detect in the temperature plot vs. time that the rate of warming has varied significantly over any period you select, at both positive and negative rates over periods of 20–30 years. As long as current temperature trends continue, the long-term warming rate since 1850 will continue to decrease.

      But atmospheric CO2 levels have increased monotonically over the 165-year period since 1850, by about 40%. Because of the obvious non-CO2 natural and cyclical phenomena affecting temperatures over the entire period, I recommend that the natural cycles in the data first be identified, and that observational CO2 sensitivity then be determined from periods corresponding to an integral number of temperature cycles in the dataset. This minimizes the contribution to the sensitivity estimate of natural temperature cycles that are not well enough understood to justify such alarm about CO2.

      The obvious natural cycles in the HadCRUT4 data have periods of 60-70 years, but there could be longer term natural temperature cycles that affected the paleo-temperature reconstruction (Ljungqvist 2010) during the 1850 years prior to the start of the HadCRUT4 thermometer data in 1850, and that may still be causing some natural warming. A natural cycle of about a 1000 year period would be consistent with temperature extremes associated with the Roman Warm Period, Medieval Warm Period and the Little Ice Age. If this temperature cycle is still intact, it should peak out in about 2100 and is very difficult to confirm or reject in the HadCRUT4 data since 1850. However, its presence (or not) in the HadCRUT4 data can have a significant effect on the determination of CO2 climate sensitivity from observations of temperature and historical values of atmospheric GHG and aerosol levels.

      A conservative CO2 climate sensitivity estimate can be obtained by assuming the long-term temperature cycle established by paleo-data mysteriously disappeared in 1850, although cyclical phenomena don’t typically vanish at the point of highest momentum in the cycle, the phase that produces its most rapid long-term temperature change.

      The most important question in the debate about government tinkering with temperature datasets, or about whether there is a “hiatus” or not, is how much physical evidence we have for the high CO2 climate sensitivity values the EPA is using to establish CO2 emissions regulations with potentially severe economic impact to our nation.

    • And yet, there is a pause. hahaha

    • “In statistical hypothesis tests you take H0 (the null hypothesis) to be the hypothesis that you need to nullify in order to continue with your alternate, or research hypothesis. I.e. it is normally the opposite (in some appropriate sense) of your research hypothesis. ”

      If our research hypothesis is stipulated by the model means, then that hypothesis would seem to be: warming continues at 0.2C per decade.

      Not sure how one forms the appropriate “opposite” of that as a null; or, in other terms, what we would have to nullify to continue on course with the science as we see it.

      I suppose one could say that 0.2C is predicated on an ECS of 3C (the mean of the models), and perhaps that 0.1C of warming or less would indicate a response somewhere south of 1.5C ECS, which would be a challenge to the science as we know it (all ballpark thinking here).

      So we’d have to say that the null should be something like: trend < 0.1C/decade.

      Let’s put it this way: what trend would we have to see (< 0.1C per decade?) to give us pause about proceeding with the research?

      • dikranmarsupial

        If you want to test if the trend is consistent with the models, the easiest test is to see if the observed trend lies in the spread of the (appropriately baselined) model runs. Although if they lie near or outside the spread, it might be because the models are running a bit warm, or they underestimate internal climate variability, or a bit of both (from what I understand “a bit of both” is the most likely).

        "What trend would we have to see (< 0.1C per decade?) to give us pause about proceeding with the research?"

        Judging from Easterling and Wehner's paper, over a period as short as a decade the threshold would probably be rather negative, as the effects of e.g. ENSO are large compared to 0.2 deg per decade on that timescale. I'm not sure tests based on such a short timescale are going to be very useful, as the effect size needed would be too large to be reasonable.

        I'd say a better test would be to see if there was statistically significant evidence for a breakpoint (the autocorrelation makes this difficult for monthly data, last time I looked at it for annual data the evidence was not significant).
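        A breakpoint test of the kind suggested can be sketched as a Chow test with a known candidate break year; the synthetic series below has no true break, and the break year, trend, and noise level are assumptions for illustration:

```python
import numpy as np
from scipy import stats

def chow_test(t, y, split):
    """Chow test for a change in trend at a known break point:
    H0 = one straight line fits the whole series (no change in rate)."""
    def rss(ti, yi):
        X = np.column_stack([np.ones_like(ti), ti])
        _, res, *_ = np.linalg.lstsq(X, yi, rcond=None)
        return res[0] if res.size else 0.0
    m = t < split
    rss_pooled = rss(t, y)
    rss_split = rss(t[m], y[m]) + rss(t[~m], y[~m])
    k, n = 2, t.size
    F = ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
    return F, stats.f.sf(F, k, n - 2 * k)

rng = np.random.default_rng(3)
years = np.arange(1970, 2015).astype(float)
y = 0.017 * (years - 1970) + rng.normal(0, 0.09, years.size)  # no break
F, p = chow_test(years, y, split=1998.0)
print(f"F = {F:.2f}, p = {p:.3f}")
```

With autocorrelated monthly data the F distribution no longer applies directly, which is the difficulty mentioned above; annual means reduce the problem but do not remove it.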

      • Put another way: a < 0.1C/decade trend is the point you stop freaking out my kids in school about the earth going to hell in a handbasket, plz.

      • dikranmarsupial

        I forgot to mention that the forcings used in the model runs may also be a bit different from those that actually happened as well, so that needs to be accounted for (with a caveat if nothing else).

      • Dikran
        “If you want to test if the trend is consistent with the models, the easiest test is to see if the observed trend lies in the spread of the (appropriately baselined) model runs.”

        An alternative might be to test the make-up of the ensemble of models to look for models that look unrealistic (statistically) in comparison to observations, and cut these from the ensemble to give a new spread of future projections. There seems to be a huge objection to doing this, based on what looks to me like rather flimsy subjective reasoning. The caveat would still have to be that the new ensemble is on the hot side, but less so.

      • If you want to test if the trend is consistent with the models, the easiest test is to see if the observed trend lies in the spread of the (appropriately baselined) model runs. [mwg bold]

        And the appropriate metric [interval] is? :O)

      • dikran

        ‘If you want to test if the trend is consistent with the models, the easiest test is to see if the observed trend lies in the spread of the (appropriately baselined) model runs.”

        That cannot be right, because adding a bad model, for example one that predicted cooling, would give you a pass.

      • dikranmarsupial

        human1ty1st My objection to that would be that the spread of the ensemble includes both the stochastic uncertainty in the projection (i.e. simulated weather noise) and also the uncertainty in the model physics. I don’t think that second source of uncertainty really goes away if you cut the models that don’t fit the single realization of the real world that we can observe over a short period of time. Note that if we were to do the same exercise in the run up to 1998 we would be throwing away the models with the least warming that might be giving the better match to observed climate now. This is especially true if the models underestimate the stochastic uncertainty. It would have to be done with very great care, to say the least.

      • dikranmarsupial

        “That cannot be right. because adding a bad model, for example, one that predicted cooling, would give you a pass.”

        If you are testing for consistency, a “pass” doesn’t mean anything much, for much the same reason that a lack of statistical significance doesn’t mean anything if you are arguing for H0. The reason a lack of consistency is such a powerful finding is exactly because a “pass” is such a low hurdle. Frequentist approaches tend to be asymmetric in this way.

      • Sure, dikran, but the general rule seems to be that models with multiple runs tend to have all their runs clustered in only a limited part of the ensemble (a very hot model is always very hot, even though there is variation from run to run). And as SM alludes, what ends up making obs and models match in your approach is the single model or two that allows obs to sit within the model spread.

        Personally, I think there is room in science to make multiple different analyses of data sets to answer different interesting questions. I think the objection to this is that it throws up answers that complicate the policy decisions.

      • dikranmarsupial: If you want to test if the trend is consistent with the models, the easiest test is to see if the observed trend lies in the spread of the (appropriately baselined) model runs.

        Why exactly is that better than taking the mean of the model outputs? Why not be a Bayesian and take the distribution of trends as the prior, and compute the posterior distribution? In this case, the result is not a lot different from computing confidence limits on the model mean trend and the data estimated trend, and noting that they have very little overlap.

        Not saying you are wrong, I am just interested in your thoughts on this.

      • dikranmarsupial

        “Why exactly is that better than taking the mean of the model outputs? ”

        Because the mean of the model outputs is an estimate of only the forced response of the climate system, whereas the observations are the result of the forced and unforced responses (i.e. there is internal variability as well). So you would only expect the two to be the same if the effects of internal climate variability were approximately zero. The forced response is the best point estimate, but essentially only if you expect the internal variability to have a symmetric distribution. The spread of the model runs shows what is plausible range of effects of internal climate variability around the forced response.

      • I am getting a little wary here with the mixing of means, observations, and point estimates.

      • mw..

        I need to think about this. Perhaps we can convince dikran to write something more expository on the topic… banter back and forth on blogs isn’t very fruitful.

        Matthew also said some thoughtful things… time to shut up and read more.

      • Mosh…I agree. This has the potential to be interesting and useful.

      • dikranmarsupial: So you would only expect the two to be the same if the effects of internal climate variability were approximately zero.

        That has in fact been assumed by some people. Though not as much recently.

        But how much of the model spread actually reflects differences in the treatments of natural variability? It looks to me as though some weighting of the model outputs is needed (as in the Bayesian approaches to testing non-point null hypotheses, like the Bayes Factors approach of Kass and Raftery), and it isn’t sufficient for the data to fall barely within the range of model outputs (as it does now.)

        Since I mentioned it, I should probably be the one to do it.

      • davideisenstadt

        Mosh:
        you write:
        “That cannot be right. because adding a bad model, for example, one that predicted cooling, would give you a pass.”
        Perhaps unawares, you’ve stumbled onto one of the reasons it is unacceptable to compute means and CIs from ensembles of multiple runs of multiple models: these aren’t independent observations, and they should never be aggregated; to do so is to commit malfeasance.
        I really suggest you go and read some of RGB@duke’s comments regarding the propriety of combining models’ results into some sort of ensemble mean.
        It’s a fool’s errand.

      • Without adding my own comments, I think these quotes tell important things about the limitations of models:
        (Ref. Contribution from Working Group I to the fifth assessment report by IPCC ; my emphasis )

        «When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years. Biases can be largely removed using empirical techniques a posteriori. The bias correction or adjustment linearly corrects for model drift. The approach assumes that the model bias is stable over the prediction period (from 1960 onward in the CMIP5 experiment). This might not be the case if, for instance, the predicted temperature trend differs from the observed trend. It is important to note that the systematic errors illustrated here are common to both decadal prediction systems and climate-change projections. The bias adjustment itself is another important source of uncertainty in climate predictions. There may be nonlinear relationships between the mean state and the anomalies, that are neglected in linear bias adjustment techniques.»
        (Ref. 11.2.3 Prediction Quality; 11.2.3.1 Decadal Prediction Experiments )

        “The climate change projections in this report are based on ensembles of climate models. The ensemble mean is a useful quantity to characterize the average response to external forcings, but does not convey any information on the robustness of this response across models, its uncertainty and/or likelihood or its magnitude relative to unforced climate variability.”
        (Ref: Box 12.1 | Methods to Quantify Model Agreement in Maps)

      • “Warming continues at .2C per decade.”

        Wrong.

        The thirty-year trend in the land/ocean index has never reached 0.2C per decade, though it came very close for ~1975 through 2004:

        The backward-looking trends to date peaked at about 0.17C per decade for 1974 to present and have declined significantly since:

        If you believe in AGW, you expect this because annual rates of RF increase peaked in the late 1970s and have declined since.
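The trailing thirty-year trends referred to above are straightforward to compute. A minimal sketch in Python, using a synthetic anomaly series rather than any real dataset (the 0.015 C/yr underlying trend and the 60-year wobble are purely illustrative assumptions):

```python
import numpy as np

def trailing_trends(years, anoms, window=30):
    """OLS slope, in deg C per decade, over each trailing `window`-year span."""
    out = {}
    for end in range(window - 1, len(years)):
        t = years[end - window + 1 : end + 1]
        y = anoms[end - window + 1 : end + 1]
        slope = np.polyfit(t, y, 1)[0]        # deg C per year
        out[int(years[end])] = slope * 10.0   # convert to deg C per decade
    return out

# Synthetic series: a 0.015 C/yr underlying trend plus a 60-year wobble
yrs = np.arange(1950, 2015)
series = 0.015 * (yrs - 1950) + 0.05 * np.sin(2 * np.pi * (yrs - 1950) / 60.0)
trends = trailing_trends(yrs, series)
```

Plotting `trends` against its keys gives exactly the kind of backward-looking trend curve described in the comment; the wobble makes the trailing trend wander around the underlying 0.15 C/decade rate.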

      • Steven Mosher: “Warming continues at .2C per decade.”

        No it doesn’t.

        Stop making stuff up.

      • “I really suggest you go and read some of RGB@duke’s comments regarding the propriety of combining models’ results into some sort of ensemble mean.
        Its a fool’s errand.”

        I have read his stuff.
        It is beside the point.

    • The no-change null hypothesis has been tested by tamino. Conclusion:

      “not only is there a lack of valid evidence of a slowdown, it’s nowhere near even remotely being close”

      https://tamino.wordpress.com/2014/12/09/is-earths-temperature-about-to-soar/

    • dikranmarsupial: So, has anyone performed a test for a change in the rate of warming (with “no change” as H0)?

      There are difficulties in establishing a legitimate model for the random variation under the null hypothesis, and for representing the alternative hypothesis. Consider the fit of a model consisting of a linear trend plus a sine curve with independent noise: according to the fitted model, in which the sine term is statistically significant if you use all the data since 1880, the “trend” now is different from what it was 30 years ago. But some have written that the apparent sine term is really natural variation, and the only “trend” is the linear component, which is approximately constant over the last 100+ years.

      People have also tried “switching regressions” to estimate the time of the change and its statistical significance: that result also depends on what is known about the background variation.

      Much, much has been written along these lines, and I have only hinted at the total.
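The “linear trend plus sine curve” fit mentioned above is an ordinary least-squares problem once the period is fixed. A minimal sketch on a synthetic noise-free series (the 60-year period and the coefficients are illustrative assumptions, not a claim about the real record):

```python
import numpy as np

def fit_trend_plus_sine(t, y, period=60.0):
    """OLS fit of y ~ a + b*t + c*sin(2*pi*t/period) + d*cos(2*pi*t/period)."""
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # (a, b, c, d); b is the linear trend in units of y per unit t

# Synthetic annual series with a known linear component and 60-yr oscillation
t = np.arange(1880, 2015, dtype=float)
y = 0.008 * (t - 1880) + 0.1 * np.sin(2.0 * np.pi * t / 60.0)
a, b, c, d = fit_trend_plus_sine(t, y)
# On noise-free data the fit recovers b ~ 0.008 and an oscillation
# amplitude hypot(c, d) ~ 0.1
```

With real data the question raised in the comment remains: whether a recovered sine term reflects genuine natural variation or is an artifact of the assumed noise model.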

      • dikranmarsupial

        matthewrmarler – this is pretty much my point, people have tried “switching regression” to look at this problem and as far as I can see, the only way to get evidence of a significant breakpoint is to use statistical assumptions that are not valid (e.g. that monthly GMSTs are uncorrelated). However, if someone could provide such an analysis, that *would* be a much better basis for arguing that the apparent hiatus was more than just an artifact of internal variability.
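A “switching regression” of the sort described can be sketched as a brute-force search over candidate breakpoints, fitting separate lines on each side; the genuinely hard part, as noted above, is the significance test under autocorrelated noise, which this sketch deliberately omits. The data are synthetic, and the 1998 slope change is an illustrative assumption:

```python
import numpy as np

def best_breakpoint(t, y, min_seg=10):
    """Fit separate straight lines before/after each candidate breakpoint;
    return (index, SSE) for the split minimizing total squared residuals."""
    best_k, best_sse = None, np.inf
    for k in range(min_seg, len(t) - min_seg):
        sse = 0.0
        for tt, yy in ((t[:k], y[:k]), (t[k:], y[k:])):
            coef = np.polyfit(tt, yy, 1)
            sse += np.sum((yy - np.polyval(coef, tt)) ** 2)
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k, best_sse

# Synthetic series: slope 0.02 C/yr before 1998, 0.005 C/yr after
t = np.arange(1960, 2015, dtype=float)
y = np.where(t < 1998, 0.02 * (t - 1960), 0.02 * 38.0 + 0.005 * (t - 1998))
k, sse = best_breakpoint(t, y)
# For this noise-free example the best split lands at (or adjacent to) 1998
```

On a noisy, autocorrelated series the minimum-SSE breakpoint is easy to find but hard to defend, which is the whole point of the exchange above.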

    • Wow, dikran, where did you get the idea that climate science was trying to prove that there was a “pause”? The null hypothesis approach is supposed to make it difficult to prove your conjecture. Pretending that no effect, or H0, is what they are trying to prove simply reverses the whole burden of proof. The skeptics are not out to prove that there is a pause; they are merely, perhaps gleefully, pointing out, to the discomfiture of the warmist community, that the whole theory of global warming seems to be at odds with the data. They are saying H1 has not been proved. Or would you hold that the minute someone notes that the test seems to be failing, that becomes a new null hypothesis? Of course, were that true, then those who thought the new H1 was failing would have to turn around and make the old H0 the new H1, and round and round we go.

      • dikranmarsupial

        “, they are merely, perhaps gleefully, pointing out, to the discomfiture of the warmist community”

        It would be better if people gave up this sort of childish partisan behavior and actually engaged with the science. I’m glad that not all skeptics take that view.

      • dikranmarsupial: “It would be better if people gave up this sort of childish partisan behavior and actually engaged with the science.”

        Go on then.

        Show us how it’s done.

      • dikranmarsupial

        My posts have been on the technical aspects of statistical hypothesis testing, I have been engaging with the science.

      • My posts have been on the technical aspects of statistical hypothesis testing, I have been engaging with the science.

        Indeed they have.

        Earlier,

        It would be better if people gave up this sort of childish partisan behavior and actually engaged with the science.

        A reasonable observation.

      • Dikran and mwg +10. Focus on the words (science or not) and less on the people is the way to go!

      • An emotion of happiness that we won’t need a one-world government to “fight” “climate change” and that we will be able to continue the comfortable life fossil fuels have enabled is entirely appropriate. One can have emotions and do science. In fact, every scientist has emotions, and those sometimes blind them to the facts. It is possible that the emotion of fear (of losing funding) warps the interpretation of climate data.

      • Let’s see: oceans cover about 70% of the surface, at roughly 417 Wm-2, plus roughly 120 Wm-2 latent and convective, bringing that up to 537 Wm-2, with a specific heat capacity 3 to 4 times greater than land. Poleward transfer of that energy is around 120 Wm-2, so if you are interpolating or kriging land/ice areas you are conflating poleward transfer with actual total energy. When you struggle to eke out a tenth of a degree of “global” warming in regions that are 25% significant, if that, with respect to energy, the founders of thermodynamics laugh at your ignorance.

        But do carry on, it is humorous.

      • The ginger SkS Kid says: “It would be better if people gave up this sort of childish partisan behavior and actually engaged with the science.”

        We wonder if he has similarly lectured little dana nutticelli and the rest of that nasty school of climate barracudas.

      • dikranmarsupial

        I think this demonstrates the point has been reached where further discussion of the science is unlikely, so I’ll leave it there.

      • Well, you can always go over to nutticelli’s echo chamber at the Guardian and have a good consensus climate science dogma discussion with the rest of nutticelli’s minions. You can let your hair down there and get nasty with the heavily outnumbered skeptical contingent.

  12. I think applying a 90% confidence level should be as bothersome as the data adjustments, particularly since it can be selected post-analysis to support publication.

  13. If you can’t say for sure that it is not warming, or that it is warming, that surely indicates a pause in temperatures.

  14. That there is natural variability and pauses should not be surprising.
    I don’t believe the 17 years is significant because, presumably, the 1910-1945 warming was natural, demonstrating at least 35 years of variability comparable to what we expect from global warming; recent relative stasis does not preclude AGW.

    The significance of the pause or slowdown is the IPCC AR4 predictions:
    “A temperature rise of about 0.2 °C per decade is projected for the next two decades for all SRES scenarios.”

    It is an indicator of the ideology, and the lack of understanding, that the ‘expert’ body actually embodies. The AR5 is conspicuously missing any actual predictions of rates (the failure of AR4 made sure of that).

  15. “The source of the difference between NCDC and the other data sets lies in its analysis of the ocean data, which will be the subject of a follow on post.”

    Err that is not the WHOLE story.

    The WHOLE story involves both DATA and METHOD

    basically, NCDC grid cells at 5 degrees are too small. This leads to a lack of coverage at the South Pole, which is COOLING. Antarctica may experience its coldest year on record in 2015.

  16. Judy – As Ryan Maue wrote

    “ECMWF uses 4-D variational data assimilation to optimally combine all sources of data include satellites, surface stations, balloons, aircraft, ships, etc.”

    Thus, if surface stations are used, this reanalysis is not completely independent of the surface temperature trend analyses.

    Also, with respect to http://euanmearns.com/the-diverging-surface-thermometer-and-satellite-temperature-records-again/

    see

    Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841. http://pielkeclimatesci.wordpress.com/files/2009/11/r-345.pdf

    Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2010: Correction to: “An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841”, J. Geophys. Res., 115, D1, doi:10.1029/2009JD013655. http://pielkeclimatesci.wordpress.com/files/2010/03/r-345a.pdf

    • Hi Dr Pielke,

      Always great to see you commenting here. However, I don’t quite see your point. Obviously the ECMWF is not going to be independent of the surface temperature, since it is a reanalysis of all the various measuring methods. That’s what it’s for, isn’t it?

  17. stevenreincarnated

    Has anyone kept track of all the data sets the consensus no longer believes are valid yet were quite robust previously?

    • I only remember hadCrappy3.

    • stevenreincarnated

      C&W only lasted a few months as the greatest data set ever before it failed to show 2014 as the warmest year ever.

      • Got what it deserved.

      • Near-surface and lower-tropospheric warming of the Arctic over the past 35 years is examined for several datasets. The new estimate for the near surface reported by Cowtan and Way in 2014 agrees reasonably well with the ERA-Interim reanalysis for this region. Both provide global averages with a little more warming over recent years than indicated by the widely used HadCRUT4 dataset, which has sparse coverage of the high Arctic. ERA-Interim is more sensitive than the Cowtan and Way estimate to the state of the underlying Arctic Ocean.

        Observational coverage of the Arctic varies considerably over the period. Surface air-temperature data of identified types are generally fitted well by ERA-Interim, especially data from ice stations, which appear of excellent quality. ERA-Interim nevertheless has a warm wintertime bias over sea-ice. Mean fits vary in magnitude as coverage varies, but their overall changes are much smaller than analysed temperature changes. This is also largely the case for fits to data for the free troposphere. Much of the information on trends and low-frequency variability provided by ERA-Interim comes from its background forecast, which carries forward information assimilated from a rich variety of earlier observations, rather than from its analysis of surface air-temperature observations.

    • stevenreincarnated

      The water vapor trend made it into the IPCC report when it was going up. There was a caveat in small print that it might not be accurate, but the graph was fairly prominent. The most recent IPCC report has a graph saying we were really, really wrong, but if you look at the small print, it may be the data that is wrong?

  18. And then the pause became a “warming hiatus”. And nobody can know what that is. Judith just fills in the criteria that best serve her political purposes.

    Why not discuss all the earlier pauses and “warming hiatuses”?

    • Because the most recent period is the one for which we have the best data, and the greatest increase in anthro GHG. The recent hiatus is important because it tests the hypothesis that most of the warming since the industrial revolution is manmade.

      That isn’t to say previous hiatuses aren’t interesting as well.

  19. Why don’t we quit obsessing over an obviously limited data length ending in a rise due to a strong ongoing El Nino? At least wait for the end of the El Nino, and the following drop due to a very likely La Nina, and then average. This would almost certainly result in a >20-year clear lack of upward trend.

    • The strong rise started long, long before the 2015-2016(?) El Nino, and the strong part, in terms of boosting the GMST, of the current El Nino has barely gotten started.

      • in terms of boosting the GMST, of the current EL Nino has barely gotten started.

        Each ENSO seems to differ, but the 97/98 event peaked in GMST in January.
        So, you’ve got 3 or 4 months before cooling back down.

    • But, but, Leonard, that would mean waiting for a year or two, and the conference is now.

    • Absolutely.

      The hiatus is a bit ambiguous; at the moment we see that the rate of warming is surprisingly non-accelerating and non-alarming. Returning in five years would be sensible, but Fuller’s Klimate Krazies won’t give us the time.

    • But, but, but … it doesn’t matter whether you look at El Nino years, La Nina years or neutral years .. Predictable response from fake sceptics banking on the new forthcoming pause.

  20. Thank you, Professor Curry, for your efforts to promote real climatology instead of group-think climatology.

  21. There is one more dataset for troposphere temps after 2002: GPS radio occultation observations. These data are free of calibration issues, truly global, and they seem to confirm RSS. See: http://www.researchgate.net/profile/Peter_Thejll/publication/270597966_Recent_global_warming_hiatus_dominated_by_low_latitude_temperature_trends_in_surface_and_troposphere_data/links/54be146f0cf218d4a16a4c01.pdf , fig. 1. Here they used Version 5 of UAH, I think. The Version 6 beta is almost identical to RSS when it comes to trends.

  22. Dr Curry

    Yes thanks for the hard work and interesting blog.

    Always fascinating and informative, even with some of the nasty attacks, but you persevere in a mature, professional manner.

    An example to us all.
    Scott

  23. Keep in mind that what we are here calling climate science is actually the world’s largest environmental impact assessment and these are always conjectural at best.

  24. The University of Maine (UM) Climate Change Institute (CCI) provides the Global Forecast System (GFS) based CFSR data and has final monthly global temperature estimates through June 2015 and daily through September 2015. I compiled monthly averages from the August and September daily data to make the graphs below through September 2015. I also compared their output with WxBell and it matches closely most of the time for 2014-2015 (not shown below).
    UM CCI GFS vs NOAA NCEI monthly global temperature anomaly estimates for 1979-2015:

    Notice that for much of the period the UM CCI and NCEI estimates compare well, but for some reason NCEI estimates are slightly lower in the early 1980s and for 2002-2008, and substantially higher since 2010. The NCEI high offset since 2010 is especially disturbing.

    The UM CCI GFS trend for the 21st century so far (2001-2015) is -1.68C per 100 years:

    The UAH satellite lower tropospheric trend for the 21st century so far is +0.06C per 100 years:

    The BEST trend for the 21st century so far is +0.72C per 100 years:

    The NOAA NCEI trend for the 21st century so far is +1.08C per 100 years:

    The large discrepancy between the downward GFS based UM CCI trend versus the highly adjusted upward BEST and NCEI trend since 2001 raises a red flag for me. It appears that adjustments are making the BEST and NCEI estimates less accurate, which is certainly not good and may be greatly misleading. The UAH flat trend is actually somewhat of a compromise. I tend to favor the GFS based global surface temperature estimates because I suspect the input data are more comprehensive and have better coverage than what goes into BEST and NCEI, although I have not studied this in detail. Would anyone like to use only the same data sources that go into the BEST or NCEI estimates to drive the weather forecast models? I can’t imagine that would work as well as the GFS approach.

    More details here, including links to the above global temperature estimate data sets:
    https://oz4caster.wordpress.com/
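The per-century trend figures quoted above are just OLS slopes rescaled. For anyone wanting to reproduce such numbers from one of the linked monthly series, a minimal sketch (the series here is synthetic, with a built-in 1.0 C/century trend; it is not any of the datasets above):

```python
import numpy as np

def trend_per_century(dates, anoms):
    """OLS slope of an anomaly series in deg C per 100 years.
    `dates` are decimal years (e.g. 2001.042 for mid-January 2001)."""
    slope = np.polyfit(dates, anoms, 1)[0]   # deg C per year
    return slope * 100.0

# Synthetic 2001-2015 monthly anomalies with a known 0.01 C/yr trend
months = 2001.0 + (np.arange(15 * 12) + 0.5) / 12.0
anoms = 0.01 * (months - 2001.0) + 0.2
print(round(trend_per_century(months, anoms), 2))  # → 1.0
```

Note that trends over periods this short are very sensitive to the choice of start and end points, which is part of why the datasets compared above disagree so visibly.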

    • Thanks for these links

    • UMCCI didn’t get the memo. Measuring temperature is difficult. If your trend points down, you fix it by comparing it to the CO2 curve and locating the breakpoints where it stops correlating.

      Really, there is no trend pointing down that would be called robust before it’s fixed from its obvious biases. You are spreading discredited misinformation etc etc.

      Do I need to add /sarc? No, I don’t really know what to think.

    • ” I tend to favor the GFS based global surface temperature estimates because I suspect the input data are more comprehensive and have better coverage than what goes into BEST and NCEI, although I have not studied this in detail.”

      if you want to test GFS try something simple.

      Go get CRN data. triple redundant sensors at pristine locations.

      about 200 locations.

      Then go compare what GFS does to that data …

      CRN is used as INPUT… watch what GFS does to this pristine data

      go ahead

      https://madis.noaa.gov/madis_datasets.shtml

      here is some of the other data that gets ingested

      https://madis.noaa.gov/madis_rwis.shtml

      https://madis.noaa.gov/madis_cwop.shtml

      Then you can do simple tests.

      The input data says it was 75F… look at what GFS says after processing the data.

      • Steven, thanks for your comment. By the way, I enjoyed seeing the work on air quality in China on the BEST web site, since I have worked in that field for many years. I have looked a bit at the USCRN and really like what has been done. I wish there was an analogous GCRN that included measurements in the oceans. Maybe one of these days.

        I have not researched what goes into the GFS initialization, nor how it is ingested and processed. Are you trying to tell me that this system does not do a good job of utilizing the data inputs used for the global weather model forecast runs four times each day? If so, what is the problem?

    • oz4caster, thank you for the links.

    • UAH is still inexplicably showing La Nada as of October 2015. Someone needs to look at why, considering how 1998 was overcooked by the same dataset. This explains its flatness. Other surface warm areas are reproduced, but not the big one.

      • Jim D, I have noticed that too. However, if you compare the UAH and Multivariate ENSO Index (MEI), it shows several months lag time in the UAH response to strong El Nino events. So, it will be interesting to see what happens over the next several months.

      • Jim D:

        UAH is still inexplicable showing La Nada as of October 2015. Someone needs to look at why, considering how 1998 was overcooked by the same dataset.

        1998 was overlooked?

      • overCooked. Hence the contradiction. Who knows how they do this. Their fields look very smooth and perhaps their satellites just keep missing that area. There should be a big orange spot on the east Pacific equator. GISTEMP is not out yet for October, but that will show it.

      • Jim D:

        Oh, I “C”. Thx.

      • Or heat simply isn’t being transferred to the troposphere from the El Nino like before. How do El Nino index progression and satellite troposphere temperature by time of year compare for these El Ninos?

      • How do winds, sunshine, and rain compare during the two periods?

        We are likely missing the mechanism for big latent heat transfer from the ocean.

      • Probably why this El Nino seems so strong despite the IPWP not being charged by a prior La Nina like 1998.

      • Must be CO2 is preventing evaporation!

      • The current El Nino is its own thing, but it is just getting started, perhaps like the 1997-1998 El Nino was at about the same time.

      • JCH, I agree. But Jim D was suggesting that there is a big difference in the satellite (troposphere) between the two at this stage. Also, total heat available might be different because the 97/98 event was charged by a La Nina burst. The northern Pacific may also affect wind and clouds during this event, affecting heat transfer from the ocean and forcing during the event.

        Polar vortex/jet stream may also come into play this time, affecting albedo and heat distribution (heat moved to the Arctic atmosphere, where it radiates away faster).

      • The latest ONI is in at 1.7. Models seem to be backing off on it. In some models, it was projected to peak at over 3.0; now high 2s.

      • Jim D, see JCH’s plot. UAH didn’t show a big jump until the winter of 1998.

        Notice 1996-1997.83 and compare to 2014-2015.83: http://www.woodfortrees.org/plot/uah/from:1997.75/to:1999/plot/uah/from:1996/to:1997.83/plot/uah/from:2014/to:2015.83

      • In previous months, all the warm spots in GISTEMP had counterparts in UAH except that one which is one of the biggest. We don’t have October yet, but we will see that again when we do.

      • Jim, I recommend looking at precipitation and cloud cover. Also air vs. sea temperatures, absolute rather than anomaly. Humidity.

        My guess is that heat transfer from the ocean to atmosphere isn’t happening.

  25. Does anyone know why cosine weighting is applied to surface temps? This makes no sense to me. It makes cooling/warming trends, if any, at the poles disappear.

    Am I understanding this correctly? The supplemental seems to have many iffy procedures in it. For example, I don’t see where the uncertainty of the actual measurements comes into play. The whole formula for error estimation doesn’t make sense. It seems to be applying statistics that would be used for multiple measurements of the same spot in a short time interval. For example, if I took 5 temp measurements of one spot on my kitchen floor at one-minute intervals, then I would apply statistics similar to those applied in the supplemental. But, then again, I’m not a stats pro.

    • Never mind, the grids aren’t equal area, they are equal degrees squared.
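To make the resolution above concrete: on a regular latitude-longitude grid, cells shrink toward the poles in proportion to cos(latitude), so a global mean must down-weight polar cells accordingly; the weighting does not erase polar trends, it just stops them being over-counted. A minimal sketch with a made-up field (grid spacing and anomaly values are purely illustrative):

```python
import numpy as np

def global_mean(field, lats):
    """Area-weighted mean of a (lat, lon) field on a regular lat-lon grid:
    each latitude row is weighted by cos(latitude)."""
    w = np.cos(np.deg2rad(lats))          # relative cell area per latitude band
    return np.average(field.mean(axis=1), weights=w)

# 5-degree grid: 0 C everywhere except a +5 C anomaly poleward of 60 degrees
lats = np.arange(-87.5, 90.0, 5.0)        # 36 latitude band centers
lons = np.arange(2.5, 360.0, 5.0)         # 72 longitude band centers
field = np.zeros((lats.size, lons.size))
field[np.abs(lats) > 60.0, :] = 5.0

# Unweighted mean over-counts the poles (~1.67 C); weighting gives ~0.67 C
print(round(field.mean(), 2), round(global_mean(field, lats), 2))
```

The polar anomaly still shows up in the weighted mean, just with its true (small) share of the Earth’s surface area.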

    • So, the sat data was excluded because it introduced a “cooling bias.” I think I know where the real bias lies.

      From the supplemental:
      Previous versions of our SST analysis included satellite data, but it was dis-included in a later release because the satellite SSTs were not found to add appreciable value to a monthly analysis on a 2° grid, and they actually introduced a small but abrupt cool bias at the global scale starting in 1985 (30).

      • ‘So, the sat data was excluded because it introduced a “cooling bias.” I think I know where the real bias lies’

        Yes, it is human intervention.

  26. The ERA interim reanalysis bar chart that Judith reproduces in this post also strongly suggests that the claims, e.g. by Cowtan and Way, that the incomplete spatial coverage of HadCRUT4, particularly in the Arctic, has led to it underestimating global temperature trends over the last two or three decades are wrong.

    The section of the page (http://www.ecmwf.int/en/about/media-centre/news/2015/ecmwf-releases-global-reanalysis-data-2014-0) containing the chart says this about it:

    “The effect of data coverage on estimates of global warming.

    The bar graph below shows two estimates of yearly average surface temperature change both derived from ERA-Interim.

    The narrower, darker bars denote complete global averages, while the lighter, broader bars denote averages taken only over grid boxes which exclude most of the Arctic and Antarctic. Evidently the ranking of average temperatures depends on the data coverage, although the differences are within the bounds of uncertainty associated with the dataset.

    For the purpose of this illustration we used HadCRUT4 geographical coverage for each month to sample ERA-Interim estimates.”

    Close examination shows that there is only 1-2% difference between the complete and incomplete (HadCRUT4) coverage versions in their rise over 1979-2014. And the rise from any year in the range 1980 to 1986, to 2013 or 2014, is in all cases greater for the version with the HadCRUT4 coverage than that with complete global coverage.

    • Agreement from this paper: http://www.researchgate.net/profile/Peter_Thejll/publication/270597966_Recent_global_warming_hiatus_dominated_by_low_latitude_temperature_trends_in_surface_and_troposphere_data/links/54be146f0cf218d4a16a4c01.pdf
      “Omission of successively larger polar regions from the global-mean temperature calculations, in both tropospheric and surface data sets, shows that data gaps at high latitudes can not explain the observed differences between the hiatus and the pre-hiatus period. Instead, the dominating causes of the global temperature hiatus are found at low latitudes. The combined use of several independent data sets, representing completely different measurement techniques and sampling characteristics, strengthens the conclusions.”

    • Nic,
      You might want to look at Simmons & Poli and Saffioti et al. I don’t think their conclusion was

      claims, e.g. by Cowtan and Way, that the incomplete spatial coverage of HadCRUT4, particularly in the Arctic, has led to it underestimating global temperature trends over the last two or three decades are wrong.

    • Nic Lewis,
      There have been studies which looked directly into the issue (e.g. Simmons and Poli, 2015; Saffioti et al., 2015; Dodd et al., 2015) which support the approach employed by Cowtan and Way (2014) when compared to ERA-Interim for validation. Any discussion you make regarding ERA-Interim when compared to CW2014 should undoubtedly include the findings of those studies (generally supporting our approach and results). I would pay special attention to the papers by Simmons and Poli (2015) and by Dodd et al. (2015).

      We also provided a comparison document for Gleisner et al (2015) showing that their method of infilling across latitudinal bands provides higher cross-validation errors compared to our method using kriging. We’ve written a fair bit about this here:

      http://www-users.york.ac.uk/~kdc3/papers/coverage2013/gleisner-response.pdf

      Dodd, E. M., Merchant, C. J., Rayner, N. A., & Morice, C. P. (2015). An Investigation into the Impact of using Various Techniques to Estimate Arctic Surface Air Temperature Anomalies. Journal of Climate, 28(5), 1743-1763.

      Gleisner, H., Thejll, P., Christiansen, B., & Nielsen, J. K. (2015). Recent global warming hiatus dominated by low‐latitude temperature trends in surface and troposphere data. Geophysical Research Letters, 42(2), 510-517.

      Saffioti, C., Fischer, E. M., & Knutti, R. (2015). Contributions of atmospheric circulation variability and data coverage bias to the warming hiatus. Geophysical Research Letters, 42(7), 2385-2391.

      Simmons, A. J., & Poli, P. (2015). Arctic warming in ERA‐Interim and other analyses. Quarterly Journal of the Royal Meteorological Society, 141(689): 1147-1162.

      • Robert Way,
        Noted. However, the Dodd and Simmons papers appear to be wholly about Arctic temperatures. The Saffioti paper seems to concern just temperatures during the hiatus (1998-2012 or thereabouts). None of them appear to contradict my statements about GMST trends over the bulk of the satellite period.

      • Nic,
        I think they contradict your suggestion that Cowtan & Way is wrong, which wouldn’t be all that hard to acknowledge.

      • Didn’t you see the first word of Nic’s reply, kenny?

        I will help you. The SkS Kid said:

        “Any discussion you make regarding ERA-interim when compared to CW2014 should undoubtedly include the findings of those studies (generally supporting our approach and results).”

        Nic Lewis replied:

        “Noted.”

    • “None of them appear to contradict my statements about GMST trends over the bulk of the satellite period”

      I don’t disagree when you extend the period out farther to cover the whole satellite period. It’s rather apparent that over the satellite era the CW2014 trends are similar to other datasets. We directly show this decreasing bias in Table 4 and Figure 6 of the paper.

      However, if you look at only the late 1990s and 2000s then this trend bias is readily apparent. There are other differences in earlier time periods as well.

      Our approach generally shows more Arctic amplification (in both directions) than datasets which do not have full coverage, so it will impact trends over periods where there are strong contrasts between high-latitude and lower-latitude trends. As a result, our dataset shows more century-scale warming (in terms of magnitude) than some of the other datasets, because we capture the cool periods of the end of the Little Ice Age. I would say Berkeley Earth is pretty comparable.

      • “However, if you look at only the late 1990s and 2000s then this trend bias is readily apparent.”

        Actually, the ERA-interim chart shows that the full-coverage series has a lower trend to 2014 than the HadCRUT4 coverage version for all years starting from 2001 on, as well as starting from 1979 and 1980.

        On my calculations, over the full satellite era, 1979-2014 CW2014 has a ~10% greater GMST trend than does HadCRUT4v4 (which is itself in line with the full-coverage ERA-interim 2m GMST). BEST (with air temperatures over sea ice) is between HadCRUT4 and CW2014 (and slightly closer to CW2014).

        Noted re Arctic warming in the early period. However, I’m not sure that sea and land data coverage then is adequate for kriging to produce reasonable results, even assuming that it has been during recent periods.

      • Nic,

        Actually, the ERA-interim chart shows that the full-coverage series has a lower trend to 2014 than the HadCRUT4 coverage version for all years starting from 2001 on, as well as starting from 1979 and 1980.

        Do you have a link for this? This doesn’t seem to be what the Simmons paper shows.

      • …and Then There’s Physics wrote:
        “Do you have a link for this. This doesn’t seem to be what the Simmons paper shows.”

        I accurately digitized the ERA-interim bar chart shown in the post and calculated the trends. If you doubt what I say is correct, may I suggest you try doing the same.

  27. Mining for gold on the tip of an iceberg

    Understanding climate change must consider that temperature data at any point in time is the culmination of layers of factors, affected by complex terrestrial conditions and influenced by relationships with stellar connections that are years in the making. Temperatures of a relatively few places on the surface of the earth, no matter how many adjustments we apply to them, are just the tip of the iceberg and provide a peek at best into the murky waters of the future. A statistical analysis of temperatures over time based on a monomaniacal focus on atmospheric CO2 levels only blinds us to the many other unseen factors like the underwater portion of an iceberg that materially affect weather.

    • The thing is that with the right understanding all you need to see is the tip of the iceberg to know that much more lies beneath. Specifically we know that with respect to icebergs, approximately 90% is below the water line. Your analogy sorta proves the opposite of what you want. Sometimes limited data can give us a good estimate of much more.

      • Even by your restatement, latching onto the one thing you see as the sole cause misses the fact that the unseen other causes are 9 times as great.

  28. C&W was likely ‘motivated’ warmunism. Look where and how Way hangs out (SkS). Kriging was invented to estimate mineral ore body reserves in an otherwise ‘homogeneous’ but sparsely drilled deposit. Kriging across ice, water, and land is suspect even though the math is possible, because the underlying assumptions behind the math are violated. They also kriged from polar UAH to the surface to manufacture a second CW version of surface warming when UAH itself showed none.
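For readers unfamiliar with the method being criticized: kriging estimates an unobserved point as a covariance-weighted combination of nearby observations, with weights derived from an assumed spatial covariance model. The objection above is precisely that one covariance model is being assumed across ice, water, and land. A minimal simple-kriging sketch (the exponential covariance, the 500 km length scale, and the station values are all illustrative assumptions, not CW2014’s actual method):

```python
import numpy as np

def simple_krige(xy_obs, y_obs, xy_new, sill=1.0, length=500.0):
    """Simple kriging with exponential covariance C(d) = sill * exp(-d/length),
    assuming a known zero mean (i.e. the inputs are anomalies)."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / length)
    K = cov(xy_obs, xy_obs) + 1e-9 * np.eye(len(xy_obs))  # jitter for stability
    w = np.linalg.solve(K, cov(xy_obs, xy_new))           # kriging weights
    return w.T @ y_obs

# Three hypothetical stations (coordinates in km) with anomaly values
obs = np.array([[0.0, 0.0], [300.0, 0.0], [0.0, 300.0]])
vals = np.array([1.0, 0.5, -0.2])
target = np.array([[100.0, 100.0]])
est = simple_krige(obs, vals, target)
# The estimate is a blend of the three values, nearest station weighted most
```

The algebra itself is uncontroversial; whether a single covariance model is physically sensible across different surface types is exactly what is in dispute in this thread.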

    • C&W was likely ‘motivated’ warmunism.

      You mean they made it up to get the answers they wanted. Are you of the opinion that if the research supports AGW or negative impacts, it’s made up?

      • From my part, if someone supports openly c-agw scenarios, then there is no way to expect from that person any science that would be impartial or pure science, but it is part of person’s activism.

        Scared people have a freedom of speech, but still their credibility suffers from their panic.

      • From my part, if someone supports openly c-agw scenarios

        I don’t know what you mean by “support.” Do you mean they say there are risks of severe consequences? And why is this activism? There is a lot of evidence in the scientific literature for these risks. So what would make this activism?

      • Joseph, I check into things pretty deeply before commenting. In this case, CW is commented on in two essays in Blowing Smoke. Judith also commented on the suspect statistical methods in CW when the paper first came out.
        Not all AGW or negative impact research is bad. But much of it in fact is, surprisingly so. There are numerous examples in my ebook if you care to look further into the matter. Essays A High Stick Foul, Shell Games, and By Land or by Sea provide examples of arguable academic misconduct.

      • Ristvan,

        here is the CW data for the arctic

        compared to UAH

        Sure there are minor differences that probably impact the minutiae of hiatus discussion, but generally both data sets look similar. Hardly enough to start accusing people of misconduct.

      • These allegations seem to be made all of the time by “skeptics” and I haven’t seen anything ever come from any of them. I will pay attention when something is actually proven.

      • impugn motives, etc..

      • Joseph

        Kind of like trying to “prove” jesus was not god

      • Yeah so you can feel free to engage in rampant speculation?

      • Joseph: “Yeah so you can feel free to engage in rampant speculation?”

        But Joseph, all your posts are nothing but rampant speculation.

        Apart from those that are borderline abusive, that is.

      • So Joseph,

        You choose to ignore certain allegations but accept without question other allegations, namely those claiming significant risk for negative impacts.

        You continuously claim here that there is extensive evidence for negative impact. Yet you have yet to show us any of it.

      • Tim, the only thing I can suggest to you is to go through the IPCC report on impacts and read the literature that it is based on. How else would you want me to show you the evidence? What would be an acceptable format?

      • I would also add that Exxon has a vested interest in downplaying AGW, while Karl does not (as a government employee his job is pretty secure).

      • Do you think that the folks at Exxon are scared that people are going to stop buying oil and natural gas over a little AGW, yoey? This type of hysterical foolishness is not going to get you any points here, yoey.

      • Mitigation efforts are going to make their business less profitable, Don

      • You are silly, yoey. Exxon Mobil will sell all of the oil and gas that they can extract from the ground. What we have seen for decades out of you alarmist clowns is mitigation lip service and failure to do anything substantive. If we burn up, it will be just as much your fault as anybody else’s.

      • They can sell it but their profit margins will decrease. Is that too hard to understand? Are you that dense?

      • I know all about profit margins, yoey. What impact on profit margins have we seen so far, from your feckless alarmist mitigation schemes? Aren’t you the same dips who keep hollering about how much evil profit the oil companies are making? You are assuming facts that are not in evidence, yoey. Pipe dreams. That’s all the time I have for you.

      • No, the big players in oil have a vested interest in promoting AGW concern to boost their lobbying power. There is a lot of potential in boosting profit margins with AGW (ghg reducing) mitigation policy.

        It is the new competitors that face lower margins. Constrained supply means higher profit margins, particularly with some sort of cap-and-trade policy. Oil gets pumped more slowly, the emissions rights increase in value and become a claim on future producers’ oil as current production tapers off, and they are sold or leased to the owners of tighter oil plays.

      • There is a lot of potential in boosting profit margins with AGW (ghg reducing) mitigation policy.

        If that is true, why are they giving most of their money to Republicans?

      • humanity, that’s more than a dozen millihairs of difference – very significant in the Cooter world.

      • Joseph | November 6, 2015 at 2:17 pm | Reply
        C&W was likely ‘motivated’ warmunism.

        You mean they made it up to get the answers they wanted. Are you of the opinion that if the research supports AGW or negative impacts, it’s made up?

        Yes, that is correct, I am surprised you understand that and still support global warming.

        Joseph | November 8, 2015 at 11:17 am |

        If that is true, why are they giving most of their money to Republicans?

        Stable business climate. Business reserves capital instead of investing during uncertainty and Democrats provide a less stable business climate – in particular the regulatory climate.

        You need to be able to evaluate the potential rate of return to make business decisions and Democrats like to make that difficult.

      • I was responding to Aaron’s claim that oil companies support CO2 mitigation because it is good for business.

      • Yes, that is correct, I am surprised you understand that and still support global warming.

        I don’t believe wacky conspiracy theories, PA.

      • It isn’t a conspiracy theory (well, climategate shows there is a conspiracy), I blame a combination of a failed educational system, cognitive dissonance, world-view-bias, careerism, self-delusion, and group-think.

        If we required true diversity, 40% conservatives in academia and science, stupid things like this wouldn’t happen. 40% happens to be the rate in the general population.

        There aren’t many conservatives in academia or science so eco/regressives are operating in a vacuum. People operating in a vacuum produce ideas that suck.

    • Those Krazy Krigging SkS Kidz.

    • Reply in wrong place…

      mwgrant | November 6, 2015 at 2:57 pm |
      https://judithcurry.com/2015/11/06/hiatus-controversy-show-me-the-data/#comment-741811

      Krigging[sic] across ice, water, and land is suspect even though the math is possible cause the underlying assumptions behind the math are violated.

      The boundary may not be quite as simple as you suggest. After all, air moves across the boundary. We are not talking about stationary deposits. Things are complicated but still gray.

      BTW some of the early geostatistics development was by the Soviets in meteorology.

    • “Krigging was invented to estimate mineral ore body reserves in an otherwise ‘homogenous’ but sparsely drilled deposit. Krigging across ice, water, and land is suspect even though the math is possible cause the underlying assumptions behind the math are violated. ”

      https://en.wikipedia.org/wiki/Regression-kriging

      Imagine that ! they moved beyond the original definitions and uses?

      This guy hangs out in the spatial stats R list.
      ask him

      http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9671.2006.01015.x/abstract

      • Krigging – otherwise known as “Making Stuff Up”.

        Mosher, would you let your children fly on an aeroplane that was designed by climate “scientists”?

    • Gee. I might as well lose the hounds…

      Jan Merk, etc.

      e.g.,

      http://www.kriging.com/correspondence/

  29. Whether 2014 was the warmest year or not does not matter. Whether 2015 becomes the warmest year does not matter. If we have a hiatus or we just barely don’t have a hiatus, it does not matter. What does matter is that the highest temperature from all the real data sets is getting further and further from the Climate Model Output.

    They cannot make any climate model start where we were at any time in the past and get to where we are now. They want people to continue to believe they are really still right. That does not work for me.

    They are trying to scare us with data that is not scary.
    They have shown us the data and the data is not so scary.
    They have made adjustments. The adjustments may well be right, but the adjusted data is still not scary.

    Now, Exxon Mobil is being investigated for not trying to scare us. The actual data does support Exxon Mobil for not being that stupid. The good news for Exxon Mobil is that flawed climate model output is not likely to hold up in a real, honest court, as opposed to in front of Congress. To go after one of the biggest companies in the world during a hiatus for something that did not happen is really stupid.

  30. The Congressional investigation of the Karl et al adjustments is heating up, even if the world is not:
    http://www.washingtonexaminer.com/gop-science-chief-threatens-prosecution-over-climate-study/article/2575687

    Woohoo!

    • Should we be cheering for politically motivated fishing expeditions?

      • When they’re as maroonik as this one, maybe so.

      • Joseph

        You are presumably against the Exxon enquiry therefore? It was big news on the BBC today.

        tonyb

      • Joseph: “Should we be cheering for politically motivated fishing expeditions?”

        You mean like the fishing expedition currently being carried out against Exxon by Schneiderman?

        Ah, don’t tell me – “it’s different” when you Warmies do it, right?

      • So calls for RICO investigations by Sen. Whitehouse are OK, but requests for information by a congressional committee charged with oversight of a federal agency are not?

      • There is no doubt that Exxon has funded organizations that spread misinformation, downplay AGW, and/or advocate against action. There is no evidence whatsoever that Karl has falsified his results.

      • Yes, Joseph. Exxon is pure evil.

        Exxon-Led Group Is Giving A Climate Grant to Stanford
        By ANDREW C. REVKIN
        Published: November 21, 2002 http://www.nytimes.com/2002/11/21/us/exxon-led-group-is-giving-a-climate-grant-to-stanford.html

      • It’s called trying to improve your public image. 99% of their efforts have been to fund organizations and politicians that downplay AGW or prevent action.

      • Thank you for that objective assessment, yoey. We had a good laugh.

      • What’s not objective about it. The facts speak for themselves

      • Joseph:

        It’s called trying to improve your public image. 99% of their efforts have been to fund organizations and politicians that downplay AGW or prevent action.

        Now you’re slipping into the realm of climate denialism. The Exxon grant to Stanford alone was $100,000,000.00. That is “slightly” more than they gave to Anthony Watts or Jim Inhofe.

      • Not counting all the money they have given to “skeptic” organizations (I will try to get that later), they have given:
        POLITICAL CONTRIBUTIONS
        $17,568,663 since 1990

        LOBBYING
        $223,436,942 since 1998

      • Joseph:

        Life must be so easy when you can only see in black and white.

        Your numbers are meaningless without context. Spending to influence congress = climate corruption? I suspect you know better.

        Even with pre-Citizens United restrictions, I also can’t believe that the largest company in the world gave less than $18M in political contributions since 1990?

      • I didn’t say it proves there was corruption. But most of it is going to Republicans who want to do nothing about climate change and dismiss the science. If Exxon is so concerned about climate change as you seem to think, why would they almost exclusively fund groups that deny most of current climate science? That is why I said that the donation was only to improve their public image. Companies do it all the time. It’s nothing new.

      • opluso, I can assure you after supporting Newcastle United for 60 years that seeing only black and white is far from easy! Masochistic, in fact.

      • Faustino:

        Just for you :-)

        (Hope the link works)

      • Joseph:

        When all you have is a climate hammer, every political donation looks like a nail. (Insert your own screw joke here)

        I didn’t say it proves there was corruption.

        You obviously imply that it should.

        But most of it is going to Republicans who want to do nothing about climate change and dismiss the science.

        That climate stance comes as a package deal with opposition to raising corporate taxes, reducing regulatory costs, promoting domestic refinery capacity, and working to promote foreign trade deals among other corporate-standard issues. Most of which I oppose, by the way. It’s unfortunate that the American system is specifically designed to produce two party control since this sort of sharp dichotomy is the logical result.

        If Exxon is so concerned about climate change as you seem to think, why would they almost exclusively fund groups that deny most of current climate science?

        Do I have to point out that Exxon is not run by idiots? (that last word will probably throw me into moderation) All I am suggesting is that self-interest does not necessarily exclude all other public interests. Again, life isn’t black and white — a lesson that could also be learned by quite a few skeptics, as well.

        That is why I said that the (Stanford) donation was only to improve their public image.

        It was to research carbon sequestration, among other things. It obviously failed to generate goodwill for Exxon with Oreskes or you.

    • I do want the Climate Models to go to a real honest court.

      When model output disagrees with real data, and the model output is used to tell us what we must do to fix a problem that does not show up in real data, they must go to court. Their only court so far has been from their consensus clique and from the alarmist media and the alarmist administration.

      The Climate people must be investigated. They get a huge amount of our tax money. They must not be allowed to hide anything.

    • Never underestimate the ability of a Republican investigation to self-destruct.

      I would much rather have a Congressional clown car chasing after me than any state attorney general.

  31. Now, Exxon Mobil is being investigated for not trying to scare us.

    Is Exxon Mobil being investigated?

  32. “Interestingly, including the polar regions does not always produce a warmer global average temperature; notably in 2013 and 2014 it did not, largely owing to the cooling in Antarctica.”

    I thought that the recent (last decade) trends in temperature in the Arctic are quite interesting given its status as ‘climate canary’. Trends in the Arctic seem to have leveled off, which is perhaps more significant given the expectation that Arctic warming would be about twice as fast as the global average (say about 0.4°C per decade). I haven’t seen very much about this, but the re-analysis data looks like a good place to start given what JC says here.

    • As examples

      C&W

      ERA reanalysis

    • Ice on land has been increasing overall, for 50 million years, until the major ice ages of the most recent million years.
      The major ice ages in the past million years succeeded in getting enough ice in Polar Regions to stabilize the climates in the North and South Hemisphere. During Major ice ages, the ice was not close enough to the poles and it could not be protected during the warm times. The last major ice age finally got enough ice on Greenland, Mountain Glaciers and Antarctic to keep the ocean from rising as much. Now we have ice ages equivalent to the Little Ice Age and we have warm periods equivalent to the Roman, Medieval and this Modern Warm Period. That is all, nothing out of these bounds.

      The Arctic is not a ‘Climate canary’. The warm periods are the time that the oceans thaw and ice in Polar Regions and Mountain Glaciers is replenished. These warm periods are a necessary part of a natural cycle. Greenland Mountain Glaciers and Antarctic are losing ice at higher rates because that is part of the necessary cooling that will come.

      Consensus Alarmist Theory makes earth cold, freezes the oceans, the source for moisture, and then increases ice on earth. They have no water to provide the moisture. The ice on Mountain Glaciers, Greenland and Antarctic grows in warm times when water is abundant in Polar Regions, when they are not covered by ice. You do not get Lake Effect Snow from a frozen lake. You do not get Ocean Effect Snow from a frozen ocean.

      Climate on earth is well bounded because we have Polar Oceans that freeze and thaw and turn snowfall off and on. Just look at the ice core data.

  33. So when it comes to climatology, cherry-picking leads to a hot-Karl?
    Or is that O/T?

  34. None of this will be answered by just one single additional data point. It will take at least 5 years (probably 10) to judge whether the hiatus or the trend dominates – that is, what happens after this current El Niño. Everyone – Mann, Schmidt, et al. – agrees the models have been running hot (well, except for the propagandists at SkS). So in the background, warming is probably happening, but the trend rates are proving lower than the models, which would make the existence of a hiatus more likely than the current models would project. Is that good or bad news? We’ve had close to 1C of warming so far and it’s difficult to see in what way that was bad. A continuing slow change for the next 1C – it’s hard to see why we won’t be able to cope.

  35. Krigging[sic] across ice, water, and land is suspect even though the math is possible cause the underlying assumptions behind the math are violated.

    The boundary may not be quite as simple as you suggest. After all, air moves across the boundary. We are not talking about stationary deposits. Things are complicated but still gray.

    BTW some of the early geostatistics development was by the Soviets in meteorology.

    • one reason why we did the version using the temperature of water under ice,

      • Hmmm, yes I now remember you brought that up before. Thanks for that reminder.

      • the issue about interpolating over land/ice/water can be broken down as follows.

        1. We know from a regression of satellite data that latitude and altitude capture over 90% of the variability.

        2. That leaves 10% of the variability due to:
        A) OTHER deterministic factors (like land class, topography, etc.)
        B) Weather
        C) Error

        When we interpolate from over land to over ice there is definitely a deterministic component we ignore: land class.

        When the ice withdraws during certain times of the year, there are small areas where we interpolate from land, over open water, and then over ice. Generally we might argue that air over the open water will be warmer. Whether or not this impacts (underestimates) trends is a good question.

        The other question is the actual spatial area that has this problem.

        All in all I don’t see anything that argues for a one-way bias in the trends that are reported.

        If we choose to look at SST under ice, then arguably this gives the lowest trend, as the temperature is constant. I actually prefer that method because it avoids the whole kriging over different land classes.
        Still, folks who have a cow about the land class issue have never actually tried to see if adding land class as a regressor actually improves the regression (topography does), and they never ask the land class question about reanalysis models.

        Put simply: GFS and other systems ALSO compute physics ( as opposed to mere statistics) over changing land class…

        How?

        I should look at that.

        The other hilarious thing is that when a weather model runs it definitely changes raw data… and no one can explain how it is adjusted… “physics”. Yet these adjustments are apparently acceptable.
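The “add land class as a regressor and see if it improves the regression” test described above can be sketched in a few lines. This is an illustration on synthetic station data, not the Berkeley Earth code; the coefficients, the land-class effect size, and the 90/10 variance split are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic stations: latitude, altitude, and a binary land-class flag
lat = rng.uniform(-90, 90, n)
alt = rng.uniform(0, 3000, n)
land_class = rng.integers(0, 2, n)          # say, 0 = open land, 1 = ice

# Fake temperatures: mostly lat/alt, plus a small land-class effect and noise
temp = 30 - 0.5 * np.abs(lat) - 0.006 * alt - 1.5 * land_class + rng.normal(0, 1, n)

def r2(X, y):
    """R^2 of an OLS fit of y on the columns of X (with intercept)."""
    A = np.column_stack([np.ones(len(y)), *X])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

r2_base = r2([np.abs(lat), alt], temp)              # latitude + altitude only
r2_full = r2([np.abs(lat), alt, land_class], temp)  # + land-class dummy
print(round(r2_base, 3), round(r2_full, 3))
```

If the dummy adds essentially nothing to R², ignoring land class in the interpolation is defensible; if it adds a lot, the kriging-across-land-classes objection has teeth. Either way the question is empirical, which is the point being made above.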

      • Mosh,

        Sometime I would like to know a little more about how you approached the topography aspect. As mentioned before I have looked at the variograms (correlation) for different physiographic regions–obvious, crude, but certainly effective in lowering the variance and (usually) cleaner/simpler variograms.

        However, from your statement it seems you used it as a regressor variable. [I assume like latitude and altitude in the regression stage, and not in the kriging stage – a safe assumption given the effort an entirely new code would take.] I am assuming categorical data, and that suggests using dummy binary variables for all but one of the categories, or maybe creating logistic variables. I really have no idea, but am curious. There are many ways to slice and dice it. Sounds like fun… just fun and no damned politics.

        Then again, as you said elsewhere maybe comments isn’t the place for such things.

      • For topography I started to look at things like slope and aspect, which are not categorical.

        Also looked at NDVI.

        For land class the dummy variable approach, but the lack of long historical series was problematic.

        One avenue to go down is using HYDE 3.1, which has urban area, grassland area, crops… all continuous variables… BUT the data is only decadal, so you have an issue in the temporal dimension.

        There is a variable in topography that will get you something, and that is a topographic wetness index – it captures areas where cool air can pool.

        Don’t have a reference now… it’s on the R page at spatial-analyst.net.

      • Thanks…

        http://spatial-analyst.net/wiki/index.php?title=Analysis_of_DEMs_in_R%2BILWIS/SAGA

        Wasn’t difficult to find given the homepage link “Analysis of DEMs” !

        For land class the dummy variable approach, but the lack of long historical series was problematic.

        Maybe using soft data in the sense of Journel?

        Bottomline: everywhere one turns there is interesting stuff…not enough time. Could be frustrating.

        Ha! BTW until now I missed the obvious. Did you folks by chance at some time look at land-ocean using a simple land-ocean dummy!?

  36. Thank you Prof Curry for another good essay.

  37. I think the alternative “hypothesis testing” procedure one of the previous posters was referring to is equivalence/inequivalence testing. It is most easily implemented using confidence intervals (for either a rate of change or a specific predicted value, one- or two-tailed depending on the question of interest). The attitude towards risk can be defined two ways: precautionary, where the null hypothesis is inequivalence, or benefit-of-doubt, where the null hypothesis is equivalence.

    The equivalence region has to be defined somehow (e.g., what interval of temperatures in 2015 would be agreed upon as no increase from 2000, or something similar) based on previous data analyses and scientific expertise, etc., but NOT based on the data used to test. Depending on the location and length of the confidence intervals, it is possible that both the equivalence and inequivalence tests will yield consistent conclusions. But with more uncertain (i.e., wider CI) data they can diverge. I suspect this is very likely with the temperature records.

    It would be worth someone trying an analysis (based on either frequentist confidence intervals or Bayesian credible intervals). The analysis via confidence intervals is straightforward, but defining an equivalence region might be more challenging and subject to disagreement. Multiple equivalence regions could be used, which would let everyone see how the conclusions depend, or don’t, on the width of the interval of temperatures used, as well as on whether the risk attitude is precautionary or benefit-of-doubt. See, for example, McBride (1999, Australian and New Zealand Journal of Statistics 41:19–29) and Cade (2011, Ecological Applications 21:281–289) on equivalence/inequivalence testing.
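The confidence-interval mechanics described above are straightforward; the hard part, as the comment says, is agreeing on the equivalence region. A sketch assuming a ±0.1 °C/decade region and a normal approximation for the slope interval (a t-quantile would widen it slightly for short records):

```python
import numpy as np

def decadal_trend_ci(years, anoms, z=1.96):
    """OLS decadal trend with an approximate 95% confidence interval."""
    years = np.asarray(years, float)
    anoms = np.asarray(anoms, float)
    x = years - years.mean()
    slope = (x @ anoms) / (x @ x)                    # degC per year
    resid = anoms - anoms.mean() - slope * x
    se = np.sqrt(resid @ resid / (len(x) - 2) / (x @ x))
    return 10 * slope, 10 * (slope - z * se), 10 * (slope + z * se)  # per decade

def classify(lo, hi, equiv=0.1):
    # Benefit-of-doubt: "equivalent to zero" only if the whole CI sits inside +/-equiv
    if -equiv < lo and hi < equiv:
        return "hiatus (trend equivalent to zero)"
    if lo > equiv or hi < -equiv:
        return "inequivalent (trend clearly outside the region)"
    return "inconclusive (CI straddles the equivalence bound)"

# Example on synthetic, roughly flat data
rng = np.random.default_rng(1)
yrs = np.arange(1998, 2013)
flat = 0.3 + rng.normal(0, 0.05, len(yrs))
t, lo, hi = decadal_trend_ci(yrs, flat)
print(classify(lo, hi))
```

Re-running `classify` with several `equiv` values is exactly the multiple-equivalence-region exercise proposed above: the verdict can flip from “hiatus” to “inconclusive” as the region narrows.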

  38. There were three horses in a race: Warmer, Cooler, and Sorta. They were old nags who had been at this treacle-paced competition for yonks. Nobody gasped, wept or cheered as they changed position. Cooler and Warmer had once had a dramatic tussle in the Younger Dryas Handicap, but that was about it for sudden and drastic shifts. There was just the odd bad stumble, the odd dash. The track was rigged by corrupt Mr Cloud and everyone knew it.

    Then some amateur bookmakers came along and started to take bets on the race. Then they took bigger bets, got busy and professionalised. Some media types started to drum up emotion, speculated, quoted the bookies. Then political types saw a chance to regulate, tax and spend. Quoting the bookies, they speculated. The main speculation was to extrapolate and pretend that the latest change of position in the slow and tedious race was a permanent change. Because the punters were in no position to view the whole long race or know its future, many got excited for a while, laid lots of bets, opinionated.

    But it was just three old nags lumbering along. The bookies, media types and politicians kept shilling and ranting. The punters had left.

    We should leave.

  39. What happens if both the surface data sets (no hiatus) and the MSU data sets (yes hiatus) are correct? Would this suggest even less climate sensitivity to so-called greenhouse gases? If yes, then what’s warming the surface?

    • What is warming the surface? The sun warms the surface. Ice cools earth.

      We just came out of the Little Ice Age. There was more ice on earth in the little Ice Age and less ice now. Earth warmed because there is less ice. Earth did not warm and take away the ice. The ice melted and retreated and depleted and that warmed the earth. This is the same as it was when earth warmed out of a cold period into the Roman Warm Period. This is the same as it was when earth warmed out of a cold period into the Medieval Warm Period.

      This suggests that Greenhouse Gas Sensitivity is not important. If it has an influence, it does not matter. Polar oceans thaw and turn on the cooling snowfall when the polar sea ice melts. If CO2 causes warming, the snowfall will be turned on sooner, just like your Air Conditioning system at home when you invite some hot people to your house.

  40. Hiatus? What hiatus! From the article:
    Record-breaking heat wave…in November

    Stores around the country may already be preparing for the holidays, but it’s warm enough in much of the U.S. to walk around in shorts.

    Record-breaking temperatures have hit eastern regions across the United States this past week, from the northeast to Florida.

    http://www.cnbc.com/2015/11/06/record-breaking-heat-wavein-november.html

    • Jim2, one of the extraordinary things about the Chicago/Peshtigo/Great Lakes fires of 1871 is that the freak heat and drought conditions occurred in mid-autumn, at 45 degrees north.

      Stick around long enough and everything will happen to you, and at the strangest times. Let’s hope it’s just a late autumn warm spell.

    • I just heard on the radio that here in Ottawa the temp is a balmy 20°C; the record high for today’s date is 24 and the low -15. Quite a spread for one day.

      • human1ty1st,

        Here in Toronto (airport data so caveat emptor) it got up to ~18°C yesterday (Nov.6) close to the record high of 18.9°C set in 1975. The record low for the 6th was -12.8°C set in 1951, so like you say quite the range.

        The highest November daily temperatures here was 25.0°C set on the 1st and 3rd in 1950 and 1961 respectively. The lowest was -15°C set on the 30th (surprise) in 1958.

        For what it’s worth, November here would have to warm up considerably to beat out 2001 as the warmest November EVEH!

    • We just came out of the Little Ice Age into this Modern Warm Period. We are supposed to to be warm now. It is a natural cycle and we did not cause it. We will bounce along the upper bound for a few hundred years, just like in Roman and Medieval times. We will break a lot of warm records, but not by a huge amount. We have a lot of unprecedented records because in Roman and Medieval times they did not have thermometers yet. We do not have thermometer records for cold and warm periods before 130 years ago. We have proxy records and we must look at and understand that data.

  41. I find it interesting that JC favours the ERA reanalysis, and it’s this data set that comes closest to lacking a hiatus, given its subdued 1998 El Niño peak.

  42. All I am going to say…as a layman that can barely follow this discussion is….for “settled” science…there sure is a lot of disagreement and discussion.

  43. I have a concern regarding the satellite group Remote Sensing Systems (RSS).
    Carl Mears is Vice President / Senior Research Scientist at RSS.

    Here is a quote by Carl Mears:
    “(The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.)”
    It is remarkable that he uses the term “denialists”, a term which can be regarded as nothing else than name-calling.
    http://www.remss.com/blog/recent-slowing-rise-global-temperatures

    Wikipedia: “Name calling is abusive or insulting language referring to a person or group, a verbal abuse. This phenomenon is studied by a variety of academic disciplines from anthropology, to child psychology, to politics. It is also studied by rhetoricians, and a variety of other disciplines that study propaganda techniques and their causes and effects. The technique is most frequently employed within political discourse and school systems, in an attempt to negatively impact their opponent.”

    Further, Carl Mears is involved in the current project:
    “Improved and Extended Atmospheric Temperature Measurements from Microwave Sounders. The purpose of this project is to completely redo the current MSU and AMSU atmospheric data records using more advanced and consistent methods. This project is funded by the NASA Earth Sciences Directorate.”

    Being a Vice President I imagine that Carl Mears is quite influential in that project.

    My guess is that we will soon see dramatic changes in the RSS temperature data series. I would be greatly surprised if we will see a cooling trend.
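Whatever one makes of the “denialists” remark, the start-year effect Mears describes is real and easy to reproduce on toy data: a large one-year spike placed at the start of the fit window pulls the least-squares slope down. This uses synthetic numbers, not the actual RSS series:

```python
import numpy as np

years = np.arange(1997, 2015)
# Toy series: modest underlying trend plus a big 1998 El Nino spike
anoms = 0.01 * (years - 1997)                    # 0.1 degC/decade background
anoms = anoms + np.where(years == 1998, 0.4, 0.0)

def decadal_trend(y0):
    m = years >= y0
    return 10 * np.polyfit(years[m], anoms[m], 1)[0]

trend_from_1997 = decadal_trend(1997)  # spike near the start drags the slope down
trend_from_1999 = decadal_trend(1999)  # spike excluded: recovers the background
print(round(trend_from_1997, 3), round(trend_from_1999, 3))
```

The same mechanism works in reverse: start a fit just after a deep La Niña and the slope is inflated. Sensitivity of the conclusion to the start year is itself worth reporting.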

    • Back-pedaling is required because his data doesn’t support the social meme, so he has to throw out some BS just to show he’s still part of the right team.

      That’s my guess.

    • He’s smart and honest.

    • http://www.remss.com/about/who-we-are
      Our research is supported by NASA, NOAA, and the NSF, with many of our researchers participating in NASA science research teams and working groups, collaborating with other fore-front industry leaders and the scientific community.

      Carl is just following his master’s voice (and the grant money).

      On the other hand RSS claims to have commercial clients so unlike NASA/NOAA there is a limit to the dishonesty. The private sector wants accurate data not politically correct data.

    • Mears, also rather unhelpfully to Smith’s case said
      “Mears, March 24: All datasets contain errors. In this case, I would trust the surface data a little more because the difference between the long term trends in the various surface datasets (NOAA, NASA GISS, HADCRUT, Berkeley, etc) are closer to each other than the long term trends from the different satellite datasets. This suggests that the satellite datasets contain more “structural uncertainty” than the surface dataset.”
      Is Smith going to target Mears with a subpoena next?

      • Mears, also rather unhelpfully to Smith’s case said
        “Mears, March 24: All datasets contain errors. In this case, I would trust the surface data a little more because the difference between the long term trends in the various surface datasets (NOAA, NASA GISS, HADCRUT, Berkeley, etc) are closer to each other than the long term trends from the different satellite datasets. This suggests that the satellite datasets contain more “structural uncertainty” than the surface dataset.”

        Or that the surface datasets are looking at the wrong thing and following the same ham-fisted method of making up data.

      • So should Smith target Mears to see exactly what he means when he says satellite temperature trends show “structural uncertainties” due to not agreeing with each other? It seems Smith should be very interested in that owing to his desire to get the best possible temperature record, and he would not want to be relying on something with “structural uncertainties” when he makes a case against Karl.

      • All the datasets could be problematic. Scientists seem to fall in love with their methodologies, and finding scientific misuse of statistics seems to be like shooting fish in a barrel with a minigun.

        A combined engineering/statistical team should evaluate the temperature datasets and weigh in on methodology, accuracy, precision, and error sources/error bounds.

        Someone objective needs to evaluate the various data products to see if any of them are suitable for the purposes intended.

      • It should be a clue to them when the providers of one dataset point to another one as being better.

      • It is a little more complicated than that.

        The TLT datasets really only measure down to 1.5 km at best, which is above the ABL.

        The land datasets measure at head-top level (at least for me). This is at the bottom of the ABL.

        The ocean datasets measure as far as I can reach down with a bucket. This is below the ABL.

        The land and ocean datasets really don’t have anything to do with each other and we combine them anyway.

        An independent team needs to look at the datasets and determine if any of them provide value.

        They might. They might not. Perhaps something completely different should be done. The “climate” datasets have mutated from what was weather data. Someone needs to do a sanity check.

      • “An independent team needs to look at the datasets and determine if any of them provide value.

        They might. They might not. Perhaps something completely different should be done.”

        I’m an independent data specialist, and I’ve done something completely different.

        And there’s no loss of nightly cooling since 1940.

      • PA, with satellites it is much more complicated than that. A series of 10-15 satellites provide this dataset over the period since 1979, each with their own calibrations and drifts. This is what provides structural uncertainties. Some might say that the true signal is irretrievable from them because there are no independent measures to check with.

      • Jim D | November 14, 2015 at 8:20 pm |
        PA, with satellites it is much more complicated than that. A series of 10-15 satellites provide this dataset over the period since 1979, each with their own calibrations and drifts.

        The ABL ends at about 450 m so the surface measures really don’t tell you anything about the atmosphere.

        The data sets are as different as cats, dogs, and badgers. The cats and badgers are kept in cages and the dogs are observed with binoculars.

        The surface people combine cats and badgers to produce a hairy mammal data set (LOTI=HMDS).

        There are problems with all the data sets. An independent audit is needed to evaluate if what is being done makes sense and pick between bad and worse.

        If you are claiming the surface measures are superior… there isn’t the data available to support or refute that conclusion (please provide a link to an independent audit that supports your conclusion if you have one). The satellite datasets are more consistent, and there is a maxim that it is better to be consistent than to be right. They are certainly less impeded by other human influences.

      • Surface datasets have a level of redundancy that makes it possible to use different subsets of surface stations to independently check each other. This also allows them to accurately gauge their uncertainty error bars. The agreement between independent sets of surface records is an important point about the robustness of their signals. Satellites are basically a daisy chain of individual sensors with little to no redundancy, and the error bars are not easy to determine due to a lack of independent measures to check them.
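The redundancy check described above can be illustrated with a toy sketch; the station count, noise level, and trend below are invented purely for illustration, not taken from any real dataset. Two disjoint subsets of synthetic stations sampling the same underlying signal recover nearly the same trend, which is the internal consistency check the comment describes:

```python
import random

# Toy illustration only: station count, noise level, and trend are invented.
# If a network has redundant stations sampling the same underlying signal,
# two disjoint subsets should recover nearly the same trend, which is the
# internal consistency check described for the surface datasets.
random.seed(0)

years = list(range(1979, 2015))
true_trend = 0.015  # assumed warming, degrees C per year, for the toy signal

def station_series():
    """One synthetic station: the shared trend plus independent noise."""
    return [true_trend * (y - years[0]) + random.gauss(0, 0.1) for y in years]

def trend(series):
    """Ordinary least-squares slope of a series against its index (years)."""
    n = len(series)
    xm = (n - 1) / 2
    ym = sum(series) / n
    num = sum((i - xm) * (v - ym) for i, v in enumerate(series))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

def network_mean(subset):
    """Average the stations in a subset into one regional series."""
    return [sum(s[i] for s in subset) / len(subset) for i in range(len(years))]

stations = [station_series() for _ in range(200)]
ta = trend(network_mean(stations[:100]))   # first independent half
tb = trend(network_mean(stations[100:]))   # second independent half
print(f"subset A trend: {ta:.4f} C/yr, subset B trend: {tb:.4f} C/yr")
```

Both subsets come out close to the assumed 0.015 °C/yr; satellite records, being a single daisy chain of sensors, have no analogous internal split to check against.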

  44. I think it is a bit strange to use reanalysis instead of real measurements.
    Some places may be sparsely covered, like the poles, but could they not be dealt with separately? Anyway, they are only a minor part of the surface.
    Ole Humlum has shown how the adjustments change the anomalies for a specific year, so I am a bit skeptical of those compilations of some average global temperature.
    The problem is that the concept of anomaly, and the way it is computed, leaves no reference to compare with. It is impossible to find out if a changed anomaly is caused by a change in real temperature or by a change in the reference.
    It is a wise way to compute it, but when earlier anomalies change markedly every year with new compilations, I feel some doubt.

  45. Dr. Curry — when you look at sea surface temperature, my thought, having lived literally on the surface of the sea (in boats, large and small) for fully half of my adult life, is that sea surface temperature is driven by an entirely different dynamical system than surface air temperature. That looks so obvious when typed out so simply… but the Global Temperature series combine them as if they were *not* apples and oranges. It is nearly as nutty as combining cholesterol counts and blood pressure into a single metric of health.

  46. The inexorable movement toward a more nuanced view of climate change — that increases in CO2 levels may have less of an impact on future global warming than official climate models previously predicted — is a tacit admission that solar activity has a greater impact on global warming than government climate scientists wished to acknowledge.

  47. I have some problems interpreting the following expression of uncertainty. “with the context of a nominal 0.1C uncertainty”

    There exists a freely available international guideline on expression of uncertainty – “Guide to the expression of uncertainty in measurement. ”

    This is the only broadly recognized guideline on the expression of uncertainty. The following seven organizations supported the development of this Guide, which is published in their name:
    BIPM: Bureau International des Poids et Mesures; IEC: International Electrotechnical Commission; IFCC: International Federation of Clinical Chemistry; ISO: International Organization for Standardization; IUPAC: International Union of Pure and Applied Chemistry; IUPAP: International Union of Pure and Applied Physics; OIML: International Organization of Legal Metrology. The guideline is freely available:
    https://www.oiml.org/en/files/pdf_g/g001-100-e08.pdf

    Simply put, by Guide to the expression of uncertainty in measurement, the result of an estimate should be reported by:
    – give a full description of how the measurand Y is defined
    – state the result of the measurement as Y = y ± U and give the units
    – give the approximate level of confidence associated with the interval y ± U and state how it was determined;
    (Ref. section 7.2.3)

    Further, from Wikipedia, there is a valuable piece stating: “In statistics, the so-called 68–95–99.7 rule is a shorthand used to remember the percentage of values that lie within a band around the mean in a normal distribution with a width of one, two and three standard deviations, respectively; more accurately, 68.27%, 95.45% and 99.73% of the values lie within one, two and three standard deviations of the mean, respectively.”

    The following statements are valid quantifications of uncertainty:
    0.1 °C at 68 % confidence level.
    Which is the same as:
    0.2 °C at 95 % confidence level.
    or:
    0.3 °C at 99.7 % confidence level.
    or even:
    0.4 °C at 99.99 % confidence level.
    All the statements above are equivalent statements about uncertainty. They mean exactly the same even if the figures are very different.

    I will suggest to use the freely available international Guideline when expressing uncertainty.
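The equivalence the commenter asserts can be checked with a minimal sketch (standard library only), converting coverage factors to confidence levels for a normal distribution, starting from the post's nominal one-sigma uncertainty of 0.1 °C:

```python
import math

# Sketch of the equivalence above: with a normal distribution, a one-sigma
# (standard) uncertainty u can be reported as an expanded uncertainty U = k*u
# at the confidence level implied by the coverage factor k.
def coverage(k: float) -> float:
    """Fraction of a normal distribution within k standard deviations."""
    return math.erf(k / math.sqrt(2.0))

u = 0.1  # the nominal one-sigma uncertainty, degrees C, from the post
for k in (1, 2, 3):
    print(f"{k * u:.1f} C at {100 * coverage(k):.2f} % confidence level")
# prints:
# 0.1 C at 68.27 % confidence level
# 0.2 C at 95.45 % confidence level
# 0.3 C at 99.73 % confidence level
```

This is why the GUM asks that an interval always be reported together with its coverage factor or confidence level: the same measurement supports all of these numerically different statements.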

    • I tend to agree, the uncertainty you are looking for is not the uncertainty monster uncertainty.

    • I recall that a couple of decades (or so) ago David Wojick (IIRC) expressed the same kind of misgivings about global temperature data sets.

      I used to measure the parameters of flows through ducts using ASTM and ISO test standards. Using calibrated, ice bathed thermocouples in specified arrays throughout a duct, I was not allowed to report temperatures to the level of precision and certainty used for the global surface temperatures. Of course this was for commercial contracts, not for saving the world.

  48. A possible starting point for any atmospheric climate model claiming predictive powers is to assume it is wrong.

    If one has one hundred models, and all have different outputs given the same inputs, then at least 99% are wrong. The initial assumption that climate models are universally wrong is looking good.

    The problem is exacerbated by not knowing which, if any, of the hundred outputs might be correct. Averaging the past results of a chaotic data series provides no useful information to the next value to be generated. Sad but true.

    As to the “hiatus”, there are precisely no accurate measurements over time of anything resembling “global surface temperature”. There may or may not be a plateau in a data series, but it may have nothing to do with the assumed reasons for the data profile.

    Supposed sea surface temperatures are not. Supposed land surface temperatures are not. A moment’s reflection should indicate why.

    Metrology can be tricky, and the vast majority of scientists have no real understanding of the difficulties involved. Ridiculous figures such as MSL or RSL quoted to the hundredth or thousandth of a millimetre abound. Supposed land surface temperatures are taken under ever-changing atmospheric conditions, ignoring local radiative heat sources, the height of the temperature sensor above ground, the lapse rate profile from surface to sensor, the absolute insolation at the time… Well, you get the idea. Probably meaningless, depending on what you expect from the data. As a rough guide to a pilot figuring out power settings for take-off, at least it gives a check on the aircraft’s temperature sensors.

    Much ado about nothing. Locally, some places get hotter over time, some cooler. Antarctica no longer supports the flora and fauna it once did. It cooled. Some arid deserts were once green. They got hotter.

    Droughts, floods, storms, earthquakes, volcanoes. They happen. Maybe one day we will know where, when, and the magnitude, all to a fine degree. But not yet, I feel, and the hopeless analysis of historical temperature data is unlikely to help. Of course, I may be wrong.

    My assumptions that atmospheric climate models are wrong and generally useless, and that analysing past temperature data is useless, seem to be holding up, to date.

    Cheers.

    • If one has one hundred models, and all have different outputs given the same inputs, then at least 99% are wrong.

      What is your definition of “wrong”, MF? To be off from the truth by one millionth of a degree?

      Models can differ, but if they all agree that the planet is going to hell in a handbasket then how does it matter whose handbasket you get there in?

      • Vaughan Pratt,

        You must be a little slow. Where did I mention one millionth of a degree? I’ll go a little further. Even if one model happens to be accidentally correct, you don’t know which one. Completely useless waste of time.

        If a pack of loonies all agree they are brilliant, it doesn’t make it so, does it?

        The world has been cooling for four and a half billion years, atmospheric composition notwithstanding. How stupid would someone have to be, to believe that Nature would suddenly reverse itself, for no particular reason.

        Sorry Vaughan, but you’ve backed another loser. Maybe you could try predicting the future – there’s money to be made predicting stock market prices. Should be a walkover compared with predicting the weather, and hence climate.

        Have you tried? Let me know how you get on, if you think it’s easy!

        Cheers.

      • Models can differ, but if they all agree that the planet is going to hell in a handbasket then how does it matter whose handbasket you get there in?

        They have no regional fidelity, why would you assume that just averaging it all together means anything? That’s nonsense.
        http://icp.giss.nasa.gov/research/ppa/2002/mcgraw/
        http://icp.giss.nasa.gov/research/ppa/2001/mconk/

      • Models can differ, but if they all agree that the planet is going to hell in a handbasket then how does it matter whose handbasket you get there in?

        Which models indicate that?

        The more humid the atmosphere gets, the less intense atmospheric circulation ( storminess ) needs to be to balance energy spatially because more latent heat is available.

        Warmer = less severe climate.

        The principle of global warming appears to have a sound basis.

        The exaggeration of the extent and the imagination of effects are what are in error.

  49. Hiatus controversy: show me the data

    The basic problem is we don’t have real data.

    For some reason climate science adamantly refuses to do things the right way.

    Measuring climate is an engineering problem and should be turned over to engineers so it is done correctly. Time to kick the amateurs off the field and let the pros take over.

    Measuring surface temperature is not a big problem. Drive stakes 20 feet deep in pristine areas and measure the temperature from 20 feet deep to the surface. This combined with sea temperatures from ARGO probes will give a valid consistent measurement of the planet.

    We have thousands of ARGO probes and the system was considered online as of 1993. It is inconceivable that we are even bothering with ship measurements after 1993. We don’t care. Toss them. This eliminates the ship/buoy adjustment problem. No ships, no problem.

    If people want surface air temperatures for historic comparison – a new official surface monitoring system can be deployed near the pristine land sensors. The sensors would conform to an engineering standard and replacement with a non-compliant sensor would be a felony. This eliminates the adjustment/UHI issues. Again – adjustment by the government agencies of the raw pristine data should be a felony. Problem solved.

    Looking backward the temperature record is a mess. However we now have an opportunity to parse through the data and separate GHG from other influence – it isn’t like the data is referenced to the new official monitoring system. The legacy surface monitoring system would provide an indication of the local warming effects by comparison to the new official surface monitoring system. The current staff would be encouraged to keep adjusting up the data from the legacy surface network – since that magnifies the signal of local warming. The official historic temperature report would be detrended by the indicated local warming trend.

    We need to collect some honest data before anybody can show anything.

    The GHG warming is greater than claimed by “non-warmers” and less than claimed by global warmers and that is all that can be said with any certainty.

    • Measuring climate is an engineering problem and should be turned over to engineers so it is done correctly. Time to kick the amateurs off the field and let the pros take over.

      Measuring surface temperature is not a big problem. Drive stakes 20 feet deep in pristine areas and measure the temperature from 20 feet deep to the surface. This combined with sea temperatures from ARGO probes will give a valid consistent measurement of the planet.

      We have thousands of ARGO probes and the system was considered online as of 1993. It is inconceivable that we are even bothering with ship measurements after 1993. We don’t care. Toss them. This eliminates the ship/buoy adjustment problem. No ships, no problem.

      If people want surface air temperatures for historic comparison – a new official surface monitoring system can be deployed near the pristine land sensors. The sensors would conform to an engineering standard and replacement with a non-compliant sensor would be a felony. This eliminates the adjustment/UHI issues. Again – adjustment by the government agencies of the raw pristine data should be a felony. Problem solved.

      Looking backward the temperature record is a mess. However we now have an opportunity to parse through the data and separate GHG from other influence – it isn’t like the data is referenced to the new official monitoring system. The legacy surface monitoring system would provide an indication of the local warming effects by comparison to the new official surface monitoring system. The current staff would be encouraged to keep adjusting up the data from the legacy surface network – since that magnifies the signal of local warming. The official historic temperature report would be detrended by the indicated local warming trend.

      We need to collect some honest data before anybody can show anything.

      [Emphasis added.]

      All apologies to Dr. Curry, when it comes to instrumental analysis, most of the people trained and practicing as “climate scientists” are like toddlers pushing buttons on microwave ovens. They really don’t understand what’s making the “beep-beep-beep” noises much less how their Cream of Wheat is being heated.

      When you work in experimental physics, you have it drilled into you that without proper calibration, at the end of the experiment you will have, as my professor one time screamed at me, no data. …

      When I was working with Dr. Van Zytveld to measure the thermopower of liquid rare earth elements, recalibration of our instruments had to be done all the time. One reason for this was that the thermocouples we used to measure temperatures were essentially consumed after each experimental run. Even if not visibly damaged, after one use where they were called upon to measure temperatures above a thousand degrees C for many hours, they were unlikely to survive a second run, let alone remain accurate. Also, we frequently rebuilt the ovens we used to achieve those high temperatures. After each experimental run, I would have to experiment with my rebuilt rig and make sure it would track along the same curve as the previous runs had. That is, I had to calibrate it with the previous work.

      When doing experimental physics, the test rig used to make measurements is a separate experiment in its own right. If you haven’t experimented with your test rig enough to know exactly how it works, you will never be satisfied that the measurements you make with it are valid, or at least you shouldn’t be.

      For my junior year laboratory requirement, I measured the speed of light in gases. The methodology for this experiment was quite clever. I had to fill a small cylindrical chamber with various gases, then pass a laser beam through it, the chamber being in one arm of an interferometer. When the split laser beam was recombined, it formed an interference pattern. As the gas was slowly pumped out of the chamber, I could see fringe shifts in the interference pattern, and the number of shifts allowed me to calculate the speed of light in the gas.

      The experiment was an interesting mix of high tech with low. The interferometer has been around since the 1800s, the laser since the 1960s, and to count the fringe shifts I used a very modern (for the 1980s) trace storage oscilloscope attached to a light sensor. To measure the pressure, I used a U-tube mercury manometer, which goes back to the Middle Ages.

      The way you read a manometer is to measure the difference in height of the mercury column between the right and left sides. What I did was to measure the height on one side from the unpressurized position and then double it. I thought I was saving time. Unfortunately, this method would only be valid if the right and left sides were volumetrically uniform, and they were not.

      I was a bit slow in accepting that all my labor might be worthless, at which point Professor Van Baak screamed at me, “You have NO data!” (Fortunately, there was a simple, albeit tedious, way to recover my data and so save my experiment.)

      As embarrassing as it was at the time, now, 25 years later I’m glad I made that mistake and learned that lesson. It greatly sensitized me to the need to examine all the assumptions that go into a measurement, and helped me notice when others were less than punctilious about it.

      — Jeffery D. Kooistra, “Lessons From the Lab” (Analog, November 2009)

      • Geoff Sherrington

        Tucci,
        Quote “April 22, 1915, members of a special unit of the German Army opened the valves on more than 6000 steel cylinders arrayed in trenches along their defensive perimeter at Ypres, Belgium. Within 10 minutes, 160 tons of chlorine gas drifted over the opposing French trenches”
        At age 33 I managed an industrial pilot plant using 10 tonnes of chlorine per day, set in a town of some 10,000 people. The chlorine was at 1050 °C and was pressurised. (Steel burns in chlorine at those temperatures.)

        Just as you had a memorable introduction to accurate measurement, so did I. Fortunately I had several years of analytical chemistry behind me, where accuracy and precision are prominent on the daily agenda.

        I strongly support the use of the international guidelines of the Bureau International des Poids et Mesures. Many times I have blogged about the juvenile and unhelpful use of uncertainty and variability estimates by climate workers. For example, taking an average of a few dozen CMIP runs from GCMs has absolutely no basis for justification, never has, never will.

        (A few years later I was involved in the pre-mining estimate of the weight and grade of uranium in the Ranger One orebody, about the largest in the world at the time. This included about 100 models of the open cut pit, whose optimisation was critical to good mining. After mining ceased some 20 years later, the reconciliation put both grade and weight of U recovered within 5% of each estimate. I mention this because statistical skills seem to improve when the consequences of being wrong are high. In climate work, there is little to no personal accountability this way. Put more directly, many climate workers need to grow up and learn.)

      • Writes Mr. Sherrington:

        Just as you had a memorable introduction to accurate measurement, so did I. Fortunately I had several years of analytical chemistry behind me, where accuracy and precision are prominent on the daily agenda.

        I strongly support the use of the international guidelines of the Bureau International des Poids et Mesures. Many times I have blogged about the juvenile and unhelpful use of uncertainty and variability estimates by climate workers. For example, taking an average of a few dozen CMIP runs from GCMs has absolutely no basis for justification, never has, never will.

        (A few years later I was involved in the pre-mining estimate of the weight and grade of uranium in the Ranger One orebody, about the largest in the world at the time. This included about 100 models of the open cut pit, whose optimisation was critical to good mining. After mining ceased some 20 years later, the reconciliation put both grade and weight of U recovered within 5% of each estimate. I mention this because statistical skills seem to improve when the consequences of being wrong are high. In climate work, there is little to no personal accountability this way. Put more directly, many climate workers need to grow up and learn.)

        I’m just a country G.P. with a little experience in research and academic writing. The experiences with “accurate measurement” captioned above were those of physicist Jeffery D. Kooistra, his account made memorable for me by the fact that he published that column in the November 2009 edition of Analog magazine, his principal focus being on the preliminary report of Anthony Watts’ SurfaceStations project uttered in the spring of that year.

        A study of artifact effects in instrumental analysis.

        I don’t doubt that there are at least some “climate workers” willing “to grow up and learn,” but the field has – as a whole – been grievously corrupted by the invidious influence of people with pecuniary and political priorities diametrically opposed to veracity in the consideration of anthropogenic influences on the global climate, and weaning (so to speak) the field of climatology from the money trough flooded by predatory power-lusting liars and thieves may well take four or five decades, if it can be accomplished at all.

        He has committed the crime who profits by it.

        – Lucius Annaeus Seneca

    • The GHG warming is greater than claimed by “non-warmers” and less than claimed by global warmers and that is all that can be said with any certainty.

        Modern warming is not more than Roman warming and not more than Medieval warming. You cannot say with certainty that GHG caused any of the warming; there is no data to support that.

      • Well, PCT…

        We are probably going to end up having to agree to disagree.

        The 22 PPM = 0.2 W/m2 study looks pretty solid. If something looks solid and passes the smell test, I go with it.

        On the other hand (and I would love to be corrected) the effect seems to only be near the ground. Can’t go much higher since there isn’t an equatorial hot spot and UAH isn’t affected.

        The average ground temperature is 288K and raising the temperature 1K requires 5.5 W/m2.

        http://cdiac.ornl.gov/ftp/trends/co2/lawdome.combined.dat
        Putting this all together, the forcing since 1900 (296 PPM) is
        (0.2 / ln(392/370)) * ln(400/296) = 1.04 W/m2
        1.04 W/m2 / 5.5 W/m2/K = 0.19 K

        So the temperature increase due to all the forcing since 1900 from GHG driven AGW is about 0.19 K.

        Since there has been more than 0.19 K of warming since 1900, it must be due to natural forcings, or ALW (anthropogenic local warming), or perhaps those aerosol and cloud things that are badly understood (“low confidence”), since it isn’t due to GHG-driven AGW.

        The average time outgoing energy is queued in the atmosphere increases with more GHG (at least at the surface) because the mean absorption distance is reduced. More energy queued in the atmosphere means it is warmer. So I expect it will be a little warmer near the surface. It is a little warmer near the surface, as indicated by the UCB study.
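The arithmetic in the comment above can be reproduced with a short script. Note that the 0.2 W/m² per 22 ppm figure, the 296 to 400 ppm range, and the 5.5 W/m²/K response are the comment's own assumptions, not established values:

```python
import math

# All numbers below are the comment's own assumptions, reproduced for checking:
# 0.2 W/m^2 of forcing per 22 ppm of CO2 (370 -> 392 ppm), 296 ppm in 1900,
# 400 ppm today, and ~5.5 W/m^2 per kelvin of surface response at 288 K.
k = 0.2 / math.log(392 / 370)        # implied forcing per unit ln(CO2 ratio)
forcing = k * math.log(400 / 296)    # implied forcing since 1900, W/m^2

planck = 4 * 5.670e-8 * 288**3       # blackbody response dF/dT at 288 K, W/m^2/K
delta_t = forcing / 5.5              # warming implied by the comment's 5.5 figure

print(f"forcing since 1900  ~ {forcing:.2f} W/m^2")    # prints 1.04
print(f"blackbody response  ~ {planck:.2f} W/m^2/K")   # prints 5.42
print(f"implied warming     ~ {delta_t:.2f} K")        # prints 0.19
```

The Stefan–Boltzmann derivative 4σT³ at 288 K comes out near 5.4 W/m²/K, close to the 5.5 the comment uses, so the 0.19 K result follows directly from its two input assumptions.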

    • PA: “For some reason climate science adamantly refuses to do things the right way.”

      Now, I wonder why that could possibly be…

      Cui bono?

    • @PA: The basic problem is we don’t have real data.

      Where “we” is everyone claiming rising CO2 presents no problem to the biosphere.

      Those who do have “real data” beg to differ.

      • Vaughan Pratt | November 10, 2015 at 3:25 am | Reply
        @PA: The basic problem is we don’t have real data.

        Where “we” is everyone claiming rising CO2 presents no problem to the biosphere.

        Those who do have “real data” beg to differ.

        Shouldn’t beg.

        CO2 is going to make it warmer. I don’t claim it won’t.

        I don’t claim more CO2 is no problem. I have consistently claimed that more CO2 is a solution.

        Without good data on how much a PPM of CO2 makes it warmer and how many more PPM there are going to be – discussion of CAGW becomes a futile exercise.

        The only hard data we have is 22 PPM = 0.2 W/m2.

        That precludes CAGW presuming that god-awful increases in CO2 don’t happen.

        The 40 year track record is that the increases are around 2 PPM/Y and that when the emissions increase is 0 (1988-1994) the rate of increase decreases.

        That tends to rule out god-awful increases in CO2 happening.

        Will it get warmer? Survey says yes

        Will it get a lot warmer? Survey says no.

        I’m sort of interested in what happens during the 2016–2017 La Niña. The rate of CO2 rise should drop below 2.0 PPM/Y. If it reaches 1.5 PPM/Y, then any worry about high CO2 levels is overblown. CO2 emissions have increased over 60% since the last time it was under 1.5 PPM/Y (1999).

      • Is it Good Oh’ Henry, or is it the biome biting the bar?
        ===============

      • Those who do have “real data” beg to differ

        More cherry picking Vaughan.
        More cherry picking, Vaughan. I have real data, and I can prove that there’s no loss of cooling, which is really what CO2 is supposed to be changing. Temps are not the measure; there are many reasons temps can change, but a loss of cooling is the effect CO2 is supposed to cause, and there is zero sign of this since at least 1940. In fact, there is slightly more cooling than warming, from real data. It is a lot more real than the junk you’re claiming is real; that data is mostly the output of some complex algorithm that makes up data based on the programmer’s opinion of what missing data would be if it were actually measured. All of the published series do this, and guess what, they all come up with a made-up answer.

        But don’t try to pass it off as real data, it isn’t.

  50. What is perhaps most interesting to me is some of the inconsistencies between the satellite and surface data. That should be causing people to really question their data. Also, does anyone seriously say that GCMs have been skillful over the last 20 years or so, even for global average temperature? Is there really an issue with the forcing numbers in CMIP5? Perhaps it’s time to rerun the old models with the corrected forcings.

    • They cannot run any models with corrected forcings because they do not really understand climate. It snows more when earth is warmer and they do not even have a clue.

    • It’s not the normal kind of data that is inconsistent.

      Temp data is there for everyone to see: in every newspaper, in the literature and paintings, in the historical records, from the cricket games and the horse race meetings and the agricultural auctions and yacht clubs, religious festivals, shipping records, honey makers, mining records, etc., stretching back for 2000 years; and the ice cores (if Lonnie Thompson would ever share them); and stretching back further with soil and seed samples etc. (so long as Mann doesn’t ruin everything with his hockey stick historical temp revisionism); and everywhere else.

      The “forcing” numbers now are fictional and need to be ‘corrected’, as you say, but how can they be when Gavin Schmidt and Karl and others are busy massaging the normal data until it doesn’t look like any kind of record at all? Rutherglen in Australia is a bit of a poster child for blatant temp record fudging.

      Start with the temp record, I reckon. Delete HadCRU, delete BEST, delete NASA, NOAA, the lot. It’s all been fiddled so much since the late 1980s by partisan and deeply unprofessional players that it’s not real at all now.

      Crowdsource a collection of actual recorded temps, using more than just buckets and Stevenson screens but also the historical records, like TonyB’s approach.

      I’ll wager we’ll resurrect the Roman Warm Period, the Medieval Warm Period and the Little Ice Age, and finally the lie of the hockey-stick historical temp record, on which the whole global warming/climate change hysteria is built, will be obliterated.

    • David Young,

      Why would anyone presume that surface temperature and tropospheric (satellite) temperature data should be consistent?

  51. The White House now admits that the more immediate danger to modern civilization may be vulnerability to a solar EMP (electromagnetic pulse):

    http://www.shtfplan.com/headline-news/white-house-prepares-for-emp-that-would-wipe-out-power-render-cellphones-and-internet-useless_11062015

    • richardswarthout

      omanuel

      EMP protection is easy; don’t believe it’s not being done everywhere, and scary EMP headlines are probably the result of somebody trying to make a buck.

      Richard

  52. Pingback: Climate Apocalypse Crusade Stymied by Inconvenient Hiatus | al fin next level

  53. stevefitzpatrick

    Judith,
    “So it is premature to declare the hiatus dead.”

    Sure, but it’s not too early to declare that the stupid GCMs are wrong. The hiatus is irrelevant; the divergence of model projections from reality is what matters.

    • Isn’t the hiatus the reality from which the model projections have diverged?

    • In the General Franco sense of dead, the hiatus can still fog a mirror… barely. Didn’t end well for him; not going to end well for champions of the hiatus.

      • The alarmist clowns are desperate for the warming to resume. They wonder why they are not taken seriously. Funny.

      • JCH: “Didn’t end well for him; not going to end well for champions of the hiatus.”

        You are diametrically incorrect on that.

        It is you Alarmists for whom it is not going to end well, as you are going to find out, hopefully to your cost.

      • @DM: The alarmist clowns are desperate for the warming to resume. They wonder why they are not taken seriously. Funny.

        Don, sorry to bother you but would you mind quantifying that?

        Suppose that during 2015-2020 HadCRUT4 climbed at a rate of 5 degrees per century. What would be your position then on whether the warming had “resumed”. Would you agree that it had, or would you continue to deny that there had been any “resumption”?

      • Vaughan Pratt: “Would you agree that it had, or would you continue to deny that there had been any “resumption”?”

        Dear me, if you set fire to that strawman, there really might be some risk of Global Warming.

        You’re getting desperate, aren’t you?

      • Vaughan Pratt,

        On the other hand, suppose that the Earth had cooled for four and a half billion years . . .

        Cheers.

  54. My understanding is that the United States has a data quality law that applies to data acquired, compiled and archived by US Government agencies.

    This alone would bring data processing within the purview of the Congress.

  55. Obama said: “Frankly, approving this project would have undercut [our] global leadership, and that’s the biggest risk we face: not acting. Because ultimately, if we’re going to prevent large parts of this Earth from becoming not only inhospitable but uninhabitable in our lifetimes, we’re going to have to keep some of our fossil fuels in the ground rather than burn them.” Does anyone at CE think that “large parts of the world will become uninhabitable in our lifetimes”? I’m 73, but even the youngest here are unlikely to see devastating consequences in their lifetimes if warming resumes. Irrationality prevails.

  56. The emperor has no clothes on.

  57. More context here. The actual NOAA adjustment was between one tenth and one twentieth of a degree C, and that broke the pause, leading to all this furor. It shows more about how statistically delicate that pause was than anything else.

    • More context here too. A pause of 18+ years, i.e. a very tight range with no statistical warming, has been going on, and the Chinese were emitting 900 million+ extra tonnes of CO2 over the same period. Karl et al. 2015 come along and convince only armies while the rest of us normals laugh at it for its transparent partisan hackery.

      • ‘convince only warmies’ of course, darned lousy typing.

        I am hoping Prof Curry will mention the Chinese warming adjustment that made the press this week. It seems to me to be rather a big deal. How can 900 million+ more tonnes of CO2 in the atmosphere since 2000 than previously thought, corresponding with the pause in temp rises, not be a substantive setback for warmy believers?

      • No statistical slowing in the warming either. The post-1998 time period is too short relative to the noise to conclude anything, which is why it hinges on only a small fraction of a degree.

      • Very credible, yimmy. We are impressed. What can we say? You win. No need to come back and bombard us with the same crap again tomorrow. Thanks, good bye.

      • Smith could be seriously embarrassed if it comes up in a hearing that his “investigation” is all about a twentieth of a degree. Break it to him gently before it comes to that. It is not a good headline.

      • Burbles Jim D:

        Smith could be seriously embarrassed if it comes up in a hearing that his “investigation” is all about a twentieth of a degree. Break it to him gently before it comes to that. It is not a good headline.

        How could it be in the least embarrassing for Rep. Lamar Smith (R – Texas) to publicly nail down some kind of confirmation that the great gaudy “We’re All Gonna Die!” climate catastrophe is “…all about a twentieth of a degree” (if that’s in fact what the data – being withheld from congressional scrutiny by federal government employees – should show)?

        Somebody had best break it gently to Jim D. Such hard-held delusions lead so commonly to catastrophic emotional decompensation when the patient with such a psychosis is disabused thereof.

        I suggest that sedative hypnotics be held available. Midazolam, mayhap?

        Strictly speaking, pure science is about the search for the genuine causes of observable phenomena; politics is about gaining the authority to pursue favored outcomes. The method of science entails tolerance of and relentless but reasoned criticism of all views, including one’s own; the tools of politics include what urbanist Jane Jacobs calls “deception for the sake of the task.” Real science is about critically examining premises; pure politics is about defeating your opponent.

        In politics, you focus on that part of what is seen that supports your position, while in science, you try to get at the part of reality that is often not seen.

        — Sandy Ikeda (19 October 2015)

      • That is not what he is doing. He is going after NOAA for a fractional adjustment that went in a direction he did not like, ignoring all the larger previous instrument-related adjustments in SST that went in the other direction that Karl also showed in his paper, and drew no attention at the time.

      • Persists Jim D:

        That is not what he [Lamar Smith, R – Texas] is doing. He is going after NOAA for a fractional adjustment that went in a direction he did not like, ignoring all the larger previous instrument-related adjustments in SST that went in the other direction that Karl also showed in his paper, and drew no attention at the time.

        And the whiff of butyl mercaptan is somehow NOT supposed to give the reasonable man suspicion that there’s a skunk in the woodpile?

        If the NOAA “fractional adjustment” is defensible, then the methods by which said “adjustment” had been made cannot possibly prove detrimental to the NOAA officials – federal employees all, working diligently (perhaps even honestly?) on the toil and treasure of the hard-pressed U.S. taxpayers – and they have nothing to fear about coughing all that stuff up to Rep. Smith and the other members of his committee.

        You want the proud, diligent, consequent employees of the NOAA to show Rep. Smith and the whole wondering world just how solid is their “science,” don’t you, Jim D?

        You don’t?

        Yeah, we’re getting that, aren’t we?

        The less scrutiny federal agencies receive, the more absurd their rulings become.

        — James Bovard (2012)

      • They gave him the data so he can check what they did. Do they have to redo the calculations with the data in front of him too, or what would you have them do? It is just ridiculous to question a scientific study without checking the numbers first. Whenever a skeptic is proven wrong, people come to the table with actual numbers to prove it. It happens quite a lot. That is how science progresses.

      • Whimpers Jim D:

        They gave him [yet again, Rep. Smith of Texas, skunk-sniffer] the data so he can check what they did. Do they have to redo the calculations with the data in front of him too, or what would you have them do? It is just ridiculous to question a scientific study without checking the numbers first. Whenever a skeptic is proven wrong, people come to the table with actual numbers to prove it. It happens quite a lot. That is how science progresses.

        Everything requested by Rep. Smith (as chair of his committee) is work product done on government time and at government expense. All of it was – presumably – necessary for the completion of the final report published by Karl et al (if it wasn’t, then we’re looking at irresponsible waste at best, are we not?) and therefore bears upon the character, quality, and purpose of this redolent promulgation. All of it is subject to discovery, as Rep. Smith – a lawyer – knows full well.

        So do you, Jim D. Which is doubtless why you’re writhing in agony.

        You really don’t want those materials – with the ever-increasing presumption that they’re evidentiary of actions concerted to accomplish the knowing utterance of falsehoods and therefore the perpetration of malfeasance in public office – subjected to detailed dissection, do you?

        I had no faith in shaming the perpetrators. I preferred to awaken the victims.

        — James Bovard (2012)

      • Evidentiary of what actions, for example? What are they looking for if it is not to be found in the data? How do ship and buoy data figure into their conspiracy theory?

      • Flop-sweats Jim D:

        Evidentiary of what actions, for example? What are they looking for if it is not to be found in the data? How do ship and buoy data figure into their conspiracy theory?

        “What are they looking for”? Why, evidence of criminal mens rea, of course. “The mind on the thing” in the accomplishment of malfeasance in public office, comprised (as I’d elsewhere said) of the knowing utterance of falsehoods as if those promulgations were the reliable products of government work. Duplicity aimed at the violation of the private citizen’s unalienable individual right to a property in his person, his labor, his money, his real estate, etcetera, for this is the explicit and implicit purpose of all this “Climate Panic” fraudulence.

        Only a liar or a sucker could voice any other opinion of this preposterous bogosity at this late date.

        So which are you, Jim D?

        Hm. Does it really matter?

        At the very least, methodological failings in this work of Karl et al may not be efficiently, quickly, and accurately discerned from nothing more than “the data” (which is reasonably suspected to have been purposefully corrupted, mind you).

        Y’know, one of the reasons why I came to suspect that this “man-made global warming” yammer was about as solidly rooted as Birnam Wood was the wonderful lack of methodology papers in the “consensus” literature.

        Anyone who’s done academic publishing in the sciences and understands the value of “baloney-slicing” in leveraging the fruits of research for maximum fattening of the curriculum vitae knows how well-written methodology papers can be readily accepted in peer-reviewed journals, providing as they do insights which can guide colleagues and students in the replication of your study’s investigations. Beyond that, if your methods are genuinely reliable (and they’d better be, hadn’t they?) the work you’d invested in devising and validating them is yet another valuable addition to the fund of knowledge in your discipline.

        So whyever in hell weren’t these “climate catastrophe” puckers publishing veritable reams’ worth of methodology papers?

        Something dodgy about the ways in which they’d magick’d up their “We’re All Gonna Die!” policy recommendations?

        The pejorative dimensions of the term “conspiracy theory” were introduced into the Western lexicon by CIA “media assets,” as evidenced in the design laid out by Document 1035-960 “Concerning Criticism of the Warren Report,” an Agency communiqué issued in early 1967 to Agency bureaus throughout the world at a time when attorney Mark Lane’s Rush to Judgment was atop bestseller lists and New Orleans DA Garrison’s investigation of the Kennedy assassination began to gain traction.

        — Professor James F. Tracy (3 September 2015)

      • So, they publish a paper on ship and buoy data and show why this leads to a trend bias, and Smith automatically assumes they are hiding something because he doesn’t like their results. He doesn’t like government scientists in general (see the EPA) who keep coming up with inconvenient results for his stakeholders. I can see how this makes him act desperately. Emails, yeah, that’ll show them for doing their inconvenient science? That’s the ticket. It’s his only tool because he can’t show what is wrong with the paper that started it.

      • Sputters Jim D:

        …Smith automatically assumes they are hiding something because he doesn’t like their results. He doesn’t like government scientists in general (see the EPA) who keep coming up with inconvenient results for his stakeholders.

        Lamar Smith being a Representative in the “People’s House” of the U.S. Congress (elected by the voters of Texas’ 21st District), how is he NOT supposed to be sharply scrutinizing federal “government scientists” (sic) perpetrating an arguable fraud at the behest of a presidential administration avowedly hostile to the interests (not to mention the civil rights) of his constituents?

        Tsk. How like a “Liberal” fascist Watermelon Social Justice Warrior to use “stakeholders” as an invective term.

        In the sciences, putzele, it is common for people at all levels of knowledge and expertise not to “like” aspects of an investigation’s results. Also methods, data, conclusions, and recommendations. Real scientists expect such responses, and are prepared to defend their work in detail, which necessitates the provision of ALL their work on said investigation, freely and without any effort at concealment.

        After all, even the most conscientious investigator can be wrong, and if he’s in error, the process of challenge – initiated by people who find something not to “like” in his work – is a means to test the validity of his efforts. To the extent that he is an honest, conscientious, honorable human being – and a scientist in fact as well as in name – he welcomes this.

        That’s why these “government scientists” you’re rabidly and incompetently scrambling to defend – in a Web log forum; ridiculous! – don’t meet the standards of comportment which gives anyone reason to credit them as “scientists” at all.

        Scientists don’t weasel. These weasels are not “scientists.”

        Neither, of course, is a weasel like you.

        If your thinking is fact-based or empirical, you do not begin with an idea or a narrative. You begin by collecting data. Then you formulate a hypothesis and run an experiment to test it. If the results prove the hypothesis, well and good. If they do not, you can dispose of the idea.

        Stuart Schneiderman

      • If a paper is wrong, you find it out with the data, not via emails. Did they ask Hansen for all his emails for his climate papers? No. Why not? How about Spencer when he got the satellite data wrong? Those might be interesting emails too. Where do you draw the line? How about public university academics? Congressmen? Are they protected? I don’t advocate digging into emails for no better reason than that you don’t like a paper. Read your quote at the end. They had a hypothesis that changing instrumentation changes a trend, and tested it with data. End of story. You seem angry. Calm down and think. This is just a paper, one of thousands. Don’t take it personally.

      • Blathers Jim D:

        If a paper is wrong, you find it out with the data, not via emails. Did they ask Hansen for all his emails for his climate papers? No. Why not? How about Spencer when he got the satellite data wrong? Those might be interesting emails too. Where do you draw the line? How about public university academics? Congressmen? Are they protected? I don’t advocate digging into emails for no better reason than that you don’t like a paper.

        Those “emails” being sought in this congressional investigation were work product created on federal government equipment by EMPLOYEES of the federal government conducting their activities on the federal payroll, with foreknowledge that everything they undertook in the way of communications on those email accounts were subject to discovery and review by superior officers of that federal government (meaning Rep. Smith and his congressional committee) anytime said “emails” were demanded.

        So are any of your beloved “government scientists” permitted in law to defy the demands of the Congress that those communications be produced in their entirety?

        Yammering incessantly about “the data,” you stinking weasel, is irrelevant. Said “data” is regarded with a high index of suspicion to have been purposefully corrupted, and it is reasonable to conclude that those “emails” being avidly (indeed, criminally) withheld from congressional scrutiny – savvy “subpoena duces tecum,” cupcake? how about “contempt of congress”? – will contain exchanges formulated to concert methods of “adjusting” said “data” in a fashion calculated to produce politically predetermined outcomes which comprise nothing more than elaborate lies.

        Jeez, if your “government scientists” are NOT lying their asses off in this and other promulgations of the bought-dog propaganda grinders Obozo has made of the formerly scientific agencies of the executive branch, what the hell are you afraid of? Why do you writhe and jiggle and squirm and squeal like vermin in a rat trap?

        “Where do you draw the line? How about public university academics? Congressmen? Are they protected?”

        The Congress – being the lawmaking body in our federal government – has enacted statutes which largely enable members of that body to hold elements of their work product – including communications unrelated to national security concerns – from scrutiny. But as for “public university academics”?

        Whence derive they their compensation packages, their office equipment, their research budgets, their support staff? Government, right?

        So if the state legislature with jurisdiction (and funding responsibility) for that particular “public university” says: Geek! just what the hell d’you THINK those “academics” are in law obliged to do?

        Oh, yeah, I forgot. Jim D has given up thinking.

        It interferes with his Narrative.

        Arguing with liberals is like playing chess with a pigeon. No matter how good you are at chess, the pigeon is just going to knock over the pieces, crap on the board, and strut around like it’s won.

        — Apocryphally attributed to Ann Coulter

      • …and the emails belong to NOAA, so they get to decide.

      • Jim D irrelevants:

        …and the emails belong to NOAA, so they get to decide.

        Ignoring the much-emphasized fact that the National Oceanographic and Atmospheric Administration is an agency of the U.S. federal government and therefore subject – ENTIRELY – to the control of the U.S. Congress.

        In other words, putzi, NOAA belongs to the Congress (as does the rest of the executive branch, our “pen-and-a-phone” Indonesian-in-Chief notwithstanding), so THEY “get to decide,” and you’re sufficiently full of crap to begin composting.

        Jeez, you can’t even confabulate contrafactuality without treading your own foreskin, can you?

        The truth, indeed, is something that mankind, for some mysterious reason, instinctively dislikes. Every man who tries to tell it is unpopular, and even when, by the sheer strength of his case, he prevails, he is put down as a scoundrel.

        — H.L. Mencken, Chicago Tribune (23 May 1926)

      • Am I going to take seriously someone who calls the President that, not once but every time? No. It says something about your belief system, and it is not a good thing. The employer of these scientists is NOAA. The employer owns the emails, which is why they say no. If Congress wants to go to court, they need to be doing that, where they have to show their reasoning. Perhaps they need a special committee for this one-twentieth of a degree adjustment in one scientific temperature series. Perhaps El Niño means the “pause”, such as it was, is now just passé and they need to move on to denying the next rise instead.

      • Blusters Jim D:

        Am I going to take seriously someone who calls the President that, not once but every time?

        Who gives a toss what a specimen like you claims or does not claim to “take seriously” about our Indonesian-in-Chief? Have you not made yourself sufficiently odious and contemptible as a conniver at criminal fraud masquerading as “settled science”? “Barry” Soebarkah – who never did undertake action in law to change his name back to “Barack Hussein Obama II” after being dumped on his maternal grandparents at the age of 10 – was legally adopted by “Lolo” Soetoro (a Suharto regime government thug) to become a citizen of the Republic of Indonesia, and never naturalized as a U.S. citizen in the years subsequent. He’s far more Indonesian than American, if ever he’d been arguably an American to begin with.

        But you continue, like the perseverating psychotic you’re proving yourself to be:

        The employer of these scientists is NOAA. The employer owns the emails, which is why they say no. If Congress wants to go to court, they need to be doing that, where they have to show their reasoning.

        Care to cite statute law to support your idiotic assertion, you hopeless dunce?

        The U.S. Congress is the ultimate authority in law under the U.S. Constitution. The NOAA had been created by the Congress, and is subject in all regards to congressional command. The Congress has no real need “to go to court” except to seek criminal sanctions against the officers of the NOAA for having defied lawful orders to disgorge the materials demanded by Rep. Smith’s committee, acting at the behest of the Congress as a whole in the legitimate discharge of their duties as the legislature of our republic.

        You’re utterly friggin’ hopeless in all encounters with reality at odds with your spavined, sick, twisted, perverted little fantasies, ain’tcha, Jim D?

        How are any of those reading here to take your spoutings on the broke-dick bogosity of “man-made climate change” as anything but desperate “Social Justice Warrior” yammer now and forevermore?

        In the universities, in the churches, in the corporations, in the professional organizations, in the editorial offices, in the game studios, and just about everywhere else you can imagine, free speech and free thought are under siege by a group of fanatics as self-righteous as Savanarola, as ruthless as Stalin, as ambitious as Napoleon, and as crazy as Caligula.

        They are the Social Justice Warriors, the SJWs, the self-appointed thought police who have been running amok throughout the West since the dawn of the politically correct era in the 1990s. Their defining characteristics:

        • a philosophy of activism for activism’s sake

        • a dedication to rooting out behavior they deem problematic, offensive, or unacceptable in others

        • a custom of primarily identifying individuals by their sex, race, and sexual orientation

        • a hierarchy of intrinsic morality based on the identity politics of sex, race, and sexual orientation

        • a quasi-religious belief in equality, diversity, and the inevitability of progress

        • an assumption of bad faith on the part of all non-social justice warriors

        • an opinion that motivation matters more than consequences

        • a certainty that they are the only true and valid defenders of the oppressed

        • a habit of demanding that their opinions be enshrined as social custom and law

        • a tendency to possess a left-wing political identity

        • a willingness to deny science, history, logic, their past words, or any other aspect of reality that contradicts their current Narrative.

        — Vox Day, SJWs Always Lie: Taking Down the Thought Police (2015)

        ((Note: if moderation permits, please delete the previous post and its HTML error.))

      • Smith’s request is just frivolous and unfounded, and NOAA does not have to acquiesce. If court is where it is headed, so be it. If the “pause” still lives to these people, fine. To everyone else it is long gone, if it ever existed, and they are fighting past battles.

      • Jim D: “…and the emails belong to NOAA, so they get to decide.”

        No they do not.

        They belong to the taxpayer, and the representatives of the taxpayer are fully justified in demanding access to them.

        Curious that someone such as yourself is so vehement in denying the rights of taxpayers; do you have a problem with democracy?

      • Jimd

        Never mind about Smith; don’t you think everyone in your Govt or mine would be amazed that the temperature accuracy claimed, and their adjustments, is akin to fairies dancing on the head of a pin?

        They would be even more amazed if they asked their chief scientists what the savage curtailment of co2 will do in terms of temperature reduction. Another statistically meaningless amount that would embarrass the dancing fairies.

        tonyb

      • Indeed, tonyb, the pause never rose to a level of significance. You need to look at 30-year averages to get anything like a stable trend calculation. 15-year trends are all over the place. Just in 2000, the 15-year trend was 3 C per century. The skeptics were silent on it, and still never mention it.

      • JimD

        ‘Just in 2000, the 15-year trend was 3 C per century. The skeptics were silent on it, and still never mention it.’

        Show your work, please

        Tonyb

      • JimD

        You presumably disagree with the politicians going after Exxon in the same way you disagree with the politicians going after NOAA, because my observations of you over the last couple of years is that you are not a man who believes in double standards.

        tonyb

      • tonyb, the 15-year and 30-year trends are here. Notice how in 2000 it peaked at 0.3 C per decade, but the 30-year trend has been stable all through the “pause” and actually since 1980.
        http://woodfortrees.org/plot/gistemp/mean:120/mean:60/derivative/scale:120/plot/gistemp/mean:240/mean:120/derivative/scale:120
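        [Editor’s note] The contrast between noisy 15-year trends and a stable 30-year trend is easy to illustrate with a toy calculation. A minimal sketch, using synthetic annual anomalies with an assumed 0.012 °C/yr underlying trend and 0.1 °C of interannual noise (not the GISTEMP series, and not the woodfortrees smoothing above):

```python
import random

def ols_slope(y):
    """Least-squares slope of y against its index (units per year)."""
    n = len(y)
    xm = (n - 1) / 2.0
    ym = sum(y) / n
    num = sum((i - xm) * (v - ym) for i, v in enumerate(y))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

def window_trends(y, w):
    """Trend in C/decade for every w-year sliding window."""
    return [ols_slope(y[i:i + w]) * 10 for i in range(len(y) - w + 1)]

random.seed(42)
# Synthetic anomalies: 0.012 C/yr underlying trend plus 0.1 C of noise
series = [0.012 * t + random.gauss(0, 0.1) for t in range(60)]

t15 = window_trends(series, 15)  # 15-year trends
t30 = window_trends(series, 30)  # 30-year trends

spread15 = max(t15) - min(t15)
spread30 = max(t30) - min(t30)
# Expect the 15-year trends to scatter far more widely than the 30-year ones
print(round(spread15, 3), round(spread30, 3))
```

        With almost any seed the 15-year window trends swing over a much wider range than the 30-year trends, which stay near the underlying 0.12 °C/decade; that is the sense in which 15-year trends are “all over the place” while the 30-year trend is stable.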

      • Actually, the Exxon case has a smoking gun, which is the rather extensive ICN reporting showing their own documents from the era warning of CO2 projections and how it was only a matter of time before fossil fuels will be restricted. Smith’s thing is just puffery and bullying. No smoking gun.

      • Jimd

        Why have you ignored the 1910 and 1940 trends, beside which the modern trend pales?

        As for Exxon, the only difference is who is bringing the cases forward. Your prejudices are showing, especially as the Exxon material was hardly secretive, was it? But if they want to look at Exxon and NOAA, that is fine, and I am surprised you are making a differentiation.

        tonyb

      • Don’t you mean 1910-1940 pales? Besides which 1910 was a local solar minimum, which skeptics rarely mention, making that a minimum in the long-term series.
        http://woodfortrees.org/plot/gistemp/from:1985/trend/plot/gistemp/mean:240/mean:120
        The difference with Exxon is that despite all they knew, they were funding the early denialist groups that had a mission to sow doubt so that they could keep selling fossil fuels as long as possible. There is no comparison between that lucrative activity and what happens in the glamorous world of looking at buoy and ship data.

      • JIMD

        With your 6.38 comment you have posted a completely different link to the one you linked to a few minutes earlier. Why have you moved the goalposts?

        tonyb

      • The previous one was just the gradient of the latter one. It is the same data, now as a temperature itself. Note that the recent rise is much longer than the highly touted 1910-1940.

      • Jim D | November 7, 2015 at 4:53 am |
        Actually, the Exxon case has a smoking gun, which is the rather extensive ICN reporting showing their own documents from the era warning of CO2 projections and how it was only a matter of time before fossil fuels will be restricted. Smith’s thing is just puffery and bullying. No smoking gun.

        http://www.exxonmobilperspectives.com

        So what? Their accusers lack integrity and are unethical and dishonest. Exxon has answered the accusations which were absurd on their face. The proper response to scoundrels attacking company executives for informing themselves and defending the company’s interests and image is; “talk to the hand”.

        But what the hey let’s explore the accusation.

        They estimated the CO2 from projected oil consumption. That isn’t a smoking gun, that isn’t even a few empty shell casings. Given that coal is driving the CO2 emissions increase it is largely irrelevant. Since 2000 coal use has increased 79%, oil use has increased 13%.


        Further, we are 5 years ahead of their CO2 projection (400 ppm by 2020), yet at less than 0.5°C the actual warming is less than half the projected rise (1°C since 1960). This makes Exxon a bunch of hand-wringing worry-warts.

        Releasing more CO2 into the air is good. We should be doing more of it, about 20% more. More CO2 means more food and fewer animal extinctions. Hopefully the Chinese will continue to help us with this since the US isn’t pulling its share of the load. We really want to get to 500 PPM but between warmunists and the misinformed it doesn’t look like we will get there.

        Accusing Exxon of knowing they were going to benefit mankind in the future isn’t even an accusation. It is faint praise.

        There isn’t any real issue with more CO2 up to about 1000-1200 PPM. The synthetic objection about ocean neutralization can be solved by dumping coal ash into the core ocean – which could provide us with a farmed food source.

        However, if you can provide a link to an Exxon document that looks suspicious to someone who isn’t a rabid ecowacko, I am willing to take a look and reconsider.

      • Looks like you stayed up all night, yimmy. And you have taken quite a beating. Better get some rest. We will wait here for you to return to your carpet bombing.

        What you don’t get about the Congressional oversight investigation is that NOAA insider whistle blowers have given them the goods on “Karl in particular”. It’s going to be very funny when they perp walk “Karl in particular” in front of the committee and he takes the fifth.

        Enjoy your nap, yimmy.

      • Jim D | November 7, 2015 at 7:12 am |
        PA, you can look at the videos on this page and rethink what you wrote.
        http://insideclimatenews.org/news/15092015/Exxons-own-research-confirmed-fossil-fuels-role-in-global-warming

        ICN obviously was frothing at the mouth and it was hard to find anything sensible to respond to. They don’t provide (at least in this document) links and footnotes so I am just going to assume they are mostly lying.

        There were some interesting quotes:
        Still, Black estimated quick action was needed. “Present thinking,” he wrote in the 1978 summary, “holds that man has a time window of five to ten years before the need for hard decisions regarding changes in energy strategies might become critical.”

        Purport:
        1. ICN wants us to believe Exxon knew more about climate in 1978 than the IPCC does today (this is the only way they are culpable).

        2. Exxon believed in 1978 that we only had 5-10 years (1983-1988) to cut emissions.

        3. In the mid 1980s Exxon cut its carbon dioxide research.

        Let’s go through what happened.
        1. One of thousands of researchers comes up with a scare story.
        2. The story is seriously pursued with research.
        3. The story is proven wrong and the research dropped.

        I am unaware of any warming catastrophe happening in 1983. It was samo samo.

        https://en.wikipedia.org/wiki/Nuclear_winter

        What did happen in the mid 80s was that the eco/regressives were pushing nuclear winter to force Reagan into disarmament talks. This messaging would have put paid to the global warming scare at Exxon.

      • PA, Exxon never had any research counter to the mainstream. In the 1980s, they dropped their research like the hot potato that it was. Then they started working on the politicians via funded “think” tanks instead. The science itself was a lost cause from their perspective, but they could still have a chance to further their interests with politicians and money. It’s a very easy track to trace.

      • They gave a Q&A to Smith’s congressional aides, where he didn’t even show up. They gave the data. If there are specific questions, they will answer them, perhaps even with emails if needed, but so far it is just so vague: not even accusations, and nothing about the buoys and ships at the center of it. There is a principle at stake here. Bring NOAA to court if they need to, and then the reason for Smith needing emails will have to be stated to everyone, or he may just withdraw an unfounded request which looks more like partisan political intimidation to the scientific community that stands behind the NOAA scientists (see the AMS letter for one).

      • Jim D – the congressional committee has every right to ask for the emails; NOAA management has every right to seek reasonableness from congress. It’s how the game is played. Unfortunately, when it comes to reason, this congress is sadly lacking.

      • In response to Jim D‘s psychotic thought-blocking on the powers of the U.S. Congress with regard to compelling the NOAA bureaucrats to geek – and right the hellangone now! – on the emails and other work products of their propagandists posing as “climate scientists,” we have JCH writing:

        “the congressional committee has every right to ask for the emails; NOAA management has every right to seek reasonableness from congress. It’s how the game is played. Unfortunately, when it comes to reason, this congress is sadly lacking.”

        Well, no. “NOAA management” – as EMPLOYEES of the federal government – have no “right to seek reasonableness,” any more than a USMC recruit private has a “right to seek reasonableness” when his drill instructor tells him to run around the parade deck with his rifle at high port screaming obscenities at the top of his lungs.

        You’re confusing rights – emphasis on the individual’s negative rights to life, to liberty and to property, which are inherent in every human being – and duties, which is the realm within which a government EMPLOYEE operates in the execution of his job as a functionary in “public service.”

        We’re not discussing rights with regard to “NOAA management” in their criminal refusal to obey legal orders from the U.S. Congress (have you any appreciation of what the expression “subpoena duces tecum” means, JCH?), but rather these malevolent jobholders’ obligations under the rule of law TO OBEY THEIR ORDERS and friggin’ well geek as commanded by Rep. Smith’s committee.

        Ain’t no consideration of “reasonableness” – or your personal opinion of this Congress – in the matter at all.

        You don’t like the U.S. Constitution? Fine. Get up an Article V convention and do something about it.

        …before some audiences not even the possession of the exactest knowledge will make it easy for what we say to produce conviction. For argument based on knowledge implies instruction, and there are people whom one cannot instruct.

        — Aristotle, Rhetoric (350 B.C.)

      • JCH writes–“Unfortunately, when it comes to reason, this congress is sadly lacking.”

        I am not a republican, but what do you think the Congress has done badly with regard to the climate?

      • The email thing has quieted down, and the “pause” is now over, so maybe finally Smith is moving on, or perhaps he is off in search of a clue. It is a pity he did not go to his NOAA briefing because he could have asked some questions there, but clearly questions aren’t his priority at this point.

    • Jim D: “Just in 2000, the 15-year trend was 3 C per century. The skeptics were silent on it, and still never mention it.”

      Absolute, unadulterated horse excrement.

      Stop making stuff up.

      • Nice ripe juicy cherry, Jimbo.

      • Shows how much value a 15-year trend is, doesn’t it? Skeptics, not me, put too high a value on this type of thing, and now you can produce this for them when they talk about the so-called pause. Glad to help.

      • By the way, UAH does it too.

      • YAWN…

      • Jim, for goodness’ sake! There is no way to slice it. We aren’t worried about what the temperature was in 2007. We want to know what it’s going to be like in the future! That’s what all the hand-wringing is about. You show a cherry-picked period in order to claim that other 15-year periods can be equally cherry-picked, but they are not!

        What we are interested in is what temperatures are doing now. By your own reasoning, if the trend was 0.3 C per decade between 1993/94 and 2006/07, and the most recent 15-year trend is virtually 0.0 C per decade, then the trend has declined!

        Hooray! Global Warming is over!

        What you need to show is that CO2 increase and temperature correlate over the period in which the greatest contribution to GHGs was made by man, i.e. over the modern period, and that temperature was increasing by around 0.3 C per decade over the whole recent period in which we were contributing significantly to atmospheric CO2.

        The whole reason we are all here beating ourselves up about all of this is because that patently didn’t happen. Making the point that you could find a period with a 0.3 C per decade rise does not help or support your case.

      • Oh, and I should add, it would be helpful if you could show that temperature trends have been INCREASING, to correlate with our increasing output of GHGs. That would be strongly supportive of AGW.

      • It shows that the 15 year trend is all over the place. In another few years it could just as easily be back to 0.3C per decade where it was as recently as 2007, and I guarantee that the Republicans will have lost interest in it by then. Just looking at its variation over history tells you that pauses are as ephemeral as the political attention span. The 30-year trend is much more stable and has not wavered since 1980 because all the steps and pauses cancel each other in the big picture. There was some wisdom in defining climate as being from 30 years upwards, and it is similar with climate trends.
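Jim D’s point that short-window trends swing around while the 30-year trend stays stable is easy to demonstrate numerically. The following is a minimal sketch using purely synthetic data (an assumed underlying trend of 0.17 C/decade plus Gaussian year-to-year noise; not real temperature records): rolling 15-year least-squares slopes scatter far more around the underlying trend than rolling 30-year slopes do.

```python
# Illustrative sketch with SYNTHETIC data (not real temperatures):
# how much more 15-year trends scatter than 30-year trends when a
# fixed long-term trend is overlaid with year-to-year noise.
import random

random.seed(42)

TRUE_TREND = 0.017  # deg C per year (~0.17 C/decade), an assumed value
NOISE_SD = 0.12     # assumed interannual noise, deg C
YEARS = 150

temps = [TRUE_TREND * t + random.gauss(0, NOISE_SD) for t in range(YEARS)]

def ols_slope(y):
    """Ordinary least-squares slope of y against 0, 1, ..., n-1."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def rolling_slopes(y, window):
    return [ols_slope(y[i:i + window]) for i in range(len(y) - window + 1)]

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

s15 = std(rolling_slopes(temps, 15))  # spread of 15-year trends
s30 = std(rolling_slopes(temps, 30))  # spread of 30-year trends
print(f"spread of 15-yr trends: {s15 * 10:.3f} C/decade")
print(f"spread of 30-yr trends: {s30 * 10:.3f} C/decade")
```

On these assumptions the 15-year slopes wander well above and below the true 0.17 C/decade while the 30-year slopes stay close to it, which is the sense in which “pauses” over short windows are expected even under a steady warming trend.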

      • agnostic, this should be the plot you wanted that is supportive of AGW. It doesn’t get any more obvious than this.
        http://woodfortrees.org/plot/gistemp/from:1950/mean:12/plot/esrl-co2/scale:0.01/offset:-3.25

      • Jim, that shows atmospheric concentrations and temperature which correlate well. It DOESN’T show HUMAN emissions and temperature which don’t.

        That CO2 and temperature correlate is not usually in dispute. That CO2 CAUSES the temp (beyond the radiative effect) is.

        Show me the graph that shows HUMAN emissions (as distinct from the atmospheric increase) against temperature, and then I will get a lot more interested.

      • Accumulated emissions correlate at 99% with the CO2 excess above 280 ppm. There are graphics that show this. Would you change your mind if you saw that? If so, I can go and find it. If not, I won’t bother.

      • “Accumulated emissions correlate at 99% with the CO2 excess above 280 ppm. There are graphics that show this. Would you change your mind if you saw that? If so, I can go and find it. If not, I won’t bother.”

        Of course if I saw compelling contrary evidence I would change my mind. That’s how it ‘changed’ in the first place. You need to show:

        – Human emissions NOT estimated from accumulation in the atmosphere, but estimated from the source, i.e. industrial emissions, cars, electricity production and so forth. Do not bother showing me estimates based on atmospheric accumulation. They aren’t reliable. Manmade emissions have outstripped atmospheric accumulation by a factor of nearly 2:1 in recent times, which hasn’t been adequately explained.

        – You need to show that, over the modern period of 25 years or so, increases in emissions correlate with the atmospheric increase in CO2, and that this in turn correlates with temperature increase.

        So I am looking for an increase in the rate of atmospheric CO2 accumulation in line with increasing human emissions. That would go some way to telling me that we are solely responsible for the increase in atmospheric CO2, and I am looking for an increase in the rate of warming that would support the argument that increasing CO2 with positive feedbacks can explain recent warming.
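For what it’s worth, the correlation check being argued over here can be expressed in a few lines. This sketch is hedged: the emissions and CO2 series below are synthetic stand-ins built from assumed round numbers (linearly growing emissions, a ~45% airborne fraction, ~2.13 GtC per ppm of CO2), not real inventory or Mauna Loa data; its only purpose is to show how a correlation between cumulative emissions and CO2 excess above 280 ppm would actually be computed.

```python
# Hedged sketch: correlating cumulative emissions with CO2 excess
# above 280 ppm. All series are SYNTHETIC illustrations, not data.
import math
import random

random.seed(0)

# Assumed linearly growing annual emissions, GtC/yr (illustrative).
annual_emissions = [2.0 + 0.15 * t for t in range(60)]

# Running total of emissions, GtC.
cumulative = []
total = 0.0
for e in annual_emissions:
    total += e
    cumulative.append(total)

# Assumed CO2 excess above 280 ppm: ~45% airborne fraction, converted
# at ~2.13 GtC per ppm, plus measurement-like noise.
excess_ppm = [0.45 * c / 2.13 + random.gauss(0, 1.0) for c in cumulative]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

r = pearson_r(cumulative, excess_ppm)
print(f"r = {r:.4f}")
```

Note that a high r here is built in by construction, which is agnostic’s underlying objection: a correlation between two monotonically rising series does not by itself establish the causal chain from emissions to concentration to temperature.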

      • Accumulated emissions correlate at 99% with the CO2 excess above 280 ppm.

        RF since the mid-1970s correlates pretty well (including the slowdown in the last decade or so).
        Before then, correlates very poorly.

        Causation? Coincidence?

        Maybe time will tell.

      • Jim D: “Accumulated emissions correlate at 99% with the CO2 excess above 280 ppm.”

        Oh dear, there you go again.

        No they do nothing of the kind.

        Stop making stuff up.

        Oh, and even if they did, how many times do you have to be told that correlation DOES NOT imply causation?

        Are you a bit ‘special’?

  58. John Robertson

    In my opinion we need context for those temperature changes – whether they are ‘pausing’ or not. In most parts of the World the temperature changes by 10 C or more each and every 24 hours. The average person is quite incapable of detecting a change of 0.2 C – and for the very good reason that it does not matter in the least.

    Looking to agriculture: first-class wheat grows right now in Manitoba at an average of 2.5 C and in Western Australia at an average of 19 C. What would a 5 C rise matter in this context, let alone 2 C?

    The 2 C ‘dangerous’ limit was a crude political fix. It has nothing to do with science.

    There is massive evidence of the huge benefits which more CO2 has brought, and continues to bring, to Mankind and Nature alike.

    • Yep.

      Can somebody please try to persuade me that at an average Global Temperature of 287.1K we were (unknowingly) living in Climate Paradise. But that at 287.8K we are in a Vale of Climate Tears. And beyond 289.1K we are doomed to Eternal Catastrophe.

      C’mon guys, get real. These are teensy changes in a big ever-changing system. We’re in angels on the head of a pin territory.

    • I agree, but I think the concern is not so much over the warming but what it might do to extreme events. But since we have clearly survived through warmer periods that were natural, and in fact it appears that those warmer periods were generally beneficial, it’s hard to make a genuine case that that would occur.

      I remember seeing an article on a news program of a scientist showing a bell curve: global warming would mean a shift up of the curve, meaning there would be more area in the hot tail and therefore a greater probability of extreme heat waves etc. (without, of course, mentioning the corresponding reduction in cold spells, which can be equally if not more dangerous). But I am doubtful that is how warming affects climate. How do we know that the bell curve doesn’t change shape because the climate has internal dampeners on excessive heat? I found it hard to believe that it was as simple as shifting a bell curve up a notch…

      • @agnostic2015

        I understand your point. But surely the flip side of shifting the bell curve up a notch (and so supposedly increasing heatwaves) is that there would be a similar reduction in very cold spells.

        And where I live at least (UK), far far more people die of cold than of heat.

        Patrick Moore (ex-Greenpeace) makes the good point that Canada is actually bigger in area than the USA. But has only one-ninth the population.

        It is hard not to think that the colder temperatures have something significant to tell us about where people choose to live and thrive.

        This just in from Minnesota (cold!)

      • “I understand your point. But surely the flip side of shifting the bell curve up a notch (and so supposedly increasing heatwaves) is that there would be a similar reduction in very cold spells.”

        Yes I did mention that. I’m not sure the bell curve applies so literally to climate though, which was what I was getting at.

      • The difference is that the cold spells had been in the climate for centuries, and people had adjusted, while the high end breaks into completely new territory for a region. An extreme hot summer now will be average to below average by 2100 with a 3-4 C rise, which is several standard deviations of that bell curve.

      • Hot spells have been in the climate too. We have a name for them: “hot spells” or “heat waves”. Since average temperatures have been higher in the past (for example the Medieval Warm Period, the Roman warming, the Minoan warming, etc.), clearly they are trivial to survive.

        In general warmth is preferable to cold and easier to adapt to. But in any case I don’t think the climate works as neatly as a simple shifting bell curve moving up and down. It’s simplistic and naive.
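The shifted-bell-curve argument traded back and forth above can be made concrete. This is a sketch under an idealized assumption (summer temperature anomalies exactly normal with unchanged variance; the sigma and thresholds are illustrative, not observed statistics): shifting the mean up by one standard deviation multiplies the probability of exceeding a fixed “3-sigma” hot threshold while shrinking the matching cold tail.

```python
# Idealized normal-distribution sketch of the shifted-bell-curve
# argument; SD and thresholds are illustrative assumptions.
import math

def tail_prob(threshold, mean, sd):
    """P(X > threshold) for X ~ Normal(mean, sd)."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

SD = 1.0     # assumed standard deviation of summer anomalies
HOT = 3.0    # a "3-sigma" hot extreme relative to the old mean
COLD = -3.0  # the matching cold extreme

p_hot_before = tail_prob(HOT, 0.0, SD)        # old climate, mean 0
p_hot_after = tail_prob(HOT, 1.0, SD)         # mean shifted up 1 sigma
p_cold_before = 1 - tail_prob(COLD, 0.0, SD)  # P(X < COLD), old climate
p_cold_after = 1 - tail_prob(COLD, 1.0, SD)   # P(X < COLD), shifted

print(f"hot-extreme probability:  {p_hot_before:.5f} -> {p_hot_after:.5f}")
print(f"cold-extreme probability: {p_cold_before:.5f} -> {p_cold_after:.5f}")
```

On this idealization the hot tail grows roughly 17-fold while the cold tail shrinks by an even larger factor. Agnostic’s caveat, that the real distribution may change shape rather than translate rigidly, is exactly what this toy model cannot capture.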

      • Hot spells of several standard deviations and persistent are more typified by the Dust Bowl. Not so easy, and that gives only a hint.

      • Hot spells of several standard deviations and persistent are more typified by the Dust Bowl

        Problem is, it seems that droughts in the 11th and 15th centuries were more severe and prolonged (read: persistent) than in the 20th.

        http://advances.sciencemag.org/content/1/10/e1500561

        As causation is uncertain (there being no plausible mechanisms), you are left with the almost obvious: random chance, such as Hurst suggested.

        Although in random events groups of high or low values do occur, their tendency to occur in natural events is greater. … There is no obvious periodicity, but there are long stretches when the floods are generally high, and others when they are generally low. These stretches occur without any regularity either in their time of occurrence or duration (Hurst, 1951, §6)

      • JIMD

        I am completely astonished that you could make those comments about hot and cold spells.

        In his book ‘The Little Ice Age’ Professor Brian Fagan notes;

        “The little ice age of 1300 to about 1850 is part of a much longer sequence of short term changes from colder to warmer and back again which began millennia earlier. The harsh cold of the LIA winters lives on in artistic masterpieces… (such as) Pieter Brueghel the Elder’s ‘Hunters in the Snow’ (see Figure 9), painted during the first great winter of the LIA. But there was much more to the LIA than freezing cold, and it was framed by two distinctly warmer periods.

        A modern day European transported to the heights of the LIA would not find the climate very different, even if winters were sometimes colder than today and summers very warm on occasion too. There was never a monolithic deep freeze, rather a climatic see-saw that swung constantly back and forwards in volatile and sometimes disastrous shifts. There were arctic winters, blazing summers, serious droughts, torrential rain years, often bountiful harvests, and long periods of mild winters and warm summers. Cycles of excessive cold and unusual rainfall could last a decade, a few years or just a single season. The pendulum of climate change rarely paused for more than a generation.”

        Do you STILL not believe in climate variability despite the mountains of evidence?
        tonyb

      • Jim D appears to be under the misapprehension that everything, including wild swings, moves with the average.

      • Jim D says

        ‘Hot spells of several standard deviations and persistent are more typified by the Dust Bowl’

        Maybe in the USA. But in the UK and Northern Europe, we call such things ‘a lovely summer’ and they are all too rare.

        Are we teeming millions striving to keep warm to be denied the chance of some good weather because a few recent settlers in Kansas can’t manage their soil correctly?

        Why?

      • And not just the average, but the average anomaly.
        Plus, being that it’s a spatio-temporal average, it’s already twice-abstracted from the peaks and troughs of which the actual temperature data consist.
        But, in Jim D’s world, the two follow each other in lockstep.

      • In Brueghel’s masterpiece
        ‘Hunters in the Snow,’
        though peasants skate upon
        the frozen river, no
        winter wonderland is this.
        Silhouettes of leafless trees
        stand stark against a leaden sky
        that matches matt-grey river.
        Exhausted dogs, hunters with meagre prey,
        peasants laboring on the snow fields,
        each trying to survive the Little Ice Age.

        LA, perhaps France 2003 is closer to home for you. Like that, but even warmer, and that will be just the average summer by 2100. A/c will be needed as a matter of survival.

      • That is, unless you get Hansen’s Greenland meltwater pulse scenario that cools things towards LIA conditions in Western Europe while raising sea levels by several metres. You don’t get to choose. Nature will choose one for you.

      • @jim d

        ‘You don’t get to choose. Nature will choose one for you.’

        That’s probably true. But it’s hardly an original observation. Ever since humanity was born Nature has been choosing our Climate.

        And we’ve had to adapt to it… a task at which we’ve been stunningly successful. There are very few parts of the Earth’s surface that our brains and ingenuity have not enabled us to inhabit, from very cold places to very hot ones, and from very dry ones to very wet ones. We’ve gone from maybe a couple of dozen individuals in East Africa to 7,000,000,000 spread all across the world. We are an adaptable species.

        That civilisation will come to an end if we move from a GAT of 287.1K to 289.2K is frankly a ludicrous proposition, fit only to be made by those who really have lost the ability to see the wood for the trees.

      • @jim d

        ‘LA, perhaps France 2003 is closer to home for you. Like that, but even more warm and that is just the average summer by 2100. A/c will be needed as a matter of survival.’

        A/c has only been invented in the last 100 years. If it is really ‘needed as a matter of survival’, how did our ancestors manage to keep going long enough for us ever to be born? Especially in really hot places like the Middle East.

        Do you know any history at all? 3000 years ago the first ‘modern’ civilisations began in those very hot, very non-airconditioned lands. They survived. They even managed to do a lot of begatting, begetting and smiting.

        Gotta say that I think your unwonted fear of a warmish summer has warped your judgement so far that some of your statements no longer make much sense. That was one.

      • It doesn’t come to an end, but it becomes very costly, and only those who can adapt or migrate will survive, which is mostly a problem for those in the third world. People think about these things. I read this one today.
        http://www.huffingtonpost.com/entry/us-trillions-climate-change_5637acc1e4b0631799132a1b?utm_hp_ref=world&ir=World&section=world

      • LA, you will find that the population density in the desert is very sparse. That is for a reason. It doesn’t support much.

      • Let some Arrhenius drop like the gently acid rain:

        “By the influence of the increasing percentage of carbonic acid in the atmosphere, we may hope to enjoy ages with more equable and better climates, especially as regards the colder regions of the earth, ages when the earth will bring forth much more abundant crops than at present, for the benefit of rapidly propagating mankind”
        ================

      • JD, the net benefit of warming to the biome is always positive. There will always be losers and losses with any climate change, but warming increases the likelihood for maintenance of those adapting.

        You are too afraid of shadows cast large by small, but real disasters.
        =====================

      • LA, yes, there are, through evolution, populations that can take the heat or the cold in the Arctic, but I think you will find that two to three generations will not be enough to evolve that and migration would be more likely. What happened in France in 2003 might have been a nice summer for India, but look at how that played out.

      • Jim D | November 7, 2015 at 7:54 am |
        It doesn’t come to an end, but it becomes very costly, and only those who can adapt or migrate will survive, which is mostly a problem for those in the third world. People think about these things. I read this one today.

        Gee. Let’s look at an excerpt:
        Those gains were calculated by examining the “social cost of carbon,” or SCC, which the study’s authors say includes “lost agricultural and labor productivity, trade and energy supply disruptions, negative public health consequences, ocean acidification, extreme weather events, flooding, wildfires, increased pests and pathogens, water shortages, migration, regional conflicts, and loss of biodiversity and ecosystem services, among others.”

        Ah, as expected the phrase “social cost of carbon” precedes the delusion, the mis-analysis, the zohderisms, and the outright lies.

        I’m not sure the article interfaces to the real world enough to be worth responding to.

        However I am sure that rabid ecowackos really get excited reading this drivel.

      • Jim D:

        What happened in France in 2003 might have been a nice summer for India

        No, it would have been called a heatwave in India as well.
        You clearly have no idea what a heatwave is.

      • PA, on the one hand skeptics want adaptation, but on the other, they don’t want to pay anything for it. Your SCC is your adaptation cost. Take it or mitigate it.

      • Jimd: “Hot spells of several standard deviations and persistent are more typified by the Dust Bowl”

        Do you mean that CO2 driven Dust Bowl? As I recall, it was before AGW kicked in. Not a good example to support your point.

        Maybe you should pick that decade of Max hurricane landfalls in the US – the 1890’s. Oh, wait, that’s not helpful either.

        Maybe high CO2 is suppressing weather extremes. But that won’t serve your purposes, will it?

      • peter3172, heatwaves happen in India too, and they are much worse than the ones France had in temperature terms. One location had 7 days of 40 C, which they get every year in parts of India. Heatwaves are defined in terms of excursions from the average as a measure of how unused to it the local population would be, and it may also affect the ecology. It is not an absolute, but a relative measure.

      • bigterguy, so if the Dust Bowl wasn’t several standard deviations above the climate of the early 20th century, where do you peg it? I am not sure of the point you are making. If the climate warms, do we get more years like that by probability or less, or do they even become the new average?

      • JIMFD

        France in 2003? Good meteorological reasons for it

        http://www.metoffice.gov.uk/learning/learn-about-the-weather/weather-phenomena/case-studies/heatwave

        Nowhere near as bad as the 11 month extended drought and scorching heat of 1539 and 1540 in Europe

        http://www.dailykos.com/story/2014/07/02/1311260/-The-Great-European-Heat-Wave-and-Drought-of-1540

        The link to the paywalled article is within the link. It goes to the journal ‘Climatic Change.’ I have seen it in the Met Office libraries. There are other extraordinary heat waves in the record as well. The modern era is not unique.

        tonyb

      • And the reason we are talking about extremes is that they will be the new normal in the future, with new extremes being somewhat beyond any experienced in the last millennium due to having a head start of many standard deviations on that bell curve.

      • Jim D, a heatwave is not an extreme of otherwise normal weather, it’s an abnormal occurrence caused by an unusual set of meteorological conditions, and is characterised by a long period of continuously high temperature, day and night, without respite.
        You can’t begin to compare it with hot summer weather – it’s a completely different beast.

    • John Robertson | November 7, 2015 at 3:07 am | Reply
      In my opinion we need context for those temperature changes – whether they are ‘pausing’ or not.

      The average latitudinal temperature gradient is about 1°C per 90 miles. Global warming is two-thirds at night, so for a 1°C rise you only have to drive 60 miles further south.

      Drive 120 miles further south this weekend. Look at the ravaged landscape and desolation caused by a 2°C temperature rise. That is what you are doomed to.

    • Jim D | November 7, 2015 at 8:22 am |
      PA, on the one hand skeptics want adaptation, but on the other, they don’t want to pay anything for it. Your SCC is your adaptation cost. Take it or mitigate it.

      Again with the assumptions not factually based.

      Paraphrasing fight club:
      The first rule of sensible climate discussion:
      “There is no social cost of carbon”
      The second rule of sensible climate discussion:
      “There is no social cost of carbon”

      CO2 increases plant growth so there is a “social benefit of carbon”.

      We should be taxing the 3rd world countries for the undeserved benefit they are getting.

      There is an SBC not an SCC.

      Finally, all we need to do to adapt to a nicer climate is to buy fewer winter clothes and blankets, and enjoy paying smaller heating bills. That is a net savings.

      • OK, the economists are in on it too. 2 C, 4 C, all the same, right? Even the optimistic economists see things going downhill after 1 C, and we are already at that point, but you choose to ignore them.

      • Sure, they are in on it. Bought and paid for. I am surprised you made that observation.

        Back when academia wasn’t overrun with progressives and there was still some honest analysis, global warming was thought to be a good thing by economists.

        http://web.stanford.edu/~moore/Boon_To_Man.html
        GLOBAL WARMING: A Boon to Humans and Other Animals

        Thomas Gale Moore
        Senior Fellow
        Hoover Institution

      • Whatever floats your boat. If he gives you the warm fuzzies, go for it with gusto. Ignore inconveniences like the recent Nature paper that showed the opposite based statistically on how world economies do under varying temperatures.

      • The last 1 degree C of warming has given a cornucopic benefit to the biome and the human society. So will the next 1 degree C of warming, and the next, and the next. Paleontology demonstrates no upper limit to the benefits of warming.
        ==========================

      • Jim D | November 7, 2015 at 9:10 am |
        Whatever floats your boat. If he gives you the warm fuzzies, go for it with gusto. Ignore inconveniences like the recent Nature paper that showed the opposite based statistically on how world economies do under varying temperatures.

        It does float my boat. It does give me the warm fuzzies, I will go for it with gusto. I will ignore mindless eco/progressive propaganda.

        If you do have a paper that shows economic disaster from 2°C of warming please provide a link. A paper that breaks down the effect by regions would be preferred.

      • I thought the economists agreed that anything up to a 2°C increase was beneficial, after which there are diminishing returns.

      • Jeff Norman | November 7, 2015 at 1:54 pm |
        I thought the economists agreed that anything up to a 2°C increase was beneficial, after which there are diminishing returns.

        I can’t find a single NSF grant to measure and monetize the benefits of past increases in CO2 and of future increases in CO2. They have funded some plant studies where the scientists tortured the high-CO2 plants so they didn’t grow as well. I believe the information about CO2 benefits is inadequate and that the subject has been insufficiently studied.

        But we can fix this. At least $ 2 billion from the CO2 “mitigation and scaremongering” budget at NSF should be redirected by law into monetary estimates of the past and future benefits of more CO2 via several rounds of grants. The people who report the highest and/or most intangible benefits in the first and second rounds, would be favored for the later grant rounds.

  59. It is amazing how simple semantic games rule climate science and politics. By calling interpretations “datasets” the scientists are using a rhetorical device to convince funders and politicians of their interpretational results. By using the term “records” instead of admitting that what they mean is interpretations covering longer time periods, the scientists are trying to convince others that their interpretations are beyond debate. The only datasets that they really have are what they call “raw data”; the rest is simply interpretations, based on various methods, which are built upon various theories. The thing is, it would hardly convince politicians if they were asked to spend endless billions based on conflicting interpretations by competing research groups. Better to talk about “datasets” and “records” and keep silent about the fact that these keep being rewritten due to reinterpretation, err, “reanalysis”. In other words, the paradigm, err, climate science, is settled: just send us the money to update the “datasets” and “records”. What a nice and secure career as a scientist if you don’t question the paradigm.

  60. It’s not surprising people are skeptical as to the validity of adjustments when you have the alphabet agencies actively searching for warming using a process that requires judgement calls.

    Judgement calls and the desire to find a certain outcome is not a good combination.

  61. Translate this to the surface temperatures and you will see a much longer hiatus:

    http://www.nature.com/ngeo/journal/v7/n3/fig_tab/ngeo2098_F1.html

    • There is some realism in this paper – just a pity that this kind of realism doesn’t find its way into the summary for policy makers, the Oval Office or the papacy.
      “We note that systematic forcing errors in CMIP-5 simulations of historical climate change are not confined to the treatment of volcanic aerosols. Errors are also likely to exist in the treatment of recent changes in solar irradiance, stratospheric water vapour, stratospheric ozone and anthropogenic aerosols. Even a hypothetical ‘perfect’ climate model, with perfect representation of all the important physics operating in the real-world climate system, will fail to capture the observed evolution of climate change if key anthropogenic and natural forcings are neglected or inaccurately represented. It is not scientifically justifiable to claim that model climate sensitivity errors are the only explanation for differences between model and observed temperature trends. Understanding the causes of these differences will require more reliable quantification of the relative contributions from model forcing and sensitivity errors, internal variability, and remaining errors in the observations.”

  62. In a global warming perspective, the surface air temperatures are mostly noise, at least for periods of less than 50 years. I agree with Mosher that it is the longer trends that matter. The warming and the TOA imbalance show up in the oceans. So it is much fuss about noise. With a warming of about 0.9 deg C since preindustrial times, when the earth was cooled down, the air temperatures are increasing. But only about 3% of the warming shows up in the air. So small fluctuations in what happens at the interface between air and water will have strong effects. I should like to know more exactly how air temperatures follow the whole climate budget.

    • Imagine this.

      Imagine you have a theory that says this small build-up of CO2, decade after decade, will eventually OVER TIME result in a SMALL amount of warming (2-3C), and that OVER TIME this warming will emerge from the background noise and be detectable.

      Imagine that is the theory… well, it is the theory.

      NOW imagine a room full of incompetent skeptics who go about looking for this GRADUAL SMALL INCREASE in the shortest, noisiest data they can find.

      jeez

      • davideisenstadt

        imagine this Mosh:
        ECS is really closer to 1.5 degrees Celsius than 2-3 degrees… so when CO2 levels double from their current levels to around 800 ppm in a century and a half or so, the amount of warming will be virtually undetectable.
        Another gedanken for you Mosh…
        imagine that all of the policy prescriptions advanced by the IPCC are enacted, and global temperatures are lowered by about 0.01 degrees Celsius, at the cost of trillions, while poor people throughout the world continue to cook indoors with dried dung, are unable to read or work at night, and can’t refrigerate food, because they still have no electricity.
        jeez.

      • Steven Mosher: “Imagine this…”

        And then you woke up…

  63. It’s clear the argument to debunk the hiatus is on shaky grounds….

    • Shaky grounds for both the hiatus and the estimates of climate sensitivity that are based on the warming between 1970 and 2000.

  64. Which dataset is used by IPCC when trends are claimed in AR5?

  65. Sent by Alan Longhurst via email:

    But it’s not the hiatus that’s the really interesting issue, it’s the odd jump in the 1980s-90s, surely? Nothing like that in the previous 200 years. The black mess is up to 21 lines competing for the same bit of paper…

    https://curryja.files.wordpress.com/2015/11/slide11.png

    • Only in the last quarter of the last century. A magnificent Post Hoc, Ergo Propter Hoc.
      ================

    • That may be just a European regional effect. The US has been cooling since the 1930s as is obvious from the NASA surface temperature trend.

      • http://cdiac.ornl.gov/cgi-bin/broker?id=203632&_PROGRAM=prog.gplot_meanclim_mon_yr2014.sas&_SERVICE=default&param=TMAX&minyear=1886&maxyear=2014

        Most of the US looks something like the above. The whole US is shown below:

        The claim that it is warming would appear to be incorrect. The USHCN/GHCN/etc. computed US trends appear to be overheated. The raw temperatures are not hamburger or chopped steak that you need to spice up to make a burrito.

        Your serve.

      • The missing image for the post above.

      • PA

        Here is the current GISS Land temperature graph dated August 2015

        The annotation is as follows:

        “Annual and five-year running mean surface air temperature in the contiguous 48 United States (1.6% of the Earth’s surface) relative to the 1951-1980 mean. [This is an update of Plate 6(a) in Hansen et al. (2001). The corresponding graph in Hansen et al. (1999) shows a smaller trend, since it is based on data that were not yet corrected for station moves and time-of-observation changes, see FAQ.] “.

        To see why the data were corrected you need to go to the Hansen paper quoted, which is here:

        http://pubs.giss.nasa.gov/abs/ha03200f.html

        tonyb

      • climatereason | November 8, 2015 at 11:38 am |
        PA

        Here is the current GISS Land temperature graph dated August 2015

        Huh? What does a GISS chart have to do with the US temperature trend? GISS cranks out political documents.

        The USCRN is the US temperature trend.

        Perhaps an analogy will help. Someone asks you to post a picture of your pet cow on the web. You get hungry one day, you butcher the cow, grind up one of the haunches, throw it in a pot, brown it, throw in celery and onion and a little garlic, throw in a little water and taco seasoning and let it simmer for a few minutes. Just before you put the filling in your burrito you remember the request, take a picture of the contents of the pot, after crudely arranging it in the shape of a cow, and post it on the web.

        1. GISS does use the US temperature data.
        2. They do plot the data using a solid line with labelled x and y axes, so it looks all chart-like just as you would if you were making a real temperature trend chart.

        However – GISS bears the same resemblance to the temperature trend that the taco filling did to the cow.

      • PA

        Unfortunately the US govt and the IPCC believe that GISS produces data accurate enough to be used for far-reaching policy decisions. They are therefore going to take more notice of GISS – and the graph I posted – than they are of sceptics – such as Steve Goddard – who claim the data has been so manipulated that it can no longer be seen as an accurate record of temperature.

        Those are the facts of life, which is why I continually suggest that claims of unscientific behaviour and manipulation of figures need to be proven in a peer-reviewed paper where the US Govt and the IPCC will take notice of them.

        I have great problems with the temperature data for a variety of reasons but claims of fraud need to be demonstrated to be correct.

        tonyb

      • climatereason | November 8, 2015 at 12:14 pm |
        PA

        I have great problems with the temperature data for a variety of reasons but claims of fraud need to be demonstrated to be correct.

        tonyb

        USCRN is the United States Climate Reference Network. Please look up what “reference” means. At this point we are done with GISS.

        Now why is GISS different from real temperatures? Well, the various US weather stations, with various degrees of physical, instrumental, positional and environmental consistency, take some adjusting. There are a number of choices of which artifacts you remove and which artifacts you add. Dealing with ALW (anthropogenic local warming) and instrument/location changes is a big issue for GISS, but not for USCRN. A temperature trend is an attempted reproduction of the thermal history, much like a painting may reproduce a scene.

        USCRN is a William Bliss Baker, GISS is a Picasso.

    • 1. the PDO, a beast, peaked in that decade.
      2. ACO2’s presence in the atmosphere was high enough to prevent the expected decline that usually follows a PDO index decline
      3. PDO was not able to stall ACO2 warming until it fell into persistently negative numbers after 2005

      Crayons work.

      Same thing is happening right now… last 60 months at +.046C per year… last 48 months at +.065C per year.

      Is every phase of the PDO with persistently negative index numbers accompanied by a divine wind? See Isaac Held’s post. The Kimikamikaze may not come around again for a very long time.

    • Judith,

      I’m not sure where Alan is coming from on this. As you have pointed out more than a few times the ~20 year increase between ~1978 and ~1997 is not dissimilar to the ~20 year increase between ~1920 and ~1940. One is allegedly more than 50% attributable to greenhouse gases while the other was just natural variation.

    • if you take 21 cities and CET you should expect to see odd things.

      plus there is a way to measure “odd jumps” – try it
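
One generic way to measure an “odd jump” of the kind discussed above is a two-sample test over every candidate break point. A minimal sketch on synthetic data – this illustrates the idea only, it is not any commenter’s actual method:

```python
# Step-change ("odd jump") detector: for each candidate break point,
# compare the mean of the series before and after it, and report the
# split with the largest two-sample t statistic. Synthetic data only.
import math
import random

def jump_t_stat(series, k):
    """t statistic for a mean shift between series[:k] and series[k:]."""
    a, b = series[:k], series[k:]
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (mb - ma) / math.sqrt(va / len(a) + vb / len(b))

def find_jump(series, min_seg=5):
    """Return (index, t) of the candidate break with the largest |t|."""
    candidates = range(min_seg, len(series) - min_seg)
    return max(((k, jump_t_stat(series, k)) for k in candidates),
               key=lambda kt: abs(kt[1]))

random.seed(0)
# 40 "years" of noise (sd 0.1) with a +0.5 step at index 20
series = [random.gauss(0.0, 0.1) + (0.5 if i >= 20 else 0.0)
          for i in range(40)]
k, t = find_jump(series)
print(k, round(t, 1))
```

With a step well above the noise level, the detected break lands at or within a point or two of the true jump, and the t statistic is far above the usual ~2 significance threshold.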

  66. Curry,

    If you compare UAH (v6 beta3) and/or RSS with HadCRUT3, you will see how closely the global lower troposphere tracks the global surface, only with generally larger amplitudes, provided you adjust the surface record down by 0.064 degrees from Jan ’98 on. (This needs to be done because of the spurious warming (by 0.09 K) that entered the HadSST2 dataset across the seam following the UKMO Hadley Centre’s change in data sources for their SSTa product in 1997/98; the calibration simply failed (and was never corrected), which is seen very clearly when comparing HadSST2 (and HadSST3) with the other primary global SSTa datasets, see below.)
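
The adjustment described above is, mechanically, just a constant offset applied from a given month onward. A sketch with invented anomaly values – only the 0.064-degree figure comes from the comment:

```python
# Shift a monthly anomaly series down by a constant offset from a given
# month onward. The anomaly values here are made up; only the mechanics
# of the described adjustment are illustrated.
def apply_offset(dates, values, start, offset):
    """Subtract `offset` from every value whose ISO date >= start."""
    return [v - offset if d >= start else v
            for d, v in zip(dates, values)]

dates = ["1997-11", "1997-12", "1998-01", "1998-02"]
values = [0.30, 0.35, 0.55, 0.60]          # hypothetical anomalies, deg C
adjusted = apply_offset(dates, values, "1998-01", 0.064)
print(adjusted)
```

ISO-formatted date strings compare correctly as plain strings, which keeps the sketch dependency-free.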


  67. Let’s wait three years after the peak of the current El Niño and see if a new, higher plateau is emerging.
    By the way: another independent global thermometer is the moving annual difference of CO2 at Mauna Loa.

    estimated LT MSU = Carbon Dioxide Thermometer = 0.23*(dCO2 – 1.53) ± 0.2°C ( Jarl Ahlbeck )

    https://klimaathype.wordpress.com/2012/05/07/the-carbon-dioxide-thermometer-revisited/
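
The quoted Ahlbeck relation is straightforward to evaluate. A sketch using illustrative dCO2 values rather than actual Mauna Loa differences:

```python
# The Ahlbeck "carbon dioxide thermometer" quoted above: an estimated
# lower-troposphere anomaly from the year-over-year change in CO2
# (ppm/yr). The dCO2 inputs below are illustrative, not real data.
def co2_thermometer(dco2_ppm_per_yr):
    """Estimated MSU LT anomaly, +/- 0.2 deg C per the linked source."""
    return 0.23 * (dco2_ppm_per_yr - 1.53)

for dco2 in (1.0, 1.53, 2.0, 3.0):
    print(dco2, round(co2_thermometer(dco2), 3))
```

Note the relation reads a dCO2 of 1.53 ppm/yr as a zero anomaly, so larger annual CO2 increments map linearly to warmer estimated anomalies.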

  68. Suppose that by 2050, the world as a whole somehow manages to reduce total GHG emissions by 80% relative to a 2005 baseline, doing so in a scenario where the rate of progress in achieving worldwide emission reductions starts slowly at first, then gradually gains speed, and then greatly accelerates in the last fifteen years of the thirty-five years that pass between now and 2050. As would be predicted by the climate models which were used for IPCC AR5, what might be the likely range of Global Mean Temperature values being seen in the year 2100 if this worldwide 80% emission reduction scenario occurs?

    • My guess, if it stays at 20% from 2050-2100, is 500 ppm, which is 1.6-3.3 C of warming for 2-4 C sensitivity.

      • There’s your error JimD. Your overestimate of sensitivity.

        Try this: 0.8-1.65 C of warming for 0-2 C sensitivity.

        That’s not alarming.

      • Based on the rise rates of the last 60 years, the sensitivity is at least 2.4 C per doubling, similar to a number Lovejoy gets from data (not models). Anything lower requires an inexplicable source of forcing, that skeptics have yet to name, which also correlates strongly with the CO2 rise.

      • JimD, try this:

        Based on the recently disclosed extra 900+ million tonnes of CO2 emitted by China since 2000, the sensitivity is at most 1.2 C per doubling, similar to Lewis & Curry (which they get from data, not models). Anything higher requires an unsupportable level of forcing that climate alarmists are unable to justify using the 19thC physics and chemistry upon which CO2 warming is based.

      • I call bs on that. Nearly 500 Gt have been added globally since 2000, so you say 1 more Gt makes any difference? Show me the numbers.

      • JimD, I don’t need to show you anything.

        You’re the plaintiff. Prove your damage. You cannot.

        You have mistaken religious fervour and end-of-days alarmism for science. Or, to put it as Prof Curry would, you have mistaken bias and advocacy and the suppression of uncertainty for solid evidence.

      • hidethedecline:
        ” (which they get from data, not models)”

        you realize that Lewis and Curry is filled with and informed by models from TOP to Bottom?

      • yimmy blurts==>”Nearly 500 Gt have been added globally since 2000, so you say 1 more Gt makes any difference?”

        OMG! That’s a lot of CO2. But we got the pause that is killing the cause. Global greening, instead of global warming. Let the good times roll.

        The implication of the higher China CO2 emissions is that the sinks are growing more than predicted and the sensitivity is probably slightly lower. The well-mixed portion of atmospheric CO2 is not increasing as much as would be expected from CO2 emissions, while the very near-surface, unmixed NH atmospheric CO2 concentration is likely higher than the mixed atmospheric concentration would suggest, likely increasing temps in the lower troposphere mid-latitudes in and near China.

        And Jim D would have a very good point regarding the relative size of the change in the emissions estimate – not likely significant globally – if that 500 Gt were for the 2011-2013 time period that the 900 million tonnes is for. But the article says that for that year this would “be an 11 percent increase in emissions [a year for China], he said. For comparison, the International Energy Agency estimated before the revision that China had emitted 8.25 billion tons of carbon dioxide from fossil fuels in 2012.”

        It also says China’s coal consumption was 15% higher than previously estimated for the period 2005-13.

        That’s almost a whole percentage point of global CO2 emissions each year, with later years being higher than earlier.

    • Beta Blocker | November 7, 2015 at 9:38 am | Reply
      Suppose that by 2050, the world as a whole somehow manages to reduce total GHG emissions by 80% relative to a 2005 baseline

      Hmmm. This premise doesn’t make sense.

      The current environmental absorption is 5.7 GT/Y.

      http://cdiac.ornl.gov/GCP/carbonbudget/2014/
      The absorption is increasing with CO2 roughly according to the formulas:
      Aco2_land = 0.0158 * (Xco2-316)+1.70
      Aco2_sea = 0.02469 * (Xco2-280)
      Aco2 = Aco2_land + Aco2_sea

      Now this means that at 500 PPM the environmental absorption will be 10 GT/Y.

      So if we reduce CO2 emissions 0% (zero percent) from a 2015 emissions level, the atmospheric CO2 level will peak at 500 PPM.

      We’ll just use the lowest IPCC ECS setting, 2°C per CO2 doubling. The IPCC ECS minimum of 2.0°C is 50% to 100% too high; Lewis & Curry and a number of other well-known and respected authors have suggested ECS values less than 2.0. The only actual measure of GHG forcing, 22 PPM = 0.2 W/m2, suggests that it is less than 1°C.

      Anyway, computing warming:
      5.35*2*ln (500/400) = 2.39 W/m2.

      At the bottom of the atmosphere, or BOA (5.5 W/m2/K), this means 0.43°C; at the TOA (top of atmosphere, 255 K and 3.7 W/m2/K), this means 0.65°C.

      • PA, this was my original question:

        “Suppose that by 2050, the world as a whole somehow manages to reduce total GHG emissions by 80% relative to a 2005 baseline, doing so in a scenario where the rate of progress in achieving worldwide emission reductions starts slowly at first, then gradually gains speed, and then greatly accelerates in the last fifteen years of the thirty-five years that pass between now and 2050. As would be predicted by the climate models which were used for IPCC AR5, what might be the likely range of Global Mean Temperature values being seen in the year 2100 if this worldwide 80% emission reduction scenario occurs?”

        At the upcoming COP21 climate conference, President Obama will make a hard commitment to reducing America’s GHG emissions 80% by 2050. He could not successfully defend that kind of ambitious target against the inevitable political criticisms unless he were able to say that all other major industrial nations were making the same commitment.

        One possible outcome of COP21 is that while the United States will be making a hard and fast commitment to the 80% reduction target, other major industrial nations will agree to a less ambitious objective of pursuing alternative technologies which might allow for an 80% reduction by 2050, but the 80% reduction target itself will not be considered by those other major industrial nations as a mandatory, binding commitment upon them.

        In defending America’s own hard and fast commitment to an 80% reduction by 2050, the Obama Administration will probably consider the COP21 statements made by other large industrial nations as being ‘binding for all practical purposes’ and ‘the best deal we could get’. The administration will also be using the climate models cited by IPCC AR5 in making its scientific defense of the 80% by 2050 reduction target.

        Post COP21, the question will naturally arise, what would be the predicted range of year 2100 global mean temperatures if the IPCC AR5 models are used and if the 80% reduction by 2050 target is actually achieved on a worldwide scale.

        In his initial response, Jim D offers a quick estimate for what the climate models cited by IPCC AR5 might predict for the year 2100 if the GHG reduction scenario described above were to occur, and offers a cogently stated basis for his quick estimate. We should all be curious to know if any of the IPCC AR5 climate models have been run under a scenario which fits the general description of the one I’ve outlined above; and if so, what the results from those model runs were.

        If the climate models cited by IPCC AR5 haven’t been rerun for an emission reduction scenario similar to the one I’ve described above, then a major hole will exist in the arguments being used by advocates of an aggressive target for worldwide GHG reductions, just by the mere absence of an IPCC modeled outcome.

        PA, the problem with your model is that it doesn’t consider growth of the biosphere. It’s not likely simply a function of CO2 concentration but also of the mass of living plants that survive each winter. Some of the CO2 consumed goes into increasing the plant biosphere, like capital investment. If emissions were to stop, concentrations would fall faster than they rose. When land plants eventually start dying off and decaying, the decline would slow, but because some of it will end up in long-term sinks, like the bottom of the ocean, the concentration decline will still be pretty fast.

      • aaron | November 8, 2015 at 12:56 pm |
        PA, problem with your model is that it doesn’t consider growth of the biosphere.

        Perhaps we need to back up a bit.

        This isn’t a model; it is curve fitting with a loose physics-based justification. I welcome any improvements.

        The formula is based on the actual CDIAC data and would incorporate more plant volume and a higher rate of photosynthesis. To this point the result seems to be a linear increase and it should be close to linear up to 600 PPM.

        The only problem is the Chinese emissions misstatements mean my formulas are 20% too low. I have to wait until CDIAC releases their next global carbon budget to recalibrate.

        When the next CDIAC report is released I will chart all the curves and we can kibitz about how good the fit is.

        Beta Blocker | November 8, 2015 at 11:28 am |
        PA, this was my original question:

        Huh?… Oh, you are serious.

        80% reduction… We would be about 0.1°C cooler in 2100.

        The current absorption is 6.7 GT/Y (with the Chinese underestimate). If you reduce emissions below 6.7 GT/Y, the CO2 level starts to drop.

        As it is, the temperature increase by 2100 is going to be less than 0.3°C if you are reality-based, and 0.65°C (2.0 ECS) to 1.45°C (4.5 ECS) if you use the IPCC numbers, for the business-as-usual scenario.
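
The arithmetic traded in this sub-thread – logarithmic warming per doubling and PA’s linear sink fit – can be checked in a few lines. All constants are the commenters’ own (PA’s fit coefficients, Jim D’s 500 ppm figure); reproducing them is not an endorsement:

```python
# (a) Equilibrium warming at a given CO2 level for an assumed
#     per-doubling sensitivity (Jim D's 500 ppm, 2-4 C exchange).
# (b) PA's empirical land+sea sink fit, solved for the CO2 level at
#     which a constant emission rate is fully absorbed.
import math

def warming(c_ppm, sensitivity, c0_ppm=280.0):
    """Warming (deg C) at c_ppm for a given per-doubling sensitivity."""
    return sensitivity * math.log(c_ppm / c0_ppm) / math.log(2.0)

def absorption(x_ppm):
    """PA's land + sea sink fit, GT/Y, exactly as given in the comment."""
    land = 0.0158 * (x_ppm - 316.0) + 1.70
    sea = 0.02469 * (x_ppm - 280.0)
    return land + sea

def equilibrium_ppm(emissions_gt):
    """CO2 level where the fitted sink equals a constant emission rate."""
    slope = 0.0158 + 0.02469       # the fit is linear, so invert directly
    intercept = absorption(0.0)
    return (emissions_gt - intercept) / slope

# Jim D's 500 ppm figure for 2 C and 4 C sensitivity
print(round(warming(500, 2), 1), round(warming(500, 4), 1))
# PA's claim: ~10 GT/Y of emissions is balanced near 500 ppm
print(round(equilibrium_ppm(10.0)))
```

The fit being linear in concentration is what makes the 500 ppm “peak” fall out of simple algebra rather than a simulation.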

  69. If non-CO2-emissions-related paleo-history is being repeated (a plausible intrinsically driven scenario), we will get a bit warmer (over maybe decades, not just years), fueling even more nonsense over the tiny anthropogenic portion of total CO2 ppm being responsible, and thus causing those in power, or those who want to be in power, to make my life difficult.

    Which is why I am quite content to be planted in a far removed and isolated corner of Oregon where I can live in relative peace from the madding crowd (though that in itself has its drama).

    So…we are screwed by stupidity. An oft repeated Earthly condition.

    • I never cease to be amazed at the sort of morons voted in by the other morons.

    • That helps to explain why there are so many of us around the world.

    • I don’t know Pamela,

      I live in Oregon and state politics are well along the path of screwing up the state. Oregon used to be a state filled with moderates who believed it was better to find areas of agreement and work to accomplish something. No more. The two parties are controlled by those furthest from the center. It’s embarrassing to be a Republican at times. However, I’m not stupid enough to vote Democrat.

      (Though I did vote twice for David Lee as my Congressman.)

  70. Beyond the “hiatus controversy”, the real issue raised by Dr Curry’s post is the validity of the data sets.
    What really matters is not only the data themselves, but more especially the corrections that have been applied to those data. And the inconvenient truth is that data sets, especially those dealing with surface stations’ temperatures, are highly questionable and indeed unreliable, due to unjustified adjustments.

    Two thirds of the temperature anomalies actually result from data corrections and not from raw measurements.
    This means that the 0.75°C warming “observed” since 1850 is in fact composed of a 0.25°C actual warming plus a 0.5°C positive (warming) correction…

    With the HADCRUT4 data series, the Hadley Centre has introduced new adjustments compared to the previous HADCRUT3 data series:
    http://www.woodfortrees.org/plot/hadcrut3vgl/from:1970/mean:60/offset:0.025/plot/hadcrut4gl/from:1970/mean:60

    Curiously, the corrections are always in the warming direction… But where are the justifications?
    Has anyone assessed the validity of the HADCRUT4 adjustments compared to the HADCRUT3 ones?
    I guess the answer is unfortunately that nobody knows, except those who have defined the adjustments.

    Moreover, the data adjustments appear to be constantly and obviously “fluctuating” over time, and indeed corrupted.
    When looking at the US temperature record as published in Hansen et al. 1999 (graph, fig. 6):
    http://pubs.giss.nasa.gov/docs/1999/1999_Hansen_etal_1.pdf
    http://www.giss.nasa.gov/research/briefs/hansen_07/
    ● The warmest year is 1934.
    ● In this graph, 1998 ranked only 5th, after 1934, 1921, 1931 and 1953…

    The original data were also available at the following address, but NASA deleted the file at the beginning of 2015…
    Guess why…
    http://www.giss.nasa.gov/data/update/gistemp/graphs/FigD.txt

    In Hansen et al. 2001, under the pretext of a “time of observation debiasing” (reaching up to +0.15°C), new adjustments made 1998 nearly tie with 1934.
    This situation was maintained up to 2007.

    http://icecap.us/images/uploads/NEW_RANKINGS.pdf

    In 2007, NASA GISS made a fruitless attempt to have 1998 oust 1934 as the hottest U.S. year.
    In [Link]
    The “trick” was discovered by McIntyre and NASA had to step back.

    But the record published in 2012 finally reached the objective of ousting 1934 as the warmest year in the US:
    http://data.giss.nasa.gov/gistemp/graphs_v2/Fig.D.txt
    Compared to the 2000 publication:
    – the 1998 average temperature anomaly has been adjusted by +0.35°C
    – the 1934 average temperature anomaly has been adjusted by -0.21°C
    NASA also deleted those inconvenient data, but the resulting curve can be seen in Hansen et al. 2010.

    Looking at individual surface stations’ data, one can also observe significant and questionable evolutions of the adjustments.
    A few examples of how to hide the inconvenient truth that temperatures were warmer in the past, despite a limited anthropogenic signature:
    Station Data: Reykjavik (64.1 N,21.9 W)
    – Old adjustments: the 30s are clearly warmer than the current period.
    – New adjustments: the current period becomes much warmer. But why?

    The data manipulation is even more obvious and significant for Capetown Airport (33.9 S, 18.5 E) in South Africa.
    – Old adjustments show a typical W profile where the 30s are clearly warmer than the current period.
    – New adjustments make the W shape disappear and give place to a quite constant warming, with the current period much warmer.

    This is not cherry-picking: such examples, all extracted from the NASA GISS database, can unfortunately be multiplied.

    Conclusion :
    Temperature data sets are manipulated and corrupted by questionable adjustments and nice “tricks” whose aim is “to hide the decline”, as per the famous Phil Jones Climategate email.
    When the observational records do not support the AGW consensus, modify the data to make them better fit the model outputs… and then you can claim that the models are duly validated and right…
    That’s climate junk science; indeed, that’s not science.

    • Nice work, Eric. That is a presentation that should be given when the Congressional science oversight committee drags in the NOAA boys and girls, “Karl in particular”, for a grilling.

    • the problem with your conspiracy theory, eric, is that the methods and the source code that implement them are public. Others, including some here, have crunched the numbers and found the same result.

      you are like the people crying about chemtrails. They see a contrail and don’t want to believe it’s a contrail. You see a data adjustment and don’t want to believe it’s legitimate.

      • Curious George

        What exactly makes an adjustment “legitimate”? Are adjustments a product of statistics or of wishful computing?

        After discovering that “anomalies” are computed against a sliding “base”, I tend to mistrust much of the IPCC’s results. And the models don’t even have the latent heat of water evaporation right.

      • “What exactly makes an adjustment “legitimate”?”

        Creationists say the same thing about radiodating.

        Yawn.

        Some people just don’t want to believe the data is adjusted legitimately. They WANT to believe there’s a conspiracy, because if there isn’t they won’t be able to deny what the data says!

      • nebakhet try this:

        some people just don’t want to believe that the temp data is adjusted illegitimately. They WANT to believe the climate scientists doing the adjusting aren’t alarmist unprofessional hacks because if they admitted that they were hacks they’d have to admit the temp data show a hiatus in warming.

        There is no conspiracy theory.
        Just inconvenient facts.
        GISS LOTI and HADCRUT4 adjustments always go in the warming direction. They introduce warming biases to “hide the decline”,
        because the pause, or so-called “hiatus”, would confirm that natural variability is powerful enough to thwart the supposed human influence, and would thereby formally rebut the AGW theory.

        This is exactly why the hiatus controversy is so “hot” and crucial, and why NASA GISS, NOAA and the Hadley Centre put so much effort into “killing and burying” the hiatus, which threatens one of their main reasons for being.
        Unfortunately for them (hopefully for us), these efforts are formally contradicted by the RSS and UAH satellite data.

      • Curious George

        Dear nebakhet: So you don’t know what makes an adjustment “legitimate”. Welcome to the club.

    • richard verney

      Perhaps this should be provided to the Senate Committee who are investigating the NOAA adjustments. Not directly relevant to NOAA, but generally germane to the issue of temperature adjustment by tax dollar funded Government institutions.

      • Data falsification by tax-dollar-funded government institutions is actually not the main issue. The main issue is that your (US) and my (French) governments, like many others, are using those biased data to put in place very expensive action plans (more than 100 billion dollars per year), in order to treat the false issue of anthropogenic global warming.

        When I see that my (French) government has paid about 200 million euros for the organization of COP21…
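
On the question raised further up this thread about anomalies being computed against a base period: a minimal sketch with invented numbers, showing that changing the base period shifts every anomaly by the same constant without changing the differences between years:

```python
# Anomalies are departures from the mean of a chosen base period. The
# same raw series gives different anomaly values, but identical
# year-to-year differences, under different baselines. Data invented.
def anomalies(years, temps, base_start, base_end):
    """Temps minus the mean over years in [base_start, base_end]."""
    base = [t for y, t in zip(years, temps) if base_start <= y <= base_end]
    mean = sum(base) / len(base)
    return [t - mean for t in temps]

years = list(range(1951, 1961))
temps = [14.0, 14.1, 13.9, 14.2, 14.0, 14.3, 14.1, 14.4, 14.2, 14.5]
a1 = anomalies(years, temps, 1951, 1955)   # early baseline
a2 = anomalies(years, temps, 1956, 1960)   # later baseline
# shifting the base period shifts every anomaly by the same constant
shift = a1[0] - a2[0]
assert all(abs((x - y) - shift) < 1e-9 for x, y in zip(a1, a2))
print(round(shift, 2))
```

This is why a baseline choice changes the numbers on the y-axis but not the trend; a baseline that *slides* between releases, as complained of above, is a separate question of documentation and comparability.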

  71. On the topic of statistically significant warming, it seems the temperature data generated by the satellites would have a better shot at being significant. The reason: the number of samples is huge – 25 GB per year of raw data.

    • Not to mention the vast volumes of atmosphere being sampled…

    • richard verney

      If one is interested in Global data, it has better coverage.

      The land-based thermometers have very poor spatial coverage (the vast majority of the stations compiling the data set are in the US and North West Europe, with large swathes of the globe completely unmeasured), and of course, the stations that form the land-based data set are continually changing over time, so that one is never truly making a year-on-year comparison with the same data set.

      • Mosher – did you ever incorporate altitude in your model?

      • Richard Verney

        If you have evidence to rebut my contention, then please set it out. Please post a map of the station locations for GISS and HADCRUT, and I think you will see that they have very little global coverage and are based mainly upon data collected in the US and North West Europe.

        See, for example:

      • Richard Verney

        PS. The grey areas are not sampled, and the yellow dots have station data going back only to about 1950. It is only the dark red dots that have long records – and look at the spatial coverage of the dark red dots.

        But hey, maybe that is global in Mosher’s eyes! It is not in mine.

      • Put red dots on all the spots on that map where a satellite sensor is at two meters above the land surface and measures the temperature.

      • richard verney

        JCH

        No one is claiming that the satellite data go back before 1979. They do not. The satellite data set is of short duration (and that is one of its issues). The point being made is that it has better spatial coverage compared to the land-based thermometer record.

        The land thermometer data set has very poor spatial coverage. Some of the data goes back to the 1880s, but there are probably less than 20 stations in the Southern Hemisphere that have records back that far. As one can see, for practical purposes, all the long duration data is in the Northern Hemisphere and not well spatially covered in the Northern Hemisphere at that.

        Just look at how little of Canada, Greenland, Africa, central South America,Central Australia, the Russian plains, Indonesia, Antarctica is even being measured today. On no stretch of the imagine could the land based thermometer data set be properly considered to be global. The blue dots do not have a record any longer than the satellite data set, and even the yellow dots may have little more than 50% longer record compared to that of the satellite data set.

        As regards the long-standing data sets, which are mainly concentrated in the US and north-western Europe, it is almost certainly the case that the US is cooler today than it was in the 1930s (the time of the dust bowl, a warm period accompanied by dry conditions), and CET has shown no increase in the rate of warming since coming out of the Little Ice Age. Indeed, the fastest rates of warming are long ago, and many consider the 1530s/40s to be the warmest period in central England.

      • richard verney: “Just look at how little of Canada, Greenland, Africa, central South America,Central Australia, the Russian plains, Indonesia, Antarctica is even being measured today.”

        Irrelevant.

        If climate “scientists” haven’t any data, they have a number of options at their disposal, ranging from making it up, to kriging it, or of course the time-honoured method of using the output of their computer games climate models.

        “The data doesn’t matter. We’re not basing our recommendations on the data. We’re basing them on the climate models.”

        ~ Prof. Chris Folland ~ (Hadley Centre for Climate Prediction and Research)

  72. Why so much focus on the hiatus the last 20 years?
    I can’t immediately see that there is significant warming of the atmosphere in the satellite record at all.

    I can see no obviously significant warming since 1979. No warming of the atmosphere in 36 years. I wonder how the definition of significant is made up. In particular if that definition excludes that there has been no significant warming of the atmosphere during the whole satellite record.

    I would also like to see a model trying to reproduce the atmospheric temperature the last 35 years. A model which is initialized at 1979 and then left untouched. A model run without any model training or adjustments upfront and without any “bias” adjustment afterwards.

    (Yea – I know that the models can’t do that without manual bias adjustments:
    “When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years. Biases can be largely removed using empirical techniques a posteriori.”
    (Ref: Contribution from Working Group I to the fifth assessment report by IPCC; 11.2.3 Prediction Quality; 11.2.3.1 Decadal Prediction Experiments ) )

    • richard verney

      There appears to be only a one off and isolated step change in temperature (not straight linear warming) coincident upon the Super El Nino of 1997/8.

      Prior to that (1979 to 1996) there appears to be a slight positive trend in warming but bearing in mind measurement errors, this trend is not statistically significant. Temperatures cannot therefore be statistically distinguished from being flat, and therefore there was a ‘pause’ for about 17 years.

      Following the Super El Nino of 1997/8, (ie., 1998 to 2015) there appears to be a slight negative trend in temperatures (cooling) but once again bearing in mind measurement uncertainties/errors there is no statistically meaningful trend. Temperatures cannot therefore be statistically distinguished from being flat, and therefore there is a second ‘pause’ also of about 17 years in duration.

      As you suggest, apart from the step change in temperature coincident upon the Super El Nino of 1997/8, which was a natural event and not CO2 induced, there appears to be no evidence of atmospheric warming over the entire 36 years of the satellite record.

      In that record, there is certainly no first-order correlation between CO2 and rising temperatures. The record strongly supports the view that the only warming is a consequence of a natural phenomenon (the Super El Nino of 1997/8, although correlation does not establish causation), but the data set sheds no light on why temperatures have not dissipated since the step change coincident with that event.

  73. We do seem to be going on at length about how best to define a null hypothesis and determine refutation, etc. Yet there don’t seem to be a lot of climate science papers that expressly set out the null hypothesis, let alone register experimental parameters in advance.

    So what good will it do to carefully construct the null and testing protocols if you are allowed to do so after viewing (and even manipulating) all of the data?

  74. All of the criteria listed by Judith are quite arbitrary, given the lack of highly reliable knowledge of natural variability over a wide range of time scales. In particular, the third criterion of a 17-year hiatus is based on an unrealistic characterization of such variability as a “red noise” process.

    • There’s more than one question here. What are the error bars for the various temperature series and are the calculated correctly? Is the trend of any of them statistically significant? Can we tease out a signal definitely attributable to CO2? And on and on.

      • For determination of the long-term climate trend, the error bars of the present temperature series’ short-term trends would be huge because of the absence of reliable population data. For this reason statistical tests, and the isolation of the anthropogenic signal from natural variability, would be highly conjectual.

      • Sorry about the missing “r” and as there’s no clear correlation between CO2 and temperature, this is indeed prescient.

  75. Geoff Sherrington

    It is not possible to talk about a hiatus displayed by ground stations when the ground station data are so suspect.

    For Australia alone, for example, the talk of increases in large, hot, temperature change events, whether they are harmful or beneficial, does not stand up.
    First, here is an analysis of the 5 southern capital cities (which have the longest years of record) that shows how wrong it is to claim recent harm. For those that do, the facts are simply not present.
    http://www.geoffstuff.com/are_heatwaves_more_severe_version2.pdf
    Then, there is this analysis from Melbourne, historically the BOM main recording site and presumably one of the best managed. Sydney shows a similar pattern. Both suggest that the number of hotter days per year, as defined, are decreasing as at the time of this analysis.
    http://www.geoffstuff.com/Melbourne_86071_very_hot_day_each_year.jpg

    • if ground station data are so suspect…..then explain your comment that relies on them being accurate

    • long records have the highest probability of being biased by changes in observation methods.

      you’d think people would get that by now

    • Geoff:

      Nice work in debunking a nascent myth. Despite changes in observation methods and in precise instrument siting (which in the aggregate mean should be trend-neutral), vetted century-long station records provide the most reliable thermometric data available to study variability, natural or unnatural (UHI). Alas, the vested interests of index makers prompt them to deprecate the inflexible readings of relatively rare long records in favor of their ad hoc syntheses from far-more-numerous snippets of data that are readily molded via adjustments and “homogenizations” to suit their misguided aims.

      As always, best wishes to unbiased minds!

      John S.

  76. Geoff Sherrington

    Please use this link for Melbourne:

  77. Pingback: Folkevalgte i USA skal granske suspekte klimatemperaturer | Klimarealistene

  78. Pingback: Weekly Climate and Energy News Roundup #204 | Watts Up With That?

  79. Late to the party, but let’s hope my entrance is at least entertaining.
    Day to Day Temperature Difference
    Surface data from NCDC’s Global Summary of Days data, this is ~72 million daily readings,
    from all of the stations with >360 daily samples per year.
    Data source:
    ftp://ftp.ncdc.noaa.gov/pub/data/gsod/
    Code and reports (chart source data):
    http://sourceforge.net/projects/gsod-rpts/


    Trend line from the chart: y = -0.0001x + 0.001, R² = 0.0572
    This is a chart of the annual average of day-to-day surface station change in min temp:
    MnDiff (Daily Min Temp Anomaly) = Tmin(day-1) - Tmin(day-0)
    For charts with MxDiff:
    MxDiff (Daily Max Temp Anomaly) = Tmax(day-1) - Tmax(day-0)
    MnDiff is also obtainable from:
    Rising = Tmax(day-1) - Tmin(day-1)
    Falling = Tmax(day-1) - Tmin(day-0)
    Rising - Falling = MnDiff
    Deserts of the US southwest, where water vapor will have the least effect.

    There is no sign in the surface record of any loss of nightly cooling; in fact the overall trend is slightly negative. Daily peak temperatures are irrelevant to warming caused by CO2, which has to stop heat from escaping, and it doesn’t. Peak temperatures can be up for any number of reasons (land use, jet contrails, undetected solar warming), but those have nothing to do with AGW. Unless, once CO2 is removed, they decide that to collect their taxes they have to find something else humans do to tax, and/or revert society to the hippy communes of the 60s.
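The day-to-day statistic described above can be sketched in a few lines. This is my own illustration, in Python rather than the GSOD tooling linked above, with a synthetic single-station record standing in for the ~72 million NCDC readings; the sign convention and the numbers are mine, not micro's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one station's daily Tmin record (degrees C):
# a seasonal cycle plus weather noise, no long-term trend.
days = 10 * 365
tmin = 5 * np.sin(2 * np.pi * np.arange(days) / 365) + rng.normal(0, 2, days)

# MnDiff: day-to-day change in Tmin (the comment above writes the
# difference the other way round, which only flips the sign).
mndiff = np.diff(tmin)

# Annual average of the day-to-day change, one value per full year.
n_years = days // 365 - 1
annual = mndiff[: n_years * 365].reshape(n_years, 365).mean(axis=1)

# For a record with no net warming or cooling, the average day-to-day
# change sits near zero: no loss of nightly cooling shows up.
print(annual.shape, float(mndiff.mean()))
```

On a trend-free record the overall mean of MnDiff reduces to (last minus first reading) divided by the record length, which is why it hovers near zero here.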

  80. dikran

    I agree with your assessment above. Threads have finite lifetimes and unfortunately often not for the best of reasons.

  81. October is not in yet. The resistance on this thread will die a bit more when it is. Same with November. Same with December through May. ACO2 heatwave. Pretty big.

    • ACO2 heatwave.

      ENSO != CO2

      • ENSO without ACO2 would look like 1905, or less.

      • It is pretty impressive with the SSTs at all-time highs (which does include October, along with the MSU data):

        But,
        1. El Ninos reverse
        2. Every year is expected to set a record (+/-) natural
        3. Longer term trends remain less than those predicted by IPCC
        4. No trends are significant since 2001, but…
        5. The existing trends in MSU remain negative ( still pausin’ )

    • ACO2 heatwave. Pretty big.

      If I understand your point (don’t want to put words in your mouth), what we have is a heat wave caused by warm water sloshing around in the ocean; CO2 didn’t have anything to do with it. What you will see is that come next spring through fall, all of that hot water vapor getting carried over land will radiate to space as it dissipates. It will cool more at night than it warmed the prior day. That’s exactly what it did after the last 2 big El Ninos.

      • Each day the sun drills energy into the oceans. Each day, because of increasing ACO2, the energy’s exit from the oceans gets slower and slower and slower… because of the ever increasing presence of ACO2 in the atmosphere.

        So it has a great deal to do with it.

      • Each day the sun drills energy into the oceans. Each day, because of increasing ACO2, the energy’s exit from the oceans gets slower and slower and slower… because of the ever increasing presence of ACO2 in the atmosphere.
        So it has a great deal to do with it.

        Well that’s the claim, land surface stations says it is a false claim.

      • JCH says, November 12, 2015 at 12:11 pm:

        Each day the sun drills energy into the oceans. Each day, because of increasing ACO2, the energy’s exit from the oceans gets slower and slower and slower… because of the ever increasing presence of ACO2 in the atmosphere.

        Not according to CERES, I’m afraid:

        I’m sorry, JCH, but the best global data we have says the surface is cooling more and more efficiently by IR as time passes. Not less and less. Since March 2000.

    • Nothing can “reverse” unless the top of the atmosphere imbalance goes the other way. It stays, or it goes. Right now, it stays.

      • Nothing can “reverse” unless the top of the atmosphere imbalance goes the other way.

        How do you know what the net at the TOA is?

        CERES data is too short and noisy, but through March of this year, the period of record net was as close to zero as could be:

      • Nothing can “reverse” unless the top of the atmosphere imbalance goes the other way. It stays, or it goes. Right now, it stays.

        Except the measurements at TOA aren’t accurate enough to tell, so it’s adjusted based on the theory that it’s imbalanced.
        Just more made-up data.

      • Thermosteric sea level is spiking. Sea level is spiking. The temperature anomalies are large, and getting larger. You can believe in your fairytales. Fine with me.

      • The temperature anomalies are large, and getting larger. You can believe in your fairytales. Fine with me.

        I’m working with the station data directly, the anomalies are made up tripe.

      • micro

        ‘I’m working with the station data directly, the anomalies are made up tripe.”

        temperature itself is an anomaly. offset from a reference.

        you still haven’t found your error. keep looking… and remember you are the easiest person to fool.

      • temperature itself is an anomaly. offset from a reference.

        In my case I use the individual station itself, the best possible reference for each station, it is the same instrument.

        you still havent found your error. keep looking… and remember you are the easiest person to fool..

        It’s not too hard to subtract one measurement from a second measurement, two times, and then subtract the results of those two subtractions from each other, then either add them together or average them.
        The average of the temperature evolution from multiple instruments, over a complete year, is slightly negative, but within the bounds of uncertainty for the measurements.

        Let’s compare that process to just the generation of temps for areas that aren’t actually measured. And why is a nonlinear field treated as a linear field? Haven’t you watched a weather report and noticed these things called fronts?
        Maybe, just maybe, you guys should take your own advice on fooling oneself.

      • Apparently it is hard to not delete the less than symbol.

      • “and remember you are the easiest person to fool..”

        Doubtless you know all about that, Steven.

  82. goes the other way…

  83. Reblogged this on I Didn't Ask To Be a Blog and commented:
    Lets take a look at ALL the global temperature data sets.

  84. Can someone confirm if my assumptions on this graph are correct:

    Here’s the R code…

    library(ggplot2)
    setwd(dir = "W:/Simon/WoodForTress_DataSets")

    download.file(url = "http://www.woodfortrees.org/data/rss", destfile = "rss.txt")

    L <- length(grep(invert = T, pattern = "#", x = readLines(con = file("rss.txt"))))

    RSS <- read.delim(file = "rss.txt", header = F, sep = "", stringsAsFactors = F,
                      colClasses = c("numeric", "numeric"), comment.char = "#", nrows = (L - 1))

    RSS <- RSS[1:(nrow(RSS) - 5), ]
    colnames(RSS) <- c("Year", "Temp")

    plotdata <- data.frame(Year = RSS$Year, TempAnomaly = RSS$Temp,
                           lower = RSS$Temp - 0.2, upper = RSS$Temp + 0.2)
    AVG_RSS <- mean(RSS$Temp)

    ggplot(plotdata) + geom_line(aes(y = TempAnomaly, x = Year, colour = "Temp")) +
      geom_ribbon(aes(ymin = lower, ymax = upper, x = Year, fill = "nominal ±0.2C uncertainty"), alpha = 0.3) +
      scale_colour_manual("", values = "blue") +
      scale_fill_manual("", values = "grey12") +
      geom_hline(yintercept = AVG_RSS, size = 1, alpha = 0.5, color = "yellow") +
      geom_hline(yintercept = AVG_RSS - 0.2, size = 1, alpha = 0.5, color = "red") +
      geom_hline(yintercept = AVG_RSS + 0.2, size = 1, alpha = 0.5, color = "red")

  85. Let me rephrase this:
    1) Is my understanding that +/- 0.2°C is close to the error/uncertainty for this RSS-TLT dataset correct?
    2) If 1) is true or close to it, does it make sense to say what I said: no statistically significant warming since we started measuring the LT?

    Thanks!

    • Not sure what you are saying in 1). As for 2) Don’t you think it is a little awkward to write ‘or close to it [true]’ and ‘statistically significant’ together in the second sentence? The tenor and content of dikran’s comments suggest the rigor that comes part and parcel with significance testing.

      The short answer to your question is ‘No’. Some quick reasons are–first and foremost–that you want to address the trend and are looking at the mean of the entire series and even an unspecified interval for that. So you are definitely looking at the wrong thing in the wrong place. In addition the ambiguous interval itself is based on an unspecific concept ‘uncertainty’*, i.e., you give no indication of how the value 0.2 is derived.
      ———-
      *Ha! I am certainly in a minority here as most are comfortable with use of the term ‘uncertainty’. It is a matter of precise communication, a tough thing with statistics.

      BTW It is my opinion, based on years of observing others and struggling myself, that many scientists are way over-confident in their prowess. As a result a lot of them do stinky statistics**, but since their peer-reviewers are… well, peers, it is not likely picked up even in unbiased review. I think that this reflects that the formal education of most scientists and engineers addresses statistics in the style of a drive-by shooting. It is a fascinating and interesting topic, very different from popular conception, but it demands exacting attention and patience.
      ———-
      **I also stand firm on this based on personal empirical evidence–it is opinion. :O)

      Here is another BTW. You have to write up what you are trying to accomplish, and how, when doing something like this. Documentation is a big deal when you want and ask others to understand what you are doing. Relying on code is far from optimal. Why should I read your code to figure out what you have done before I even get to your question? [Rhetorical question, not at Simon personally.]

      Here that is particularly the case because you use the ggplot2 package, which is in essence a DSL (domain-specific language)*** for graphics. That is, one has to have some understanding of the ‘ggplot’ language on top of base R and base R graphics. Here is a small example of the sort of confusion that can arise for casual readers of the code: the parameter ‘alpha’ in the code is the transparency, and yet ‘alpha’ is also used for the statistical significance level.
      ———-
      ***The idea is to allow users to code the graphics in a more ‘natural’ way than is possible within the usual R (and lattice) syntax.

      Good for you, for posting the graph and the code. Just write about it more next time. I would put in some time on statistical inference and, here, testing for trends. Keep dikran’s points in mind; all the more reason to write up what one is doing. Don’t trust common perceptions and uses.

      All this is just my opinion but I hope it is some help.

      regards,
      mwg


      • You have to writeup what you are trying to accomplish and how when doing something like this. Documentation is a big deal when you want and ask other to understand what you are doing. Relying on code is far from optimal. Why should I read your code to figure out what you have done before I even get to your question [Rhetorical question, not at Simon personally.]

        IMO both a functional specification and the code should be produced out of any project. Not just for the sake of reviewers, but for your own. Often, pseudo-code is also important.

        The most important thing in this is that when you discover an error, retrofit the fix to both/all documents. Again, this is important for the sake of a good review, but also for your own sake.

        I have learned this through hard experience, when I made a quick change to the code without retrofitting to the specs, then later used the specs to drill into the code and spent many hours trying to reconstruct why they didn’t match.

        What I’ve learned is that it’s always worthwhile to dot the “i”s and cross the “t”s, keeping the levels of documentation in sync, even for a small, supposedly “one-off” project. Even if it takes a little longer at the moment, it can save you weeks a year later when you need to do something very similar.

      • AK

        No arguments here. Learned that in a regulatory environment. I’ve gone around with others on documentation and QA a number of times. Oh well, not really my problem….

        BTW I like the idea of pseudo-code. Languages and the people using them have their eccentricities.

        Functional specification is also good…mandatory in a project.

        …along with clear delineation of the problem at hand.

        Here a question is just being asked, clearly things are more informal and brief. None-the-less similar things have to be communicated. I was not very helpful. C’est la vie

        mw

      • mw, “It is my opinion based on years of observing others and struggling [that] many scientists are way over-confident in their prowess.” I’m a (long retired) economic policy adviser, I’ve worked with many modellers and directed modelling. I frequently found that I could see flaws in models which the modellers couldn’t, I have (or had) a grasp for numbers and relations and picked up what to me were glaring errors, sometimes with a group of profs who’d been working on a model for some time. And of course interpreting the model involves understanding economic relationships and impacts (as should developing the model). I recall one occasion where three of Australia’s leading macroeconomic modellers (well, two leading modellers and one favoured by the ALP) were asked to model the same thing and came up with inexplicably different results. On delving further, it turned out that the differences all lay in the widely-differing underlying assumptions. It would have been better (I wasn’t in charge for this) to have had a round table with the three modellers to establish base assumptions before they started (a bit difficult in this case as two generally refused to be in the same room as each other).

        This illustrates just why all modellers need to be fully open about all assumptions, methods and data if they are to be credible. And have humility.

      • @Faustino: This illustrates just why all modellers need to be fully open about all assumptions, methods and data if they are to be credible. And have humility.

        I should also say that in my experience–environmental fate&transport and risk modeling–there are many who do fit that bill. Indeed working with these people has been very informative and rewarding. [Just thought there is a need to keep my perspective! :O) ]

        PS I’ve never worried about geohydrologists throwing water balloons at each other — the community is or was quite genuinely collegial.

      • mwgrant | November 14, 2015 at 1:21 pm | Reply
        Not sure what you are saying in 1). As for 2) Don’t you think it is a little awkward to write ‘or close to it [true]’ and ‘statistically significant’ together in the second sentence? The tenor and content of dikran’s comments suggest the rigor that comes part and parcel with significance testing.

        How much of the uncertainty is “common mode” and how much isn’t?

        There is systematic error, which may be common/consistent across all measurements; there is random error, which isn’t correlated with anything; and there is personal error.

        Most measurements by global warmers have personal error that is greater than the other errors in the measurement. They compound this with additional errors in the analysis phase.

        Anyway, Mears’s measurements shouldn’t have a lot of personal error, so you really need to figure out how much of the uncertainty is “common mode” systematic error.

        If part of the uncertainty is an unknown positive or negative bias it is common across the measurements and means the measurements are less uncertain relative to each other.
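PA's point about common-mode error can be checked numerically. A minimal sketch (mine, in Python rather than the thread's R; the trend, bias, and noise level are invented for illustration) showing that a constant shared bias shifts the intercept of a fitted trend but leaves the slope, and hence any comparison between measurements, untouched:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 120                        # ten years of monthly values
t = np.arange(n) / 12.0        # time in years
true_trend = 0.012             # degrees C per year, illustrative only
bias = 0.5                     # unknown but constant common-mode offset
noise = rng.normal(0, 0.1, n)  # independent random error each month

measured = true_trend * t + bias + noise

# Ordinary least-squares fit; polyfit returns the slope first.
slope, intercept = np.polyfit(t, measured, 1)

# The bias is absorbed entirely by the intercept; the slope recovers
# the underlying trend to within the random-error contribution.
print(float(slope), float(intercept))
```

The same cancellation is why anomalies and trends are less uncertain than absolute temperatures: any offset shared by all readings drops out of the differences.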

      • PA,

        Note that immediately following the text you quote I wrote:

        The short answer to your question is ‘No’. Some quick reasons are–first and foremost–that you want to address the trend and are looking at the mean of the entire series and even an unspecified interval for that. So you are definitely looking at the wrong thing in the wrong place. …

        The approach—mean and interval over the mean—in the code simply does not test for a trend.

        Do not need to say anything else given the question and information at the time of my response… and I regret that I did write more. :O)

        Statement 1) is irrelevant because statement 2)’s ‘No’ answer holds regardless of the truth of 1) [Again an inappropriate test is encoded]. For me that is the end of the story.

        mwg
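To make mwg's objection concrete (a mean plus a band does not test for a trend), here is a hedged sketch of what a trend test involves: an OLS slope and its standard error. It is my illustration in Python with a synthetic series, not Simon's R code on the real rss.txt values; the slope and noise level are invented, and a serious analysis would also account for autocorrelated residuals:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly anomaly series standing in for the satellite data.
n = 180
t = np.arange(n) / 12.0
y = 0.01 * t + rng.normal(0, 0.15, n)

# Testing the TREND means asking whether the OLS slope differs from
# zero, not whether the points stay inside a band around the mean.
tbar = t.mean()
sxx = ((t - tbar) ** 2).sum()
slope = ((t - tbar) * (y - y.mean())).sum() / sxx
resid = y - (y.mean() + slope * (t - tbar))
se = np.sqrt(resid.var(ddof=2) / sxx)  # standard error of the slope

# Compare |t_stat| with ~1.96 for 5% significance; autocorrelation in
# real anomaly data shrinks the effective sample size and widens se.
t_stat = slope / se
print(float(slope), float(se), float(t_stat))
```

The key point is that the null hypothesis is about the slope, so the test statistic is built from the slope and its standard error, not from the series mean and a fixed ±0.2 C band.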

  86. Wow I was not asking that much! And I think you misunderstood 99% of what I am trying to accomplish! Probably my fault….

    Ok forget everything I wrote, I am sorry but wordpress does not allow delete!

    Let’s start with a basic question… from this paper: http://images.remss.com/papers/rsspubs/Mears_JGR_2011_MSU_AMSU_Uncertainty.pdf

    Can we infer that RSS-TLT has an error bar, or uncertainty, or a +/- value of around 0.2°C?

  87. Ok sorry about all that group… will take my learning questions elsewhere…

    Moderator… please feel free to remove all my comments. Obviously I am clueless.

  88. Pingback: 5 Facts the Left Isn't Trumpeting About Paris and Climate Change

  89. Pingback: 5 Facts the Left Isn’t Trumpeting About Paris and Climate Change | 411 Headlines

  90. Pingback: The Paris Climate Change Scam | Brookings Harbor Tea Party

  91. Pingback: Facts the Left Isn’t Trumpeting About Paris and Climate Change- | conservativepolitics1

  92. Pingback: 2015 → 2016 | Climate Etc.

  93. Pingback: 2015 → 2016 | Enjeux énergies et environnement