Has NOAA ‘busted’ the pause in global warming?

by Judith Curry

A new blockbuster paper published today by NOAA:

These results do not support the notion of a “slowdown” in the increase of global surface temperature.  

Color me ‘unconvinced.’

Possible artifacts of data biases in the recent global surface warming hiatus

Thomas R. Karl, Anthony Arguez, Boyin Huang, Jay H. Lawrimore, James R. McMahon, Matthew J. Menne, Thomas C. Peterson, Russell S. Vose, Huai-Min Zhang

Abstract: Much study has been devoted to the possible causes of an apparent decrease in the upward trend of global surface temperatures since 1998, a phenomenon that has been dubbed the global warming “hiatus.” Here we present an updated global surface temperature analysis that reveals that global trends are higher than reported by the IPCC, especially in recent decades, and that the central estimate for the rate of warming during the first 15 years of the 21st century is at least as great as the last half of the 20th century. These results do not support the notion of a “slowdown” in the increase of global surface temperature.

The paper has just been published in Science Express [link] (apparently will be open access).

Here is the main summary diagram from the paper:

[Figure: main summary diagram from Karl et al. (2015)]

JC’s initial reactions

I received this several days ago from an (international) journalist asking for comments; my quick initial reactions are provided below:

The greatest changes in the new NOAA surface temperature analysis are to the ocean temperatures since 1998.  This seems rather ironic, since this is the period with the greatest data coverage and the highest quality of measurements – and the ARGO buoys and satellites don’t show a warming trend.  Nevertheless, the NOAA team finds a substantial increase in the ocean surface temperature anomaly trend since 1998.

In my opinion, the gold standard dataset for global ocean surface temperatures is the UK dataset, HadSST3.  A review of the uncertainties is given in this paper by John Kennedy http://www.metoffice.gov.uk/hadobs/hadsst3/uncertainty.html.  Note, the UK group has dealt with the same issues raised by the NOAA team.  I personally see no reason to use the NOAA ERSST dataset; I do not see any evidence that the NOAA group has done anywhere near as careful a job as the UK group in processing the ocean temperatures.

I am also unconvinced by NOAA’s gap filling in the Arctic, and in my opinion this introduces substantial error into their analysis.  I addressed the issue of gap filling in the Arctic in this recent publication:  Curry JA, 2014:  Climate science:  Uncertain temperature trends. Nature Geoscience, 7, 83-84.

Relevant text:

Gap filling in the Arctic is complicated by the presence of land, open water and temporally varying sea ice extent, because each surface type has a distinctly different amplitude and phasing of the annual cycle of surface temperature. Notably, the surface temperature of sea ice remains flat during the sea ice melt period roughly between June and September, whereas land surface warming peaks around July 1. Hence using land temperatures to infer ocean or sea ice temperatures can incur significant biases.

With regard to uncertainty, in their ‘warmest year’ announcement last January, NOAA cited an error margin in the global average surface temperature anomaly of 0.09°C. The adjustments to the global average surface temperature anomaly are within the error margin, but the large magnitude of the adjustments further supports a larger error margin.  Yet they now cite a substantially greater trend for the period 1998-2014, one that is statistically greater than zero at the 90% confidence level.

My bottom line assessment is this.  I think that the uncertainties in global surface temperature anomalies are substantially understated.  The surface temperature datasets that I have confidence in are those from the UK group and Berkeley Earth.  This short paper in Science is not adequate to explain and explore the very large changes that have been made to the NOAA dataset.  The global surface temperature datasets are clearly a moving target.  So while I’m sure this latest analysis from NOAA will be regarded as politically useful for the Obama administration, I don’t regard it as a particularly useful contribution to our scientific understanding of what is going on.

CATO comments

Pat Michaels sent me these comments, which will be posted at the CATO web site:

IS THERE NO “HIATUS” IN GLOBAL WARMING AFTER ALL?

 Patrick J. Michaels, Richard S. Lindzen, Paul C. Knappenberger

A new paper from Thomas Karl and several co-authors[1], which removes the “hiatus” in global warming, will doubtless receive much attention in both scientific and policy circles. As with many scientific publications, Karl et al. prompts many serious scientific questions.

While this will be heralded as an important finding, the main claim[2] that it uncovers a significant recent warming trend is certainly dubious. The significance level (.10) is hardly normative and the use of it certainly will prompt many readers to question the reasoning behind the use of such a lax standard.

The treatment of the buoy sea-surface temperature (SST) data was guaranteed to put a warming trend in recent data. They were adjusted upwards 0.12°C to make them “homogeneous” with the longer-running temperature records taken from engine intake channels in marine vessels. As has been acknowledged by numerous scientists, the engine intake data are clearly contaminated by heat conduction from the structure, and they were never intended for scientific use. On the other hand, environmental monitoring is the specific purpose for the buoys. Adjusting good data upwards to match bad data seems questionable, and the fact that the buoy network becomes increasingly dense in the last two decades means that this adjustment must put a warming trend in the data.
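
The arithmetic behind this last point can be made concrete with a toy calculation (illustrative numbers only, and not the actual ERSST procedure): if buoys read 0.12°C cooler than ships and the buoy share of observations grows over time, the unadjusted blend acquires a spurious cooling trend, so adjusting the buoys upward necessarily raises the trend of the blended record relative to the raw data.

```python
import numpy as np

# Illustrative sketch, not the ERSST method: a flat "true" SST, a buoy
# network whose share of observations grows over time, and buoys reading
# 0.12 C cooler than ship engine-intake measurements.
years = np.arange(2000, 2015)
true_sst = np.zeros(years.size)                  # no real warming at all
buoy_frac = np.linspace(0.1, 0.9, years.size)    # buoy share of obs grows

# Blended mean before and after adjusting buoys up to match ships:
raw = buoy_frac * (true_sst - 0.12) + (1 - buoy_frac) * true_sst
adj = true_sst.copy()

def trend(y):
    return np.polyfit(years, y, 1)[0] * 10       # C per decade

print(f"unadjusted blend: {trend(raw):+.3f} C/decade")  # spurious cooling
print(f"adjusted blend:   {trend(adj):+.3f} C/decade")  # flat, as the truth
```

The adjustment direction is defensible either way (one could equally adjust ships down to buoys); what matters for the trend is that the correction is applied against a network whose composition changes over time.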

The extension of high-latitude Arctic land data over the Arctic Ocean is also questionable. Much of the Arctic Ocean is ice-covered even in high summer, so the surface temperature must remain near freezing. Extending land data out into the ocean will obviously induce substantially exaggerated temperatures.

Additionally, there are multiple measures of bulk lower-atmosphere temperature that are made independently of surface measurements and which indicate the existence of a “hiatus”[3]. If the Karl et al. result were in fact robust, it could only mean that the disparity between surface and mid-tropospheric temperatures is even larger than previously noted. Getting the vertical distribution of temperature wrong invalidates virtually every forecast of sensible weather made by a climate model, as much of that weather (including rainfall) is determined in large part by the vertical structure of the atmosphere.

Instead, it would seem more logical to seriously question the Karl et al. result in light of the fact that, compared to those bulk temperatures, it is an outlier, showing a recent warming trend that is not in these other global records.

[1] Karl, T. R., et al., Possible artifacts of data biases in the recent global surface warming hiatus. Scienceexpress, embargoed until 1400 EDT June 4, 2015.

[2] “It is also noteworthy that the new global trends are statistically significant and positive at the 0.10 significance level for 1998-2012…”

[3] Both the UAH and RSS satellite records are now in their 21st year without a significant trend, for example
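
For readers unfamiliar with the 0.10 threshold, here is a minimal sketch of how the significance of an OLS trend is assessed, using invented data (the numbers are not Karl et al.’s). For a 15-year series (13 degrees of freedom), a two-sided test at the 0.10 level only requires |t| to exceed about 1.77, versus about 2.16 at the conventional 0.05 level.

```python
import numpy as np

# Invented 1998-2012 series: a small imposed trend plus noise, purely to
# illustrate the mechanics of a 0.10-level significance test for a trend.
rng = np.random.default_rng(1)
years = np.arange(1998, 2013)
temps = 0.0106 * (years - 1998) + rng.normal(0.0, 0.06, years.size)

n = years.size
x = years - years.mean()
slope = (x @ temps) / (x @ x)                      # OLS trend, C per year
resid = temps - temps.mean() - slope * x
se = np.sqrt((resid @ resid) / (n - 2) / (x @ x))  # standard error of slope
t = slope / se

# Two-sided critical values of Student's t for n - 2 = 13 deg. of freedom
print(f"trend = {slope * 10:+.3f} C/decade, t = {t:.2f}")
print("significant at 0.10:", abs(t) > 1.771)
print("significant at 0.05:", abs(t) > 2.160)
```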

GWPF comments

Received via email from GWPF:

Key pitfalls of the paper:

  • The authors have produced adjustments that are at odds with all other surface temperature datasets, as well as those compiled via satellite.
  • They do not include any data from the Argo array, which is the world’s best coherent dataset on ocean temperatures.
  • Adjustments are largely to sea surface temperatures (SST) and appear to align ship measurements of SST with night marine air temperature (NMAT) estimates, which have their own data bias problems.
  • The extent of the largest SST adjustment made over the hiatus period, supposedly to reflect a continuing change in ship observations (from buckets to engine intake thermometers), is not justified by any evidence as to the magnitude of the appropriate adjustment, which appears to be far smaller.

1. They make 11 changes (not all are explained) producing the ERSSTv4 Sea Surface Temperature (SST) dataset, which includes new estimates for the different ways SSTs are measured from ships (intake or buckets). They also add 0.12°C to each buoy to bring their measurements in line with those taken from ships. These issues have been raised before by the UK Met Office when compiling their HadSST3 ocean surface temperature dataset; see ‘A review of uncertainty in in situ measurements and data sets of sea surface temperature’.

2. The greatest changes are made since 1998, which is interesting because this is when we have the highest quality of data and global coverage using several methods. Only this analysis finds any increase in global annual average surface temperature over this “hiatus” period. The authors have produced a dataset that is at odds with other surface temperature datasets, as well as those compiled via satellite.

3. The authors start their trend estimates in 1998 and 2000. This has long been considered unwise, as 1998 is a very strong El Nino year and 1999-2000 is a much cooler La Nina period. The difference between them distorts their trend estimates. For example, their 1998-2014 trend is 0.106 +/- 0.058°C per decade. Starting two years later (during La Nina-influenced years) yields a trend of 0.116 +/- 0.067°C per decade, as one would expect from starting at a lower temperature. Ignoring these caveats, the authors say their analysis produces twice as much warming for 1998-2014 as earlier estimates. Their conclusion is, ironically, based on inbuilt biases in their analysis.

Their Fig. 1 shows that, even using their updates, it is only with these inappropriate start and end points that the “hiatus” is reduced.
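
The sensitivity to start year is easy to demonstrate with synthetic data (the numbers below are stylized assumptions, not the actual record): placing an El Nino spike at 1998 and a La Nina dip in 1999-2000 on a slow background trend pulls a 1998-start trend down and pushes a 2000-start trend up, exactly the asymmetry described above.

```python
import numpy as np

# Stylized, assumed numbers only: 0.05 C/decade background warming with
# an El Nino spike in 1998 and a La Nina dip in 1999-2000.
years = np.arange(1996, 2015)
anom = 0.005 * (years - 1996)                             # background trend
anom = anom + 0.20 * (years == 1998)                      # strong El Nino year
anom = anom - 0.10 * ((years >= 1999) & (years <= 2000))  # La Nina dip

def trend_from(start):
    m = years >= start
    return np.polyfit(years[m], anom[m], 1)[0] * 10       # C per decade

print(f"1998 start: {trend_from(1998):+.3f} C/decade")  # pulled down by El Nino
print(f"2000 start: {trend_from(2000):+.3f} C/decade")  # boosted by cool start
```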

4. Even with the 11 changes to their SST database and the problem of start and end dates, the authors admit that their results are statistically significant only at the 0.10 level, and in some cases not even that.

“I believe their estimates of the error in their decadal trend figures are far too small. They quote the error in a 15-year period to a precision of one thousandth of a degree C. In their report the authors admit that their error analysis is not definitive and that looking at them another way invalidates their trend conclusions,” said Dr David Whitehouse, science editor of the GWPF.

5. Note that trends that include 2014 and 2015 must be treated with caution due to a recently persistent very warm feature in the NE Pacific that is affecting global SST estimates.

6. In addition, they do not include any data from the Argo array that is our best coherent data set on ocean temperatures. The authors state this is because Argo temperature data is not surface data. However, ship-derived temperatures can be from as much as 15 m below the surface. The Argo array samples 5 m below the top of the ocean. From 2004 to 2013 it shows considerable variation and little trend. The non-ARGO data aptly demonstrates the problem of starting trend analysis in 1998 or 2000.

Source: ‘Unabated planetary warming and its ocean structure since 2006’ Nature Climate Change, 2 February 2015. Black line: 5 m optimally interpolated (OI) ARGO; red lines: NOAA OI SST v2

7. Their conclusions are also at odds with satellite data, which show no trend over the past 16 years or so.

Source: http://nsstc.uah.edu/climate/index.html and http://www.remss.com/research/climate

8. Extending a change in ship observations (from buckets to engine intake thermometers) to the present time had the largest impact on the SST adjustments over the hiatus period, per Karl et al 2015:

“Second, there was a large change in ship observations (i.e., from buckets to engine intake thermometers) that peaked immediately prior to World War II. The previous version of ERSST assumed that no ship corrections were necessary after this time, but recently improved metadata (18) reveal that some ships continued to take bucket observations even up to the present day. Therefore, one of the improvements to ERSST version 4 is extending the ship-bias correction to the present, based on information derived from comparisons with night marine air temperatures. Of the 11 improvements in ERSST version 4 (13), the continuation of the ship correction had the largest impact on trends for the 2000-2014 time period, accounting for 0.030°C of the 0.064°C trend difference with version 3b.”

Ref (18) is a 2011 paper by Kennedy et al. It states (paragraph 3.1) “Dating the switchover from uninsulated canvas buckets to insulated rubber buckets is problematic as it is not clear how quickly the practice of using insulated buckets was adopted. … Based on the literature reviewed here, the start of the general transition is likely to have occurred between 1954 and 1957 and the end between 1970 and 1980.”

A 2010 review article “Effects of instrumentation changes on SST measured in situ” by Kent, Kennedy, Berry and Smith states that “Models of corrections for wooden and uninsulated canvas buckets show the adjustments to be five to six times greater for the canvas buckets.”

So post-1980 adjustments to bucket measurements should be very small (under 0.1°C). Moreover, by 2000 ship measurements were a minority of total measurements, and all types of bucket were a small proportion of ship measurements (see figs. 2 and 3 of Kent et al. 2010). These facts imply that post-2000 adjustments warranted by the use in some ships of bucket measurements should be negligible.
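
The back-of-envelope arithmetic behind “negligible” can be made explicit. The shares and the bias magnitude below are assumed, illustrative values chosen only to show how the factors multiply, not figures taken from Kent et al.:

```python
# Illustrative back-of-envelope only; shares and bias are assumed values.
ship_share   = 0.20   # assumed fraction of all post-2000 SST obs from ships
bucket_share = 0.10   # assumed fraction of ship obs still using buckets
bucket_bias  = 0.30   # assumed canvas-bucket cold bias, in C

# Effect on the globally blended mean of correcting only the bucket subset:
blend_effect = ship_share * bucket_share * bucket_bias
print(f"implied adjustment to the blend: {blend_effect:.3f} C")  # 0.006 C
```

Even generous choices for these factors leave the product a few hundredths of a degree at most, which is the point being made about the size of any warranted post-2000 correction.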

“The justification given for the change that had the largest impact on trends for the 2000-2014 time period – continuing to adjust ship SST measurements by reference to night marine air temperature (NMAT) data, ‘which have their own particular pervasive systematic errors’ (Kennedy 2014) – i.e. that some ships still continue to take bucket observations, appears to support only a very small adjustment,” said Nic Lewis, an independent climate scientist.

In summary
This is a highly speculative and slight paper that produces a statistically marginal result by cherry-picking time intervals, resulting in a global temperature graph that is at odds with those produced by the UK Met Office and NASA.

Caution and suitable caveats should be used in using this paper as evidence that the global annual average surface temperature “hiatus” of the past 18 years has been explained.

JC summary

I am posting this at 2 p.m. ET (when the press embargo is released).  I will provide all the relevant links as soon as available, and will update with links to the forthcoming articles from the MSM; I am particularly interested in what ‘consensus’ supporting scientists have to say about this.

UPDATES:

Post at WUWT by Anthony Watts and Bob Tisdale [link]

From Carbon Brief:

Prof Sir Brian Hoskins, chair of the Grantham Institute at Imperial College London, says:

“[I]t had been thought that the reduction in surface warming must be due to natural variation in the heat exchanged between the atmosphere and ocean. Now it appears that any such exchange of heat between the atmosphere and ocean has not been large enough to obscure the global warming trend.”

But we shouldn’t dismiss the presence of a “slowdown” in surface warming just yet, warns Osborn. He says:

“There are other datasets that still support a slowdown over some recent period of time, and there are intriguing geographical patterns such as cooling in large parts of the Pacific Ocean that were used to support explanations for the warming slowdown.”

Neither should scientists stop seeking to understand the role of natural decadal variability in influencing short-term trends in climate, Osborn says. Dr Peter Stott, head of climate monitoring and attribution at the Met Office Hadley Centre, echoes this point, saying:

“[N]atural variability in the climate system or other external factors have still had an influence and it’s important we continue research to fully understand all the processes at work.”

On the whole, scientists seem to welcome the new study in terms of its contribution to fine-tuning the global surface temperature record. But the so-called “hiatus” – its causes, consequences and even its very existence – is a multi-faceted topic. Forster predicts:

“I still don’t think this study will be the last word on this complex subject.”

Post from the International Surface Temperature Initiative [link]

Doug MacNeal’s take [link].

Seth Borenstein from the AP:

Several outside scientists contacted by The Associated Press said the new and previous adjustments are sound. Kevin Trenberth, climate analysis chief at the National Center for Atmospheric Research, said the new work was “good and careful analysis” but only confirms what most scientists already knew, that there was no such hiatus.

“NOAA is confirming what we have been saying for some time that the ‘hiatus’ in global warming is spurious,” Berkeley team chief and physicist Richard Muller said in an email. Muller said global warming continues but in “many fits and spurts.”

John Christy of the University of Alabama Huntsville, one of the minority of scientists who dispute the magnitude of global warming, said the Karl paper “doesn’t make sense” because satellite data show little recent warming. “You must conclude the data were adjusted to get this result” of no warming pause, Christy wrote in an email. “Were the adjustments proper? I don’t know at this point.”

From Mashable:

Scientists who have investigated the warming hiatus or are otherwise involved in assessing climate change on various timescales told Mashable that the study’s key shortcoming is that it does what mainstream climate scientists have long criticized the climate contrarians, often now referred to as “climate denialists,” of doing: cherry-picking start and end dates to arrive at a particular conclusion.

Gerald Meehl, a climate researcher at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, told Mashable in an email that while he finds the new study laudable for improving temperature measurements, there are flaws in how the researchers interpreted the data. For example, Meehl says there is still a lower warming trend from 1998 to 2012 compared to the previous base period of 1950 to 1999, “… Thus there is still a hiatus defined in that way.”

Meehl says that adding two years to the time period by including 2013 and then 2014, which was a record warm year, makes the warming trend appear to be 38% larger than previous studies that did not include those two years.

“My conclusion is that even with the new data adjustments, there still was a nominal hiatus period that lasted until 2013 with a lower rate of global warming than the warming rate of the last 50 years of the 20th century,” Meehl says, “and a factor of two slower warming than the previous 20 years from the 1970s to 1990s.”

Lisa Goddard, director of the International Research Institute for Climate and Society (IRI) at Columbia University, told Mashable that the study does not support the conclusion that global warming didn’t slow down for a relatively short time period.

“It is clear that Karl et al. have put a lot of careful work into updating these global products,” Goddard said in an email. “However, they go too far when they conclude that there was no decadal-scale slowdown in the rate of warming globally. This argument seems to rely on choosing the right period — such as including the recent record breaking 2014.”

Another senior climate researcher, Kevin Trenberth of NCAR, says the hiatus depends on your definition of the term. To him, global warming never stopped, as climate skeptics argue, because most of the extra heat from manmade greenhouse gases such as carbon dioxide was redirected deep into the oceans during the 1998 to 2012 period. However, surface temperatures did warm more slowly during this time.

“I think the article does emphasize that the kind of variation is now much more within the realm of expectations from natural variability, but it is a bit misleading in trying to say there is no hiatus,” he said in an email conversation.

Michael Mann, the director of the Earth System Science Center at Penn State University, says the study helps drive home the point that “global warming continues unabated as we continue to burn fossil fuels and warm the planet.”

The new study reveals yet again that surface temperature data has many flaws, says Peter Thorne, a climate researcher at Maynooth University in Ireland. In an interview, Thorne said critics of climate science are incorrect in charging that global warming is an artifact of urban heat islands and other influences on thermometers, but at the same time, our approach to taking the Earth’s temperature needs to be rethought.

Thorne says more investments should go toward establishing redundant, carefully calibrated temperature observing networks where data is currently sparse, such as the Arctic, much of Africa and especially the oceans.

“The uncertainty in the marine records is far greater than in the land records,” he said.

“If we put enough good quality, traceable, redundant observations around the globe we can make sense of all the other observations that aren’t so good,” he said. “There is no need to bequeath onto researchers in 50 years time a similar mess.”

912 responses to “Has NOAA ‘busted’ the pause in global warming?”

  1. I don’t see anything here about the clear conflict of interest, with NOAA being directly administered by a government that so clearly wants this result.

    • Well yes this thought occurred to me also. It would have been more convincing if they first provided a detailed analysis of what they did to the ocean data and compared it with the HADSST data sets – in a journal that allowed a lengthy contribution. Instead, they went for a politically desirable headline first.

      • If you took the politics out of global warming ‘science’, there would be little left to merit concern.

      • David Springer

        The worst thing about this paper is the data it uses. Or rather the data it doesn’t use.

        How about if we use ARGO, actually designed among other things to measure global SST with the needed precision, for the past 8 years and see what it says the global trend is. All other instrument types should be adjusted to match ARGO because it’s the gold standard by far.

        Unabated planetary warming and its ocean structure since 2006 ~Nature Climate Change 5, 240–245 (2015)

        My emphasis. This is important. SST trend from 2006 is 0.05C/decade from the only ocean temperature sensors, 4,000 ARGO diving buoys, with the range and precision for the job.

        Figure 1: Globally averaged SST anomaly.

        5-m Argo OI temperature (black), NOAA OI SST v2 (ref. 27) masked to the same area as the Argo OI (solid red), NOAA OI SST v2 without the Argo mask (dashed red). All figures are 12-month running means unless otherwise noted.…

        Increasing heat content of the global ocean dominates the energy imbalance in the climate system. Here we show that ocean heat gain over the 0–2,000 m layer continued at a rate of 0.4–0.6 W m−2 during 2006–2013. The depth dependence and spatial structure of temperature changes are described on the basis of the Argo Program’s accurate and spatially homogeneous data set, through comparison of three Argo-only analyses. Heat gain was divided equally between upper ocean, 0–500 m and 500–2,000 m components. Surface temperature and upper 100 m heat content tracked interannual El Niño/Southern Oscillation fluctuations, but were offset by opposing variability from 100–500 m. The net 0–500 m global average temperature warmed by 0.005 °C yr−1. Between 500 and 2,000 m steadier warming averaged 0.002 °C yr−1 with a broad intermediate-depth maximum between 700 and 1,400 m. Most of the heat gain (67 to 98%) occurred in the Southern Hemisphere extratropical ocean. Although this hemispheric asymmetry is consistent with inhomogeneity of radiative forcing and the greater area of the Southern Hemisphere ocean, ocean dynamics also influence regional patterns of heat gain.

      • Prof Curry,

        I agree. Bundling all these adjustments into one package that produces a politically useful conclusion seems inherently problematic. It’s a tone-deaf action guaranteed to arouse concern about integrity of the process.

        Doesn’t this just shift the focus of the debate from the surface temperature trend to the model – temperature and surface – lower atmosphere divergences? Small adjustments to the surface temperature trend seems far less important than these growing divergences — which fundamentally challenge the climate science paradigm.

      • Agreed. This new analysis changes the model-obs comparison story slightly; it reduces somewhat the discrepancy in the hiatus period, but increases the discrepancy for the earlier period.

      • Question: which is a more serious challenge to the climate science paradigm: the increase in the model – temperature divergence, or increase in the surface – lower atmosphere temperature divergence? Or both, or neither?

        If I understand this correctly, the former suggests that the GCMs’ projections are not reliable (although this might result from small problems in the GCMs). The latter suggests that the models poorly represent one or more fundamental climate processes.

        From the bleachers, it seems that neither topic gets much attention in the literature. That is, however, commonplace with paradigm challenges. They’re not considered key questions by the consensus until they produce a compelling alternative paradigm.

      • There is a detailed analysis of their SST analysis with comparisons to HadSST and COBE data sets.

        Boyin Huang, Viva F. Banzon, Eric Freeman, Jay Lawrimore, Wei Liu, Thomas C. Peterson, Thomas M. Smith, Peter W. Thorne, Scott D. Woodruff, and Huai-Min Zhang, 2015: Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4). Part I: Upgrades and Intercomparisons. J. Climate, 28, 911–930.

        Wei Liu, Boyin Huang, Peter W. Thorne, Viva F. Banzon, Huai-Min Zhang, Eric Freeman, Jay Lawrimore, Thomas C. Peterson, Thomas M. Smith, and Scott D. Woodruff, 2015: Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4): Part II. Parametric and Structural Uncertainty Estimations. J. Climate, 28, 931–951.

        http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-14-00006.1
        http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-14-00007.1

        Paywalled for me, but you can find pdfs of both by googling.

      • curryja says: “Well yes this thought occurred to me also. It would have been more convincing if they first provided a detailed analysis of what they did to the ocean data and compared it with the HADSST data sets…”

        NOAA published those papers back in November. See:

        Huang et al. (2014) Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4), Part I. Upgrades and Intercomparisons, and
        Liu et al. (2014) Extended Reconstructed Sea Surface Temperature version 4 (ERSST.v4): Part II. Parametric and Structural Uncertainty Estimations.

        I discussed the ERSST.v4 data in a number of posts last year, including Has NOAA Once Again Tried to Adjust Data to Match Climate Models?

      • jhprince2014

        It’s remarkable you made the distinction of NOAA running to the media and politics before a professional peer-review….

      • “While this will be heralded as an important finding, the main claim[2] that it uncovers a significant recent warming trend is certainly dubious. The significance level (.10) is hardly normative and the use of it certainly will prompt many readers to question the reasoning behind the use of such a lax standard.”

        It’s clear they began with the question: “What can we do to make it look like there’s no pause?” A terrible way to approach a science problem, obviously, but we have to remember it’s not really about the science.

        (aka pokwrguy)

        All this buckets vs engine-room intake stuff played out in the 1970s. I don’t know how the heck they manage to evoke it to smother the 1998 El Nino. This is such a blatant frig that it’s laughable.

        The irony is that it was just that steep warming, at the end of the last century, that got all the alarm-bells ringing. It was trying to reproduce that up-tick that caused the climate models to get it all wrong and run hot ever since.

        Now if they want to run with 0.1 deg C/decade since 1950 that is steady right up to the present, then they need to explain why the model mean is about 0.28 deg C/decade.

      • Pay attention to the marble..

        Obama’s ‘climate’ legacy is being buried in the secret trade deal. He’s got to get enough political propaganda out there to push the trade deal over the Senate ratification line. (Republicans generally favor free trade treaties – what better place to bury the misguided climate policy stench.)

    • Don Monfort

      More government-academic conflict of interest and blatant lying about it by paragons of the ivory tower intellectually and morally superior climate crusaders:

      http://www.breitbart.com/big-government/2015/06/04/harvard-syracuse-researchers-caught-lying-to-boost-obama-climate-rules/

    • So NOAA should never publish any scientific work unless it runs counter to the current administration’s political position? That, of course, being the only way they could prove they are not unduly influenced politically, if your comment is taken to its logical conclusion.

      Just how in your mind does a researcher prove that conflict of interest has not influenced them?

      I submit that you should leave this question alone and let the science stand or fall on its merit – something that ultimately has a little more substance to it than the vaporous considerations of conflict of interest.

      • Never is a tall order. Maybe just once would help.

        If just ONE TIME they did not adjust the data so it gets closer to what the models say it “should” be.

        If as much effort was put into CORRECTING the models instead of ‘correcting’ the data, they may get somewhere.

      • Don Monfort

        So we should not indulge in vaporous considerations of conflict of interest in drug research. The science will stand or fall on its merit.

      • No, they should always make a clear statement of potential conflict of interest by stating very clearly it is the policy of the administration we work under that climate change exists and is real and it is our job to prove it and if we don’t, we will likely be fired or have our money cut off. That way people would know just how seriously to take them. Imagine the screaming if someone with funding from say The Evil Koch brothers had published this article? The attitude that government scientists are off limits for conflict of interest is silly.

      • In any activity, “Follow the Money” is always good advice – including for a politicized government agency such as NOAA.

      • Everything that comes out of the mouths of consensus climate scientists can be summed up in two sentences:
        1. This proves that CAGW is happening, we need to limit fossil fuel usage now!
        2. Although we know it is happening, we still have much data to torture.., err I mean analyze, so we need more grant money!

      • Nice try at diverting attention.

        Not that it worked.

    • Frankly, I do NOT trust NOAA to give us the truth because as a government agency, they are now forced to advance government policy rather than be an honest broker. Bear in mind, the shrillness of climate alarmists is increasing as we approach a Paris climate treaty meeting and they realize it is likely to be another failure.

      I find it peculiar that NOAA also wants to take on the IPCC which is what climate skeptics do.

      Somehow, this doesn’t pass the smell test, but I could be wrong.

      George Devries Klein, PhD, PG, FGSA

      • Let me add one thing. During 1993-1996, I was New Jersey State Sea Grant Director. NOAA’s Marine Fisheries Service (NMFS) was housed in the building next door. During their off-hours, NMFS scientists were involved with the activist movement promoting certain pro-environmental issues. When speaking publicly, they identified themselves as NMFS scientists (giving the impression NMFS supported that particular environmental agenda item), which at the time was contrary to policy that allowed government employees to speak as citizens but not as representatives of their employer.

        George Devries Klein, PhD, PG, FGSA

      • george, not NOAA wtf NOAA? Mosher says their pooh don’t stick.

  2. daveandrews723

    This NOAA study is just the latest example of them trying to force data to fit their pre-conceived notion that CO2 is jacking up the earth’s temperatures at record rates.
    If it weren’t so pathetic, and dangerous, it would be laughable.
    It is time for the mainstream media to start doing its job and to challenge these warmists, who are too stubborn and too proud to admit that the hypothesis and models upon which all their beliefs are based are bogus.
    This is a very dark period for science.

    • Curious George

      Ah, adjustments. How I love adjustments! The 2014-2015 winter is record-cold one day, and the next day the science of adjustments finds that old temperature records were imprecise, and after an adjustment the last winter is no longer record cold.

      To paraphrase Lenin(?), he who measures temperatures does not create a true permanent record. He who adjusts measurements does.

  3. The GWPF comments contain a couple of misunderstandings. ARGO floats are used for subsurface temperature and salinity measurements, not surface. There is another set of buoys called “drifters” that are used for sea surface temperature measurements. The other misunderstanding appears to relate to the ship-bias corrections. The adjustments during the hiatus should represent the change from ship inlets to buoys (drifters not ARGO), not the change from buckets to ship inlets.

    Cheers.

    • Good point. But aren’t engine water intakes at least a couple of meters below the surface as well, to allow for wave action? And surely those readings are contaminated on the high side by heat conducted from the engine.

      • rogerknights

        According to a couple of WUWT comments on this matter, the thermometers are located at the front of the intakes, so engine heat isn’t a factor.

      • If so, then why has it been fretted about through decades of ‘adjustments’?

      • A ship usually has two sea chests, as the intakes are called, one on the vessel side to be used in shallow waters and one on the bottom, or well below the lowest load line (Plimsoll mark). The reason is to avoid ingestion of sediments when in shallow waters. It is not uncommon for the crew to forget to switch from one to the other. In open water this means sucking air into the system, and thus temperature increases.
        The temperature measurement is normally made in a position well away from engine heat, as the temperature is used for engine control purposes. Although this requires accurate measurements, it does not mean “accurate” as in tenths of degrees. And ship instrumentation is wanting in the area of calibration, to put it mildly. I have been a classification society surveyor, specialising in machinery. The idea that automated temperature readings of inlet temps are anywhere near as accurate as required for the purpose of climatology is laughable.

      • Anders Valland:
        Thank you for the excellent description of intake temp measurements.

      • Steven Mosher

        The position of the intake varies with the ship types.

        it’s not a simple problem

        http://icoads.noaa.gov/advances/kent.pdf

        http://www.ocean-sci.net/9/683/2013/os-9-683-2013.pdf

        “While buckets generally sample the upper few tens of
        centimetres (note that Parker (1993) describes two weighted
        buckets designed to sample at 1–2 m depth), depths sampled
        by intakes can be highly variable. Engine intake inlets are
        usually close to keel depth to ensure submergence under all
        sea conditions. Actual sampling depth for intakes on container
        ships and bulk carriers can vary by several metres, depending
        on shipload (Beggs et al., 2012). Large ships can
        have dual seawater intakes, one close to keel depth and another
        a few metres higher (Ecology and Environment, 2007).
        The deep intake is used at sea and the upper when in shallow
        coastal waters or canals. Intake depths reported in the early
        literature are presented in Table 1. Brooks (1926) reports an
        intake depth of ∼ 7 m on a Canadian Pacific steamship in
        the 1920s. James and Shank (1964) estimate intake depths of
        ∼ 3–10 m for various US merchant, Navy and Coast Guard
        observing ships reporting in 1962 and 1963. They defined
        relations between intake depth and full-load draft for different
        hull types and categorised observing ships by hull
        type to estimate their intake depth. More contemporary intake
        depths averaged by type of VOS ship reporting this between
        1995 and 2004 are presented in Table 5 of Kent et
        al. (2007). Container ships and tankers were found to have
        intakes at ∼ 7–9 m depth while intakes on bulk and livestock
        carriers were found to often exceed 10 m. Kent and Taylor
        (2006) report that the average intake depth for VOS reporting
        this in 1997 was 8.4 ± 4.1 m, with the deepest inlet being at
        26 m.”

      • Steven Mosher, variation in depth is the issue with intakes, not the asinine “engine room temperature” crapola. The subsurface versus supra-surface temperature contrast is always going to be an issue, which is why “Global Mean Surface Temperature” is a less than desirable metric. Interpolating a less than desirable metric in a way that further conflates subsurface and supra-surface temperatures is not very smart thermodynamics.

        You should try to understand more of what Dr. Curry dishes out.

      • Don Monfort

        “The position of the intake varies with the ship types.
        its not a simple problem”

        Doesn’t bother the NOAA Kidz. They can use the ships that pass in the night to adjust the buoys. Yeah, that’s a plan.

      • Steven Mosher

        captain.

        Nice change of subjects.

        personally I would do a global SAT&MAT product

        As for SST?

        Tons of ways to look at the data. No such thing as a best metric.

        You guys are way too certain about what is important.

        plus most of you have never once looked at ICOADS. maybe you read about it in the newspaper.

      • Steven Mosher

        “Doesn’t bother the NOAA Kidz. They can use the ships that pass in the night to adjust the buoys. Yeah, that’s a plan.”

        Whether you adjust the ships or the buoys doesn’t matter.

        You have collocated measurement systems. You do the best you can
        and you report the results.

        That’s the day job. We are not allowed to throw our hands up and run from uncertainty. Cause, like, there is life beyond the armchair.

      • Don Monfort

        This is Mosher’s contribution to the discussion of his friend Ross’s Karl paper analysis on WUWT:

        ” Steven Mosher
        June 4, 2015 at 5:31 pm

        Ross

        “So what changed in the SST records? Bear in mind that there are very few records of air temperatures over the oceans, especially prior to 1950. So to get long term climate estimates, scientists use SST (i.e. water temperature) data, which have been collected since the 1800s by ships.”

        WRONG

        Check the source material ICOADS

        http://icoads.noaa.gov/advances/parker3.pdf

        There are 8 million records for air temps prior to 1900.”

        Very profound, Steven. Somebody responded:

        “dmh
        June 4, 2015 at 10:37 pm

        There are 8 million records for air temps prior to 1900.

        From the slides you linked to, emphasis mine:

        Undigitized marine observations for 1851-1900: UK national archives

        Given that they are not digitized, their existence is moot. They are of no value until they are, no matter their number.”

        Steven never replied.

      • Don Monfort

        “Whether you adjust the ships or the buoys doesn’t matter.”

        So you are going with that.

      • Steven Mosher, “You guys are way too certain about what is important.”

        Thermodynamics kinda forces you to use reliable references. That is where the certainty comes from. Since “surface” air temperature ranges from over 50 C to less than -70 C, its average doesn’t meet the zeroth law sniff test. Average “surface” ocean temperature ranges from -2 C to +34 C, making it a better thermodynamic reference; not perfect, but better. Mixing the two makes zero sense if you are into thermo.

        As for ICOADS I have looked at a lot of the data and you still have the zeroth law issues. Dr. Curry’s specialty is atmospheric thermodynamics not data manipulation.

      • Steven Mosher

        Don.

        Go get ICOADS.

        The typical record for a ship includes SST and MAT all the way back to the beginning of the record.

        Ross is wrong.

        or do some reading

        http://www.metoffice.gov.uk/hadobs/hadisst/HadISST_paper.pdf

        https://www.ncdc.noaa.gov/bams-state-of-the-climate/2009-time-series/mat

      • Don Monfort

        Well, let’s just stipulate that your friend Ross was WRONG in the statement of his that you quoted. Is he WRONG about everything in his analysis of Karl?

      • David Springer

        Mosher, lacking any experience with actual temperature sensing systems, ignores what Valland who has such experience has to say about engine intake temperature sensors.

        1) resolution of the gauges isn’t in tenths of degrees
        2) calibration of the gauges is lax
        3) operators forgetting to switch between shallow and deep intakes

        Never getting your hands dirty, Steverino, is a problem you can’t fix by increased bloviation.

  4. Below are the 12 year running linear regressions of temperature trend in the mean of anomalies of GISSTEMP, NCDC, and HadCRUT4.

    Appears that 2001 through 2013 and 2002 through 2014 have negative trends.
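
    A minimal sketch of the running-trend calculation described above, using a synthetic anomaly series rather than the actual GISTEMP/NCDC/HadCRUT4 data (the function name, window handling, and toy numbers are illustrative assumptions only):

```python
import numpy as np

def running_trends(years, anomalies, window=12):
    """OLS slope over each sliding `window`-year span, reported in degC/decade."""
    trends = []
    for start in range(len(years) - window + 1):
        x = years[start:start + window]
        y = anomalies[start:start + window]
        slope = np.polyfit(x, y, 1)[0]            # degC per year
        trends.append((x[0], x[-1], slope * 10))  # convert to per-decade
    return trends

# Toy anomaly series: flat through 2000, then a slight decline,
# to mimic the flat/negative trends discussed above.
years = np.arange(1990, 2015)
anoms = np.where(years <= 2000, 0.4, 0.4 - 0.005 * (years - 2000))
for first, last, trend in running_trends(years, anoms):
    print(f"{first}-{last}: {trend:+.3f} degC/decade")
```

    With real data one would, of course, substitute the published annual anomalies for the toy series; the sign of the most recent windows is what the comment above is pointing at.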

    • Speaking of adjustments, perhaps someone can fill in any missing pieces in the reasoning I’ve heard described before for the following. Apparently the halt in warming that occurred between the 1950s and the 1970s was caused by sulphate aerosol pollution that initiated an albedo effect which has since mostly dissipated. Had there never been this pollution source, wouldn’t the hiatus be even more pronounced than it is today? (ignoring the new NOAA data in this consideration)

  5. Reblogged this on JunkScience.com and commented:
    A new way to curve fit, make the data fit the model? I’ve maintained data required by various environmental permits and regulations for years. If I adjusted those data like these folks seem to do, I’d be in jail.

  6. Don Monfort

    Ah, the latest attempt at killing the pause that is killing the cause. Will this improve the dim prospects for meaningful mitigation from the Paree junket? Let’s ask jimmy dee for his regurgitation of the delusional huffpo perspective.

    • Don Monfort

      In retrospect, it always gets warmer. Why didn’t they tell us this a few years ago?

  7. I will always be skeptical of adjustments people say are necessary when they only discover the necessity of those adjustments after they get results they dislike. Especially since the issue of bucket corrections has been discussed for at least 10 years now. If these particular adjustments for them are needed, why did it take them so long to figure it out?

    • Brandon,

      Great point! If successful predictions are the gold standard of science theory, post-hoc adjustments that support theory are the brass standard. While they play a useful role, they’re unlikely to change the balance of opinion among laypeople concerned about the public policy implications of these trends.

      • I predict there will continue to be more “post-hoc adjustments”. We’ll see how successful that prediction is.

    • Post hoc. A good point, Brandon.

    • Well…

      Climate science has an integrity problem, it actually has a number of problems.

      1. People who are open advocates for a view on the “real look” of the data are allowed to work in the field. They should be fired. Science studies funded by the government should be as free from bias and objective as possible.

      2. One team (the team in charge of the climate centers) is allowed to move the goalposts and change the heights of the goalposts at will. They should be fired if this continues. I am not interested in the rationalizations for why the goalposts should be moved 20 yards further away when the other team is kicking.

      3. The earth is a heat engine with two sources, one sink, and a working fluid. Climateers measure one sink temperature, the temperature of the working fluid above the other sink, and guess at temperatures in the polar regions to produce a synthetic index. The temperature of the working fluid above a sink will not be an accurate reflection of sink temperature. I’m not sure the synthetic index has a real meaning. There are all kinds of adjustments that will make it do something different that can be justified for {insert rationalization here}.

      We have created an artificial index for guiding public policy. There should be a fixed set of rules for creation of the synthetic index. The index should not be a moving target. People who believe the index needs constant adjustment should be encouraged to find employment in another field.


  8. Well, CO2 would seem likely to have an effect on radiative balance.

    But from the get-go with GCMs there has been a kind of group denial that somehow we could ignore the non-linearities in the climate system and still make predictions. The recent more-than-decade-long cooling trends, and the fact that the HotSpot has not occurred, are reminders that the dynamics of the climate system have internal components of change on decadal and centennial scales.

    I trotted out this quote, but it bears repeating:

    “Thus, the whole climate system must be regarded as continuously evolving with parts of the system leading and others lagging in time. The highly nonlinear interactions between the subsystems tend to occur on many time and space scales. Therefore, the subsystems of the climate system are not always in equilibrium with each other, and not even in internal equilibrium.”

    ‘The Physics of Climate’ ( Peixoto and Oort )

  9. Pingback: Has NOAA ‘busted’ the pause in global warming? | Enjeux énergies et environnement

    Whilst I have the greatest regard for John Kennedy of the Hadley Centre, we simply must not continue with the notion that we have any sort of handle on global SSTs back to 1850. We have some degree of accuracy from the 1970’s, and no doubt very well travelled sea routes could lay claim to having some merit in their readings back another few decades, but global is not possible. (sorry John-I know you will pop up here)

    As for the first article, is not Thomas C. Peterson -one of the co authors, also a co author, with William M. Connolley of the paper purporting to show there was no concern over global cooling in the 1960’s and the first part of the 1970’s?

    tonyb

    • climatereason,

      “purporting to show there was no concern over global cooling in the 1960’s and the first part of the 1970’s?”

      That’s an exaggeration of what the paper said, which was “An enduring popular myth suggests that in the 1970s the climate science community was predicting “global cooling” and an “imminent” ice age…”
      http://journals.ametsoc.org/doi/abs/10.1175/2008BAMS2370.1

      The record clearly shows that analysis to be correct. Your statement is, however, also correct. There was concern about the possibility of future cooling — although this was not a “consensus” forecast.

      For example, the 1970s cooling played a large role in the creation of NOAA’s climate analysis center. See links to this and other information about the 1970s global cooling scare here:
      http://fabiusmaximus.com/2009/12/07/cooling-4/

      • EOTFMW

        I have had this discussion with William amongst others, which is why I chose my words carefully. The global cooling scare had its genesis in the 1960’s when a number of prominent commentators, amongst them Budyko and Lamb, expressed concerns. This was taken up very widely in the scientific literature of the day.

        By the time the early 70’s came round, Lamb and others had recognised their concerns were unfounded and, like good scientists, changed their minds after examining the available evidence.

        The paper I cited therefore misses the point, as looking at 1970s papers is a largely fruitless exercise; by then the scare was waning. In those pre-internet days it took some time for it to be recognised that the situation had changed.

        tonyb

      • Tony,

        Thanks for the additional history. I didn’t know this went back to the 1960s.

        The famous letter to President Nixon about cooling described in NOAA’s official history was December 1972. The reduction of aerosols following the Clean Air Act of 1963 and its ever-tightening amendments had their effect, so that a 1974 textbook I cited discussed cooling as a possibility, but with warming more likely. And the rest is history…

      • “An enduring popular myth suggests that in the 1970s the climate science community was predicting “global cooling” and an “imminent” ice age…”

        This is a classic straw man which does not address the actual timeline, but moves forward a decade, after the debate had flip-flopped. The following quotes provide a few time stamps to illustrate the actual timing. So yes, in the 1970s the climatologists, press and politicians were moving on to warming… after they had spent the 1960s warning about cooling:

        After a week of discussions on the causes of climate change, an assembly of
        specialists from several continents seems to have reached unanimous agreement on only one point: it is getting colder.
        -New York Times – January 30, 1961

        Like an outrigger canoe riding before a huge comber, the earth with its inhabitants is caught on the downslope of an immense climatic wave that is plunging us toward another Ice Age.
        -Los Angeles Times December 23, 1962

        The battle to feed humanity is over. In the 1970s, the world will undergo famines. Hundreds of millions of people are going to starve to death in spite of any crash programs embarked upon now.
        – Paul Ehrlich – The Population Bomb (1968)

        It is now pretty clearly agreed that the CO2 content [in the atmosphere] will rise 25% by 2000. This could increase the average temperature near the earth’s surface by 7 degrees Fahrenheit. This in turn could raise the level of the sea by 10 feet. Goodbye New York. Goodbye Washington, for that matter.
        -Presidential adviser Daniel Moynihan, 1969 (later Sen. [D] from New York 1976-2000)

      • sci guy,

        One thing we should have learned from the climate wars: journalists are not reliable guides to the state of a science. Their job is to sell stories, and science is among the most difficult to accurately dress up as lurid headlines.

        On the other hand, we have reliable guides in the science literature. It’s just much more difficult to assess.

      • EOTFMWS wrote: “journalists are not reliable guides to the state of a science. Their job is to sell stories”

        Most journalists act as intentional activists first and profit centers only secondarily. For instance, the mission statement of the NYT:

        “Mission: The New York Times’s core purpose is to enhance society by creating, collecting and distributing high-quality news, information and entertainment.”

        http://www.nytco.com/who-we-are/culture/standards-and-ethics/

        So clearly their self-stated mission would be to “enhance society”, not to generate profits or growth for its shareholders by selling stories in the most profitable manner. Especially today, media outlets such as the NYT very clearly attempt to further the goals of certain activist groups, NGOs, and politicians by publishing a stream of headlines with little regard to the actual scientific underpinnings, as long as a meme is furthered.

        For instance, why was the Karl paper pre-released to certain journalists allowing for a guarantee of favorable headlines on release day? Was that to spread the science among scientists, or to allow the media to “enhance society” by promoting a meme?

        It is very productive to follow the media outlets, for that is where one can observe how activists use current events as a tool to “enhance society”. If scientific thought in a field is being misused in order to further such a mission, then it is up to the “three percent” of dissenting members to stand up and let their reservations be known. And such efforts will eventually find their way into the media output, even if it takes a decade or so to appear. If the “three percent” choose to remain silent, then why should a future historian choose to conclude that science did not believe what was being printed daily in the media outlets “of record”?

      • Sci guy,

        “So clearly their self-stated mission would be to “enhance society”, not to generate profits or growth for its shareholders by selling stories”

        After decades of reading corporate mission statements, especially those of Wall Street firms, I no longer share your touching faith that they are more than public relations, unconnected to what the organization actually does.

        As for the Karl paper, do you have evidence of its selective early distribution — or was it given to all major media who wanted it?

      • After 3 decades of working from within (the fringes) of large corporations I am acutely aware that mission statements betray the peculiar world-views of upper management, even while trying to mirror their desired customer base. I have seen these statements come and go with CEOs too often to think otherwise.

        That the NYT would consider itself an “enhancer” rather than a “recorder” is revealing if we are to consider it a media outlet “of record”.

        As to a pre-release copy list, there is of course no easy way to know who was given the details earlier or later… such is the nature of “leaked” information. There is plenty of FOIA evidence that Karl allowed inside access to Andrew Revkin and other media “friendlies” he wished to cultivate, including email copy lists which read like a who’s who of Climategate/Hockstickville.

      • Sci guy,

        I believe your standards are commendable, but unlikely to be found except in Heaven. Down here working the press is just good sense.

    • ==> “purporting to show there was no concern over global cooling in the 1960’s and the first part of the 1970’s?”

      Is that what is was purporting to show, tonyb?

    • As for the first article, is not Thomas C. Peterson -one of the co authors, also a co author, with William M. Connolley of the paper purporting to show there was no concern over global cooling in the 1960’s and the first part of the 1970’s?
      http://en.wikipedia.org/wiki/William_Connolley

      Connolley is a green party advocate famous for defacing Wiki climate pages, who was banned by Wiki (a very liberal organization) for his activities. His wiki page is pretty mild – but if you look at some of his wiki edits it is pretty clear he is less than unbiased.

      A less mild view of his activities
      http://newsbusters.org/blogs/lachlan-markay/2010/10/21/wikipedia-bans-radical-global-warming-propagandist-editing-all-pages

      Any information product associated with Mr. Connolley should be considered a meritless piece of advocate propaganda.

    • “(sorry John-I know you will pop up here)”

      Pop!

      Hi Tony,

      By now, you can probably fill in my half of the argument. But anyway…

      A lot of work has been done trying to understand uncertainties in the historical record. A flavour of this can be had by reading the papers written about the current generation of in situ SST data sets. E.g.

      COBE-SST-2 (Hirahara et al. 2014)
      http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-12-00837.1

      ERSSTv4 (Huang et al. 2015 and Liu et al. 2015):
      http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-14-00006.1
      http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-14-00007.1

      HadSST3 (Kennedy et al. 2011a and Kennedy et al. 2011b):
      http://onlinelibrary.wiley.com/wol1/doi/10.1029/2010JD015218/full
      http://onlinelibrary.wiley.com/wol1/doi/10.1029/2010JD015220/full
      copies here: http://www.metoffice.gov.uk/hadobs/hadsst3/

      As well as various papers looking at the uncertainties specifically e.g.
      Kent et al. Effects of instrumentation changes on sea surface temperature measured in situ
      http://onlinelibrary.wiley.com/doi/10.1002/wcc.55/abstract

      Are observations perfectly reliable? No, never. Can we make reasonable estimates of the uncertainties? Yes.

      Best regards,

      John

      • Hi John, thanks much for these remarks

      • John

        Your antenna needs tuning. I made the original remark several days ago and repeated it again today, following Beth’s link to a real-life person who took actual real-life measurements. Have you been on holiday or something? :)

        Are the observations perfectly reliable? No.

        Are there enough accurate measurements pre 1960’ish to give us a very good idea as to what is happening globally? No.

        Can we rely on the SST’s in a few well travelled areas for short periods? Probably yes.

        Should we heed Hubert Lamb’s maxim (applied to land measurements) that we can understand the tendency but not the precision? Most definitely.

        My regards to Boggis. I hear that even large rats are in awe of him.

        tonyb

  11. Henri MASSON

    Here, already, is the reaction of Judith Curry and of those who follow her on her blog

    Very interesting, but I suspect that everyone will end up feeling validated by this news. For some it will be because the numbers are now more supportive of expected warming trends. For others it will be because their fears of official manipulation have been confirmed. The subset of people whose overall perspective is changed by this news will be vanishingly small. I fear the divide will grow rather than shrink.

    • The problem with biased science is that it polarizes rather than informs the public.

      Distributing propaganda instead of empirically obtained knowledge seems to have that effect.

      Time has come to eliminate government science funding for politically hot topics until the obvious bias in studies has been reduced.

  13. I wonder if there is a correlation between the various reactions to this brand new study, and the attitudes of the respondents to previous AGW data and studies. My cursory spot check indicated a correlation of 1.0. Shouldn’t it be more like 0, since this is a completely new study?

  14. The polar caps will melt, and the oceans will swallow cities, and the biosphere will collapse, and AGW will cost us trillions of dollars, and yet somehow the physical evidence is so infinitesimal and so miniscule that whether or not a 16 year pause in rising atmospheric temperature has occurred cannot be determined conclusively to the satisfaction of all.
    Anybody see anything incongruous about this?

  15. If they get into a political tussle over thermodynamics my old steam tables will be worth their weight in gold.

    • Don Monfort

      I wonder if anyone has read Ross’s concise and insightful analysis. Mosher? Little jimmy dee? The SkS Kid who came here bearing gratuitous insult?

      • Steven Mosher

        Ross is dead wrong about MAT. He has never looked at ICOADS.

        pity he should lose his focus

      • Don Monfort

        I see that over at WUWT you made what seems to be a nitpicking non-substantive attack on Ross’s analysis that didn’t pan out. What we are talking about here are the Karl adjustments. Do you have anything to say about your friend Ross’s detailed analysis of the Karl adjustments, or are you going to focus on one mistake you allege he made?

  16. This is amazingly tone deaf. Does not look good, and is at variance with other SST, and both satellite records. Post hoc change to get the desired headline. But the world is increasingly watching. Would not be surprised if this stunt did not produce some significant media blowback.
    Lots of ‘scientific’ acceptance of the pause, with possible explanations like Trenberth’s folly. And now there is no pause? So much for the settled-science meme.

    • The alarmists want us to know that while they haven’t been able to figure out what has happened over the last few decades, they are absolutely certain what will happen in the next century. I suspect that their admission that their understanding of the recent past was flawed might not build a lot of confidence in their claims about the future.

    • It’s almost as if they are intentionally destroying their credibility.

      • Aaron,

        I’ve often thought that some of the obvious over the top stuff from Warmers were cries for help. But I might be wrong.

        Andrew

      • Not intentional IMO. This is what happens when panic sets in because Mother Nature did not cooperate with their predictions and Paris looms. And when there is conflict of interest: Obama decreed the science was settled, so government employees have to settle it. And when AAAS plays the MSM PR game as if the internet, and blogs with sentient denizens, did not exist.
        This has a good chance of backfiring.

    • daveandrews723

      From what I have seen the science reporters in the media who are covering “man-made climate change” are useless hacks… ready to spread any alarming story put out by the warmists, without any attempt to balance their stories. They know that alarming headlines sell newspapers. They are as irresponsible as the “scientists” who are putting out this utter nonsense (for their own self-benefit).

  17. Nic Lewis http://climateaudit.org/2015/06/02/implications-of-recent-multimodel-attribution-studies-for-climate-sensitivity/
    Dare I wonder if the correlation of reactions to Lewis’ paper will have -1.0 correlation with the reactions to the paper here?

    The pause is deeply embedded in another set of NOAA’s data.

    NASA:
    “Variations in Earth’s magnetic field and atmospheric circulation can affect the deposition of radioisotopes far more than actual solar activity. ”
    http://science.nasa.gov/science-news/science-at-nasa/2013/08jan_sunclimate/

    What could Dr. Tony Philips of NASA be implying?
    Is it that climate change is related to variations in Earth’s magnetic field?

    Let’s have a look at the two variables and a possible correlation.
    Relevant Earth’s magnetic field data from NOAA:
    http://www.ngdc.noaa.gov/geomag-web/#igrfwmm
    The latest global temperature data from:
    http://www.cru.uea.ac.uk/cru/data/temperature/CRUTEM4-gl.dat

    When plotted together, on visual inspection (‘wiggle match’) there appears to be some relationship.
    This is also suggested by linear regression, giving a coefficient of determination R² = 0.81. The result is presented here:
    http://www.vukcevic.talktalk.net/MTC.htm
    There is a distinct 1940s ‘bump’ and, more importantly, the PAUSE is there.
    Eliminating the PAUSE from the geomagnetic data would not be a matter of a minor adjustment.
    Even bearing in mind the well-known maxim ‘correlation is not necessarily causation’, the above is unlikely to be just a coincidence.
    Possible mechanisms come to mind:
    a) radioisotope nucleation and cloud albedo
    b) secular change in the magnetic field is simply a proxy for ocean-floor tectonics.

    Consider it an alternative perspective on the natural variability.
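For readers wondering how a figure like R² = 0.81 is computed: a minimal sketch with synthetic stand-in series (the data and variable names here are illustrative, not the actual NOAA geomagnetic or CRUTEM4 records):

```python
import numpy as np

# Illustrative annual series (stand-ins; not the actual geomagnetic/CRUTEM4 data)
years = np.arange(1880, 2015)
geomag = 0.002 * (years - 1880) + 0.05 * np.sin((years - 1880) / 11.0)
rng = np.random.default_rng(0)
temp = 3.0 * geomag + rng.normal(0.0, 0.05, years.size)  # linearly related + noise

# Ordinary least squares fit and coefficient of determination R^2
slope, intercept = np.polyfit(geomag, temp, 1)
pred = slope * geomag + intercept
ss_res = np.sum((temp - pred) ** 2)
ss_tot = np.sum((temp - np.mean(temp)) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 2))
```

A high R² obtained this way says nothing by itself about causation, which is exactly the maxim the comment acknowledges.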

  19. Correct me if I’m wrong, but a prime reason for using the surface station + in-situ SST network is to compute in anomaly space a very long time series e.g. 1880-present of temperatures. We can show a gradual warming trend over the past century that occurs in steps. Again, this is in “anomaly space”.

    Since 1979, we have global satellite coverage from a multitude of sensors that measure (in)directly many aspects of the climate system. Numerical weather prediction techniques like 4D-Var used by the best weather models (ECMWF, UKMET) produce an analysis of the atmosphere-ocean system every day, incorporating so much data from many sources that imo it’s one of humanity’s remarkable achievements. For the past several decades, I contend that surface-station datasets are insufficient and (perhaps) not well suited to describe the exact global temperature. The methods used by NOAA/NASA/BEST are akin to the quality control procedures required by every NWP data assimilation scheme to initialize the analysis prior to running a weather forecast.

    If I wanted to create a time series of global temperature since the 1990s, then I would use NWP / variational techniques.

    • Good to find you here. Please visit more often.

    • Steven Mosher

      So if NWP is the gold standard and I match NWP…….

      the conclusion would be????? what would logic say?

      Of course if I used NWP and showed the hiatus ending then I suspect
      that people would argue that NWP is not data but a model, and they would demand we only use observations..

      But I probably agree using NWP will get you a physically defensible answer.. But why limit it to after 1990?

      I kinda liked MERRA
      I dunno.. how close to MERRA did we get?

      http://static.berkeleyearth.org/posters/agu-2013-poster-1.pdf


    • Nit: The satellite SST record only goes back to 1981.

      Still, for 1981-present, one can indeed do reanalyses. The CFSv2 reanalysis also used modern (variational) analysis techniques, and unlike purely weather models, included an ocean in its system.

      But this will have artefacts too. When the number of satellites available changes, there’s a discontinuity. When the quality of the satellite data changes, there’s another discontinuity. When you add a new type of data (AMSR-E for SST, for instance), there’s a discontinuity. And when you lose that source, such as with AMSR-E dying in 2011, there’s another.

      For weather-oriented systems, this is not so much a concern. The emphasis there is on getting the best possible analysis from which to make today’s run. If there’s a good new data source or algorithm, use it.

      But for trend analysis, every one of those discontinuities is a problem capable of creating, destroying, or misrepresenting the trend you’re trying to study.

      In other words, data are messy. Anything based on data is inherently messy. And that includes data assimilation systems and their results.

      • Steven Mosher

        If you look at the input data sets for weather models you’d be shocked.

        If your life is weather forecasting, of course you think the models are the gold standard.

        Basically we have various approaches to estimating things.

        It’s fun to compare them and try to improve them.

        A good analyst compares them all and makes balanced statements.

        advocates.. well, they have a different approach

      • @mosher:
        Actually, I wouldn’t be at all shocked by looking at the input data for weather (and ocean, ice, land, …) models. Part of my day job involves doing just that. That’s why I say data are messy. I also construct some data which are then used by such models. An even stronger reason I say data are messy.

        Forecasters are generally quite skeptical about models, and model guidance, preferring data sources, and, typically, data sources untouched by models. Radar and satellite imagery, for instance. As the forecast lead time increases, they do rely increasingly on model guidance.

        Anyhow, one of the data targets I work on is for SST, so I’ll probably write up something over at my blog this weekend. More on the general problem of trying to determine SST than specific details about this paper.

  20. Call in Mosher to explain to us why these important adjustments don’t change the conclusion.

    Andrew

  21. Pingback: Ahora intentan negar La Pausa (en el calentamiento global). De risa. | PlazaMoyua.com

  22. If the temperature was about 70 deg. F between the ice ages, shouldn’t we expect the current temperature to keep increasing until it reaches about 70 deg. F, since we are still in an ice age? What difference does it make what causes a rising temperature, since temperature always rises following an ice age and we are still in one?

  23. Why not manufacture a homogenized ocean set by synthesizing a decontamination of the boat data to conform with the more robust – and scientifically intended – buoy data? Why did they choose to make the buoy data “more like the boat data” and not the other way around? Am I missing a clear reason, or is the obvious answer staring me in the face the real reason?

  24. Adjusting buoy data to match “ship of opportunity” data is totally bizarre. We spend billions of dollars launching satellites and deploying state of the art ocean buoys and then “correct” their data to match ship obs, really?

    The more details of GST estimation I am exposed to just confirms my existing bias that the entire endeavor is silly and subjective.

    What will alarmist bureaucracy stoop to next?

  25. Ah, it seems as though things have become utterly shameless as the Paris meeting nears. This was bound to happen.

    I think that a novel aspect of this is some of the new language being used in the AP story. Seth Borenstein (who writes several biased articles every week) uses the term “non-scientist” three times when referring to people skeptical of CAGW. I have to wonder if we’ll see this offensive tactic again. The piece also tries to re-write the IPCC’s handling of the hiatus–as though the IPCC was just humoring the “non-scientists”. I suppose we’ll see that theme again, too.

    In any case, the role of activist journalists like Mr. Borenstein interests me and I was wondering if there is anything worth reading about their involvement in the climate change debate. Thanks in advance for any recommendations. :-)

    • John Archer

      In any case, the role of activist journalists like Mr. Borenstein interests me and I was wondering if there is anything worth reading about their involvement in the climate change debate. Thanks in advance for any recommendations.

      Well, following their deaths by hanging—as commendably advocated* by Luboš Motl for their crimes against humanity—their thus accelerated obituaries would probably be a good start.

      You’re welcome.

      * Minor natural extrapolation by me to Schmidt, Mann et al‘s coterie of media henchmen.

  26. The scientific credibility of government global warming scientists is on hiatus.

  27. David Wojick

    What this actually busts is the surface statistical models. Booya!

  28. “The extension of high-latitude arctic land data over the Arctic Ocean is also questionable. Much of the Arctic Ocean is ice-covered even in high summer, so that the surface temperature must remain near freezing. Extending land data out into the ocean will obviously induce substantially exaggerated temperatures.”

    They did not do this – tell Pat to read the paper again. They provide an estimate of what that would mean but do not include it in their surface temperature product.

    “I am also unconvinced by NOAA’s gap filling in the Arctic, and in my opinion this introduces substantial error into their analysis. I addressed the issue of gap filling in the Arctic in this recent publication: Curry JA, 2014: Climate science: Uncertain temperature trends. Nature Geoscience, 7, 83-84. Relevant text:
    Gap filling in the Arctic is complicated by the presence of land, open water and temporally varying sea ice extent, because each surface type has a distinctly different amplitude and phasing of the annual cycle of surface temperature. Notably, the surface temperature of sea ice remains flat during the sea ice melt period roughly between June and September, whereas land surface warming peaks around July 1. Hence using land temperatures to infer ocean or sea ice temperatures can incur significant biases.”

    Two things. First it sounds based on your statement like you’re implying you published a peer-reviewed paper on the subject of infilling in the Arctic which is certainly not the case.

    Secondly, you have never shown any original analysis to counter the temperature analyses performed by Cowtan and Way and Berkeley with respect to the Arctic. In the time since you made your ‘comments’ there have been a number of papers (see below) which have provided validation for our approach from atmospheric reanalysis datasets, isolated weather stations and satellite datasets. We have provided numerous follow-up investigations which once again support the methodology (see below) and include assessments against the Atmospheric Infrared Sounding Unit for instance. You can’t continue to keep hand-waving on the subject. If you want to say it can’t be done then please show us why – also provide your cross-validation statistics on your proposed improvement ;) If not, show us why it’s better to pretend the Arctic is warming at the global average rate (FYI it’s not).

    Comiso, J. C., & Hall, D. K. (2014). Climate trends in the Arctic as observed from space. Wiley Interdisciplinary Reviews: Climate Change, 5(3), 389-409.

    Dodd, E. M., Merchant, C. J., Rayner, N. A., & Morice, C. P. (2014). An Investigation into the Impact of using Various Techniques to Estimate Arctic Surface Air Temperature Anomalies. Journal of Climate, (2014).

    Simmons, A. J., & Poli, P. (2014). Arctic warming in ERA‐Interim and other analyses. Quarterly Journal of the Royal Meteorological Society.

    Updates:
    http://www-users.york.ac.uk/~kdc3/papers/coverage2013/updates.html

    • Robert

      We can clearly see other examples of Arctic amplification, for example the 1920 to 1940 period, the 1818 to 1860 era, the first few decades of the 16th century and of course the extended warming during the Viking period.

      As enhanced CO2 couldn’t have been responsible, what do you think could have been their causes?

      Tonyb

    • This is a pretty astonishing comment:

      “First it sounds based on your statement like you’re implying you published a peer-reviewed paper on the subject of infilling in the Arctic which is certainly not the case.”

      I was invited by Nature Geoscience to write a piece on the arctic infilling issue, based on my extensive expertise on the topic. You then imply I am trying to mislead people into thinking this is a peer-reviewed paper? Beyond bizarre.

      I regard your method for interpolating into the Arctic to be deeply flawed. I have other things that I regard to be more important and/or interesting that I am working on, so at present I don’t intend to pursue this issue beyond critiquing on the blog any publications that I regard to be flawed.

      I agree with Ryan Maue that with regards to recent (post 1990’s), we should be looking at the reanalyses, which provides the basis for a dynamically and thermodynamically sensible ‘interpolation’ into the data sparse region of the Arctic.

      • “I was invited by Nature Geoscience to write a piece on the arctic infilling issue, based on my extensive expertise on the topic.”

        You have expertise, certainly, but I would question whether it is entirely appropriate in this case, because it’s clear from your criticisms that you haven’t investigated this subject matter deeply. I presented to you three papers which provide strong indications that our approach is appropriate. There will be an additional paper showing it is appropriate in the near future as well.

        “I regard your method for interpolating into the Arctic to be deeply flawed. I have other things that I regard to be more important and/or interesting that I am working on, so at present I don’t intend to pursue this issue beyond critiquing on the blog any publications that I regard to be flawed.”

        And you have not provided any reasonable commentary as to why it is flawed.

        “I agree with Ryan Maue that with regards to recent (post 1990’s), we should be looking at the reanalyses, which provides the basis for a dynamically and thermodynamically sensible ‘interpolation’ into the data sparse region of the Arctic.”

        Well luckily the reanalysis data support our results – we looked into this in rather heavy detail in our analysis and the updates afterwards. In many cases they show more warming than ours rather than less. Have a look at the study I cited above with ERA-Interim – the best performing reanalysis in the Arctic, even against out-of-sample observations.

        We also have a reanalysis-guided infilling (MERRA) which gives almost identical results.

        “Gap filling in the Arctic Ocean based on land temperatures is unphysical. Here are some things that you would have to include in a physically reasonable analysis: Daily sea ice concentration analysis
        Satellite or buoy measurements of open water SST in the Arctic Ocean (e.g. polar pathfinder SST)”

        Actually if you compare to the Arctic buoy station data (Rigor et al., 2002) you’ll see that it is ‘unphysical’ to use SST anomalies to extrapolate and that it is optimal to use land anomalies. This is in the Cowtan and Way (2014) paper where we validate against those stations over a long period. Please read the paper and the cross-validation measures used. Reading the paper you will see we do in fact use the sea ice data in the interpolation so that land temperatures are not extrapolated over areas which are not covered by sea ice. Once again it is in the paper and subsequent update documentation…

        What do atmospheric reanalysis, drifting buoy stations, land surface temperatures measured from satellites and kriging guided by reanalysis/satellites/land stations have in common?

        They all give similar results.

      • Even if your analysis gives similar results over a certain period, your method is unphysical and I would not expect comparable results going forward. Getting ‘close to’ the ‘right’ answer for a short period of time, for the wrong reasons, frankly isn’t useful.

        You use climatological sea ice data, not the daily (or even monthly) varying sea ice. During the summer time, sea ice surface temperatures have nothing to do with land temperatures.

        I can probably figure out some algorithm that ‘works’ for awhile based on the price of tea in China.

      • Don Monfort

        The inexperienced newbie, not quite a PhD, will often make profound discoveries with short-term curve fitting.

      • Steven Mosher

        Judith

        “Even if your analysis gives similar results over a certain period, your method is unphysical and I would not expect comparable results going forward. Getting ‘close to’ the ‘right’ answer for a short period of time, for the wrong reasons, frankly isn’t useful.”
        ##################################
        the problem is estimating areas where there are no data.
        all methods of interpolation are “unphysical” except perhaps
        reanalysis, and even there there are aspects of estimation where
        physical processes are represented “unphysically”
        Since the problem is “Estimate” you cannot reasonably challenge
        a method that passes validation without
        A) admitting you have nothing better
        B) doing a better job yourself

        Leaving the area unfilled is ALSO an estimate. And that estimate
        performs worse in validation.

        ############################

        You use climatological sea ice data, not the daily (or even monthly) varying sea ice. During the summer time, sea ice surface temperatures have nothing to do with land temperatures.

        1. of course that will introduce errors. all methods have errors.
        2. I have yet to see any proof that land temperatures have nothing
        to do with the temperature of air over the arctic. Nothing is a pretty
        strong statement. To show this, you would need to compare the two.
        Problem: one of them is missing.

        ###############################################
        I can probably figure out some algorithm that ‘works’ for awhile based on the price of tea in China.

        I doubt whether the algorithm would validate out of sample.

      • Don Monfort

        “I doubt whether the algorithm would validate out of sample.”

        Have the warmistas heard about this Mosher principle?

      • Kriging temperatures (eg over the Arctic)

        An unexpected chance to gnaw on this bone, having done a little homework. Kriging seems more tricky than it may look! There are two forms.

        There is statistical estimation. This requires multivariate normal data with an estimated covariance kernel. BEST make a limited attempt at justifying their methodology in these terms. It is most unlikely this statistical approach could be justified for temperature data – it would be remarkable if temperatures are anywhere near normal (gaussian), let alone with the same covariance kernel over the whole planet.

        The second way of using kriging is as purely a functional interpolation in reproducing kernel Hilbert space. No statistical estimation. This is probably the only way it can be applied to temperatures. Not for the mathematically faint hearted I would suggest and needs careful documentation as regards the effective interpolating functions. Not provided for temperature kriging it seems.

        Not necessarily a flawed interpolation technique, but much more detailed work needed to justify choices of interpolation functions and how these affect the results.
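For readers unfamiliar with the mechanics being debated: a minimal, noise-free simple-kriging interpolation in one dimension with a Gaussian covariance kernel. The kernel, length scale, and data are illustrative assumptions, not the configuration BEST or Cowtan and Way actually use:

```python
import numpy as np

def kriging_predict(x_obs, y_obs, x_new, length_scale=1.0):
    """Simple (zero-mean, noise-free) kriging with a Gaussian kernel."""
    def kern(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)
    K = kern(x_obs, x_obs)                 # covariance among observations
    k_star = kern(x_obs, x_new)            # covariance of obs vs. target points
    weights = np.linalg.solve(K, k_star)   # kriging weights
    return weights.T @ y_obs

x_obs = np.array([0.0, 1.0, 2.5, 4.0])
y_obs = np.array([0.0, 0.8, -0.3, 0.5])
x_new = np.array([0.0, 3.0])
pred = kriging_predict(x_obs, y_obs, x_new)
# With no nugget (noise) term, the predictor reproduces the observations exactly
print(pred)
```

The stationarity criticism quoted below corresponds to using this single kernel everywhere on the surface; justifying that choice is the “detailed work” the comment calls for.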

      • Regarding Kriging, I read this when the Sherwood paper on RAOB data came out:

        “Kriging is based on regionalized variable theory, which assumes that the spatial variation in the data being modeled is homogeneous across the surface. That is, the same pattern of variation can be observed at all locations on the surface.”

        Sounds as if sea ice/open ocean versus land makes Kriging a very dubious choice for the Arctic.

        I also suspect it is not valid for RAOB data.

      • > I can probably figure out some algorithm that ‘works’ for awhile based on the price of tea in China.

        Go for it.

        While you’re at it, an explanation as to why the correlation between the price of tea in China should be fed to the algorithm might be nice.

        Finding two other indicators that give similar results, like Robert Way did, would be even nicer.

      • “Even if your analysis gives similar results over a certain period, your method is unphysical and I would not expect comparable results going forward. Getting ‘close to’ the ‘right’ answer for a short period of time, for the wrong reasons, frankly isn’t useful.”

        I’m really not sure where the unphysical arguments come in. We’re not talking about absolute temperatures, we’re talking about anomalies, and there are far fewer physical properties which can introduce bias when you interpolate anomalies. Even in absolute terms you can summarize about 95% of geographic variation in air temperatures across land with a small number of covariates (x,y,z) in, for instance, a thin plate spline algorithm. The idea that anomalies (which are much more autocorrelated at larger distances) can’t be used for interpolation in this manner is somewhat silly. What is unphysical about the interpolation when it’s guided by MERRA-based anomalies or the UAH-based anomalies? I can tell you that the UAH-based ones probably miss some of the near-surface warming which is characteristic of Arctic amplification.

        “You use climatological sea ice data, not the daily (or even monthly) varying sea ice. During the summer time, sea ice surface temperatures have nothing to do with land temperatures.”

        Using a climatology inserts a bias but it is smaller than the bias introduced using a variable (monthly) ice mask. These tests have been done. Expect to hear more on this in the future.

        “I can probably figure out some algorithm that ‘works’ for awhile based on the price of tea in China.”

        As Mosher said it wouldn’t pass cross-validation. If you have an alternative idea then please go ahead and present it – but there’s a certain sense of irony in watching you continually say that you’ve countered our approach when you’ve provided zero evidence to support your views.
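The cross-validation repeatedly invoked in this exchange can be sketched in miniature: hold each observation out, predict it from the rest, and compare estimators by holdout error. A toy 1-D version on synthetic data, where `global_mean` stands in for “leave the region at the large-scale average” and `nearest` for a local interpolation; all names and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 40))
y = 0.3 * x + rng.normal(0, 0.1, x.size)  # field with a spatial trend + noise

def loo_rmse(predict):
    """Leave-one-out RMSE of a point estimator predict(x_train, y_train, x0)."""
    errs = []
    for i in range(x.size):
        mask = np.arange(x.size) != i
        errs.append(predict(x[mask], y[mask], x[i]) - y[i])
    return float(np.sqrt(np.mean(np.square(errs))))

global_mean = lambda xt, yt, x0: yt.mean()                   # ignore location
nearest = lambda xt, yt, x0: yt[np.argmin(np.abs(xt - x0))]  # local interpolation

print(loo_rmse(global_mean), loo_rmse(nearest))
```

On a field with real spatial structure, the local estimator wins this comparison, which is the sense in which “leaving the area unfilled performs worse in validation.”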

      • No all I have to do is reject your approach and not use your data set; there are other surface temperature analyses that I regard as more reliable.

      • Robert Way, the “unphysical” part is the thermodynamics. -50 C over sea ice represents 162 Wm-2; -2 C over ocean water represents 302 Wm-2 plus some latent; 18 C over open water represents 406 Wm-2 plus approximately 100 Wm-2 of latent. Assuming that a temperature anomaly over sea ice is comparable to one over -2 C Arctic ocean water or 18 C open water is pretty much naive.

        Mathematically, the kriging is impressive. Thermodynamically it is just about useless.
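The flux figures quoted can be checked against the Stefan-Boltzmann law F = σT⁴; a sketch assuming unit emissivity (the comment’s numbers evidently embed additional assumptions, since a blackbody at -50 C emits closer to 141 Wm-2, though the -2 C and 18 C figures are close):

```python
# Blackbody emission for the surface temperatures quoted above.
# Emissivity is taken as 1; real surfaces emit somewhat less.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(temp_c):
    """Blackbody emission in W/m^2 for a surface temperature in Celsius."""
    return SIGMA * (temp_c + 273.15) ** 4

for t in (-50.0, -2.0, 18.0):
    print(t, round(flux(t), 1))
```

The point being made survives the exact numbers: equal temperature anomalies at very different base temperatures correspond to very different energy changes, because flux goes as the fourth power of absolute temperature.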

      • Steven Mosher

        TE.
        Yes kriging across the boundary may introduce errors.
        That’s not the question.
        The question is are the errors significant?
        Given the spatial area involved.. No.
        We are down to polishing the bowling ball.

      • David Springer

        If you want a bowling ball analogy: using old temperature series, from instruments never meant to establish global average trends to hundredths of a degree C per decade, is like polishing an old pitted bowling ball whose gross flaws make polishing it an exercise in futility.

        A more applicable polishing analogy to grossly inadequate temperature data is “you can’t polish a turd”.

      • Steven Mosher

        Foias.
        Yawn.
        Read more.

      • Steven Mosher, it is polishing, but a bowling ball wasn’t the first thing that popped into my mind.

        If you want to see Judith’s point you need to consider what the polished temperature represents in thermodynamics. If you neglect latent heat, just the range of possible SST gives you a thermodynamic uncertainty of about 0.35 C; that is, SST could vary by up to 0.35 C with zero change in energy. You could play around with converting all the temperature data to S-B energy before taking the anomaly, and you would see that the tropics have a higher energy weighting than Arctic winter temps.

        Since there isn’t much other data to use, though, SST and mean surface temperature have to be used; but when you assume that the mean temperature can be reliably converted to S-B energy you are stepping in the “bowling ball”. There is a “hidden” uncertainty you can’t get away from.

      • > If you want to see Judith’s point you need to consider the what the polished temperature represents in thermodynamics.

        Wait, Cap’n. Are you suggesting that unless we include daily sea ice concentration analysis or buoy measurements of open water SST in the Arctic Ocean, no statistical analysis can ever make any physical sense?

      • Willard, “Wait, Cap’n. Are you suggesting that unless we include daily sea ice concentration analysis or buoy measurements of open water SST in the Arctic Ocean, no statistical analysis can never make any physical sense?”

        No, I am saying there is considerable uncertainty as to what energy that temperature actually represents. It makes physical sense when you include the realistic uncertainties. As far as thermo goes, tropical SST plus an allowance for ice coverage would be more meaningful, but you have to use what you have. An average ocean temperature would be the best thermo reference, since it only varies by a degree or so, but it isn’t exactly an easy thing to determine.

        I posted that below, but see how the 0-100 meter temperature anomaly is right in the middle of the pack. That is one gut check. With a planet scale problem I think lots of gut checks are a good thing.

      • > No all I have to do is reject your approach and not use your data set […]

        All one has to do to counter is to reject?

        ***

        > I am saying there is considerable uncertainty as to what energy that temperature actually means.

        Judy’s argument was that Robert Way would need to include daily sea ice concentration analysis or buoy measurements of open water SST in the Arctic Ocean, Cap’n. Otherwise she rejects his analysis. That’s all she needs to do, it seems.

        Your thermointerpretation of Judy’s argument looks a bit farfetched.

      • Willard, “Your thermointerpretation of Judy’s argument looks a bit farfetched.”

        Pretty telling you would think that. Since it is supposed to be an energy balance problem, perhaps you should read more.

      • > perhaps you should read more.

        I would read all the quotes from Judy that justify your thermointerpretation of her argument.

        Until then, all I have to do is to reject your thermointerpretation.

      • Willard, “Until then, all I have to do is to reject your thermointerpretation.”

        Reject away then. Not particularly unexpected. If you think of it as more of a puzzle, though, you might find some enlightenment. Basically, a 10 C anomaly over sea ice in winter should not be averaged with a 1 C anomaly over open water, which is why she would rather ignore C&W’s contribution.

      • David Springer

        Willard your rejection of a temperature series data set is not equivalent to Curry’s rejection of same. She’s a qualified expert user of such data sets and you’re a bored housewife taking anonymous, meaningless potshots. Why are you acting out in this way?

      • > She’s a qualified expert user of such data sets

        I reject that as an appeal to authority.

        Only Chuck Norris is so expert in everything that his rejections become counters. His stare suffices to reject almost everything. For anything else, there are round kicks.

      • David Springer

        I reject you as someone not playing with a full deck.

      • Willard, Dr. Curry shouldn’t have to remind a “climate scientist” like Robert Way of the most basic of the laws of thermodynamics.

        From the Wikipedia page on the Zeroth Law.

        “The law is important for the mathematical formulation of thermodynamics, which needs the assertion that the relation of thermal equilibrium is an equivalence relation. This information is needed for a mathematical definition of temperature that will agree with the physical existence of valid thermometers.”

        It is really pretty simple. If you want to use temperature as a proxy for energy you have to consider the relationship between your proxy and reality. At the Top of the Atmosphere you have your energy reality. ~240 Wm-2 in/out and ~0.6 +/-0.4 Wm-2 imbalance in “equilibrium”. That reality only includes one form of energy transfer, radiant.

        At the “surface” you have 15 C +/- about 1.7 C, which represents ~390 Wm-2 radiant, ~25 Wm-2 convection and ~90 Wm-2 latent heat transfer up/down. You have a temperature of ~15 C representing an energy of about 505 Wm-2, which should have an “effective” temperature of about 34 C. Your 15 C temperature proxy does not represent your energy all that well. Stephens et al. pointed out that the “surface” uncertainty in energy terms is about +/- 17 Wm-2. I personally think that could be reduced to about +/- 8 Wm-2, but it would be a bitch to get lower.

        https://judithcurry.com/2012/11/05/uncertainty-in-observations-of-the-earths-energy-balance/

        Your “greenhouse effect” is actually closer to 505-240=265 Wm-2 not 150 Wm-2. I don’t have a reference handy, but Manabe discussed this in the past when he estimated the GHE in terms of temperature could be 90C.

        This confusion is Thermo 101 stuff, Frame of Reference. At the “surface” you have ~390, 25, and 90 Wm-2 which are all heat losses, countered by approximately 340 Wm-2 DWLR, your heat loss prevention. That is a 165 Wm-2 “surface window” not an ~40 Wm-2 “window” which actually would be closer to 20 Wm-2 if that estimation were valid. That is also noted in Stephens et al.

        Since Cowtan and Way are incorporating more data less representative of actual energy, they are polishing the wrong turd. Why? Because if there is a usable “equilibrium” of sorts on human time scales, advective heat transport would have to roughly equal TOA heat loss. Your heat engine, the tropics, is a better temperature proxy than an “average” of some inconsistent and poorly defined “surface”. Your polar temperatures are your heat sink. Averaging your heat sink with your heat source is crapola.

        What Cowtan and Way are doing is trying to determine engine efficiency by checking the temperature of the tires. They have done a wonderful job of checking the tires and that is about it.

      • > Dr. Curry shouldn’t have to remind a “climate scientist” like Robert Way of the most basic of the laws of thermodynamics.

        You’re putting your own words into Judy’s mouth, Cap’n. Instead of whispering to her your own response, you ought to write a blog post. Until I see Judy’s seal of approval, I reject it as a tasteless trick.

        ***

        > the relation of thermal equilibrium is an equivalence relation.

        Good. Now apply the notion of stochastic equivalence, which is the relevant notion for what “scientists” such as Robert Way do, and report.

      • Willard, “Good. Now apply the notion of stochastic equivalence, which is the relevant notion for what “scientists” such as Robert Way do, and report.”

        Willard, they have a great model for a thermodynamically irrelevant entity. That is the whole point, bubba. They’ve done a fine job and I am sure their mommas are proud, but a “global” anomaly by itself is meaningless.

        I mentioned before that if they or Mosher converted all the absolute temperatures to S-B energy before figuring the anomaly, they would have a second reference, also with issues, but naturally weighted for regional sensible heat.

        Not that hard, or at least it should not be, to understand.

        -50 C and 34 C have an average of -8 C, but if you average the S-B energy equivalents and then convert back, the average temperature would be about 1.5 C. Basic math. You have a wonderfully accurate equivalent of teats on a boar hog.

        The whole value of an anomaly depreciates as the range of absolute temperatures increases. SST spans a small range of temperature in comparison, so it is a more reliable anomaly. For a planetary-scale problem you would want as many somewhat reliable references as possible to estimate a reasonably accurate range of uncertainty. A temperature reconstruction paired with an energy reconstruction would give you an uncertainty range of about ±0.3 C for the oceans.
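The energy-weighted averaging described in the comment can be checked in a few lines. This is a minimal two-point sketch, not any group's published method; `effective_temp_c` is a made-up helper. With an exact Celsius-to-Kelvin conversion (adding 273.15), the simple average of -50 C and 34 C comes out to -8 C, while the Stefan-Boltzmann-weighted effective temperature comes out near +1.5 C — the warmer region dominates because emission scales as T^4:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temp_c(temps_c):
    """Convert each Celsius temperature to blackbody emission (sigma*T^4),
    average the emissions, then convert the mean emission back to an
    effective temperature in Celsius."""
    kelvins = [t + 273.15 for t in temps_c]
    mean_flux = sum(SIGMA * k**4 for k in kelvins) / len(kelvins)
    return (mean_flux / SIGMA) ** 0.25 - 273.15

simple = (-50 + 34) / 2                 # plain arithmetic mean: -8.0 C
weighted = effective_temp_c([-50, 34])  # about +1.5 C
print(simple, round(weighted, 1))
```

The gap between the two averages grows with the spread of the absolute temperatures, which is the commenter's point about anomalies over wide temperature ranges.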

      • CD:
        I might be understanding. The surface of the Arctic Ocean is the radiator. When it is closed, there is sea ice, and you place your temperature gauge 2 meters above that. It reads -30.0 C. You open the radiator to run full, and the sea ice is not there. The thermometer now reads -1.0 C. One might ask: who would place a temperature gauge downwind of the radiator? For a car on a warm day, that gauge will read higher as the cooling system works harder and maintains the equilibrium.

    • Gap filling in the Arctic Ocean based on land temperatures is unphysical. Here are some things that you would have to include in a physically reasonable analysis:

      Daily sea ice concentration analysis
      Satellite or buoy measurements of open water SST in the Arctic Ocean (e.g. polar pathfinder SST)

      See this doc from WHOI, a good source of model assimilation and data available in the Arctic Ocean
      http://www.whoi.edu/science/PO/arcticgroup/projects/andrey_project2/indexAP.html The ocean assimilation

    • Curious George

      I have been trying for years to obtain an error analysis of climate models: the error caused by grid selection; the error caused by the approximation to convection; the error caused by the assumption that the latent heat of water vapor in the air is negligible. Can you please provide a link to such an analysis?

    • Talk about someone who doesn’t recognize when he’s punching above his weight class.

    • Mike Flynn

      Robert Way,

      I hope you are not seriously expecting to be taken seriously, with statements like this –

      “Reanalyses, created by running modern weather models on historical data to infer the state of the atmosphere.”

      Amongst the host of questions this begs is this one – “How are modern weather models superior to older weather models?”

      Do they incorporate more modern physics? Are they capable of providing the same wrong answers much more quickly?

      The results of your reanalyses make absolutely no difference to the present, nor to the future. Seriously pointless, unless you are trying to provide anti fossil fuel propaganda.

      Is there a point to the exercise? It doesn’t appear to achieve anything of utility.

  29. > Doug MacNeal’s take [link].

    That says quite a lot.

  30. All the usual skeptic uproar whenever work on dataset improvements is published. And yet I rarely see literature showing that there wasn’t a sound basis for doing so. Clickbait. JP

    • You need to learn the difference between improvements and plain old fudging. I refer to my physics professor, Hamorski, and his Hamorski fudge factor: a multiplicative value derived by dividing the ‘right’ value by the experimental value, applied to the observed value, before submitting the experiment.
      Essay When Data Isn’t in ebook Blowing Smoke. Please return with rational explanations for what is indelibly shown there. Rutherglen and Reykjavik, for example. Or DRX 2013 to NClimDiv 2014 for Maine.

      • I anxiously await the Curry, Tisdale, Michaels, et al. publication that empirically demonstrates the errors in the Karl et al. work. But I’m not holding my breath. JP

      • You already have it, above. The old canard about not peer reviewed? Behold Marcott (paleo), PMEL and Fabricius (2x ocean acidification), Cazenov (SLR)… All evidence of peer reviewed academic misconduct.
        Other essays in Blowing Smoke, which you appear good at.

    • Mike Jonas

      atarsinc – To put it a bit more simply than ristvan : You will never see anything using the la la la method. (Sorry about the mixed metaphor).

    • Don Monfort

      Yeah, Karl et al. is unassailable. As if the 7 billion folks who are not losing sleep over AGW give a flying f—. On to Paree! Good luck with that.

    • Steven Mosher

      Me?

      I am suspending judgement on the paper until I have an opportunity to study it in detail.

      The uncertainty monster left the building and some people have decided to judge without looking. Or they judged based on the names on the paper.

      • Or they judged based on the smell test.

      • Then read/ study faster. Some of us have already paid the paywall and read the thing. After the 1400 EDT embargo. Abysmal.

      • Don Monfort

        I am with Mosher, on this one. Paper got to be studied. We have no reason to doubt that a few months before the crucial-to-the-survival-of-the-planet Paree junket, the Gruberites at the NOAA have suddenly found out that they have been failing to account for the temperature properly. Of course, the corrections and updates erase the pause. We wouldn’t expect it to be any other way. And we know we can count on the story of this newly discovered faux pas being legit, because we all got to keep our doctors, we are on average saving $2500/yr on our health insurance, we are kicking hell out of ISIS, and Hilly’s Billy didn’t touch that young intern woman, uh what’s her name?

      • Don Monfort

        Ross has read it, Showtime. Go see what he has to say.
        Your reflexive defense of the consensus BS is getting comical.

    • Improvements. Right.

  31. Ben Palmer

    “Kevin Trenberth, […] said the new work was “good and careful analysis” but only confirms what most scientists already knew, that there was no such hiatus,” contradicting his own 2009 statement: “The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.”

  32. The pause and pre-pause periods in the various temperature records, including the new Karl et al. Interestingly, the new record is most similar in trends to Cowtan and Way and Berkeley Earth over the 1998-2014 period: http://i81.photobucket.com/albums/j237/hausfath/temperature%20trends%20comparison_zpsagn2clbe.png

    • Hmm, that image host seems to be having problems; this might work better: http://s28.postimg.org/6xwxipoy5/temperature_trends_comparison.png

    • “the new record is most similar in trends to Cowtan and Way and Berkeley Earth”

      Is that good or bad, Zeke?

      Andrew

      • Just interesting. I’m not convinced their new SST adjustments are better than those in HadSST3, but that’s also not my area of expertise. Either way, their SST adjustments coincidentally have the same effect on the trend as the greater Arctic coverage in Berkeley and C&W.

      • Don Monfort

        Greater arctic coverage? Is coverage the word you want?

    • Zeke,
      I think you’ve mis-labelled the x-axis in the left-hand panel. Should be 1951-1997 – I think.

      • Try embedded tweets:

        Zeke, what I see here are large error bars for the shorter period and lower rates of change than I thought GCMs found for the earlier and longer period. From a scientific perspective, it seems like this whole argument is rather meaningless and as much about spin as anything.

        A more meaningful question is why the TLT satellite data and the radiosonde data disagree so much with the surface data. It seems to me, with all due respect, that this question is like a third rail that most climate scientists steer clear of even mentioning.

        As has been pointed out many times, the data is noisy and the signal we seek is quite small compared to the size of the variables and MUCH larger than the truncation errors in the GCMs. Perhaps better data is a very important investment we need to make.

    • Looks like the plot shows a low of .06 C / decade and a high of .11 C / decade. Looks to be a difference of 0.5 C / century.

    • human1ty1st

      Thanks Zeke, that really clarifies what everybody is getting so hot under the collar about. But really, I thought the real comparison was the 1970s–1990s warming rate with the ‘hiatus’. It was that rate of warming that matched the expectation for AGW going forward, and it’s the failure to match that rate which seems to demand explanation. Along with the model/obs comparison (maybe they are one and the same thing).

    • Perhaps I am missing something. The uncertainty ranges don’t seem to be correct.

      The 1998-2014 trend has almost four times the uncertainty of the 1951-1997 trend? Really?

      My understanding was all the satellite thingies, automated weather stationy things, and the thousands of bob-in-the-ocean thingies gave us better information in modern times. If they don’t reduce uncertainty why do we waste money on them?

    • Zeke Hausfather,

      I have never really bought into the definition of hiatus or pause being applied to post-’98 records; I have always just looked at it as a lower trend line from ’78. It amuses me that both sides really get caught up in the description battle, and it seems as if real science suffers in a silly skirmish. I know politics and propaganda have a lot to do with it, and that reflects badly on science IMO.

      As far as all the dispute over temperature adjustments, I would blame that on a lack of communication or transparency. In particular you are saying that it is interesting or coincidental that Berkeley Earth, C&W and now Karl et al. are all very similar in their upward-trending adjustments. Now wouldn’t that seem like an obvious red flag, to any outside observer, if there is a question as to why or how these adjustments are made? Correct me if I’m wrong, but isn’t the upward trend of both B.E. and C&W attributed to the addition of Arctic temperatures, whereas Karl et al.’s is from adjustments to marine records? I don’t know which would be more coincidental. Do you see any possibility that these coincidences could be rationally explained?

        The Met Office mentioned the pause, wrote three papers on it, and had an international meeting about it. The IPCC mentioned the pause. I heard Thomas Stocker himself mention the pause.

        http://www.nature.com/news/ipcc-despite-hiatus-climate-change-here-to-stay-1.13832

        Whatever its cause, it existed. Why do some people deny it?

        Tonyb

      • Tony B, it’s not a matter of denial for me; it’s simply a matter of how long you make a trendline. If you go from ’78 to ’98 and then ’98 to present, you have a robust upward trend followed by a fairly flat one on most all records. If instead you measure from ’78 to present, you get a less robust upward trend. One could say there is a pause, or that there isn’t a pause, and not be lying either way.

      • Whatever its cause, it existed. Why do some people deny it?

        Tonyb

        They must be the “Deniers” that everybody dislikes.

  33. Bad behavior, along with bad science, is being rewarded. The rewards go to President Obama, who has directed his minions at NOAA & NASA & NCDC & others to get rid of his opponent: the hiatus. In true Al Capone Chicago style, Karl et al., with the blessings of Trenberth and others, have, via data management, assassinated a personage, a stumbling block, who had been constraining Obama’s triumphant march on Paris. No more hiatus; no more obstacles remain to descending the progressive rabbit hole. Government by fiat is alive and well in Washington. This date, June 4th, will live in infamy for the destruction of reason and the triumph of bestial power. No smoke and mirrors. Just raw power: this is the way things will be.

    A complicit Congressman as in Whitehead sets the tone of intimidation for any and all who thoughtlessly wish to decry such Imperial behavior.

    Skeptics of the World, Unite! You have nothing to lose but your shackles.

    • Whitehouse not Whitehead.

      • So, the latest NOAA cut reduces the pause but also reduces the longer term warming?

        NPR was pretty quick with
        ‘Skeptics Contradicted – No Pause’
        rather than the other equally valid assessment:
        ‘Global Warming not as bad as previously thought’.

        The real significance is not so much the recent cooling trend ( which may end with this year’s El Nino ) but that the observed trends are all less than the low end model predictions and have been declining.

      • rogerknights

        Actually Whitehorse (IIRC).

      • rogerknights

        Oops–scratch that! Whitehouse it is.

  34. Pingback: A new volley in the climate wars: here’s a playbill to the action! | The Fabius Maximus website

  35. NOAA are acting like 5 year old children who have been told they can’t have something.

    Pathetic!

  36. One of the more interesting gaps to fill would be Greenland.

    Looking at the periphery in the GISTEMP data, one would fill in anomalously high trends:

    But, UAH MSU seems to think high elevation Greenland has been cooling
    ( as well as Antarctica and the Tibetan Plateau ):

    Contamination? Or real cooling at higher elevations that simple-minded interpolations are missing?

    • Also note the incoherence of trends in the North Polar regions ( warming cells next to cooling cells ).

    • What is the trend of the Summit station in Greenland?

      The station is in the middle of the green (cold) spot.

      GIStemp should be showing data since there is a land station there.

    • Using UAH for the high elevation portions of the ice sheets is incredibly naive. There’s a reason RSS exclude that data.

      • Using UAH for the high elevation portions of the ice sheets is incredibly naive. There’s a reason RSS exclude that data.

        That’s possible, surface pressure at the Greenland Summit would be quite low. However, the cooling trend is interesting.

        And would interpolating over Greenland based on peripheral sea level stations also be incredibly naive?

        I think so.

      • Greenland aside, the UAH trend map indicates a fair amount of spatial consistency with the surface temperature trends. Cooling in the Eastern Equatorial Pacific, two warming lobes in the Western Pacific (consistent with PDO?), cooling in the Southern Ocean, and maximal warming in the Arctic.

  37. Mike Flynn

    And still no scientist (or even climatologist) able to demonstrate the warming powers of greenhouse gases in the presence of unbelievers.

    Just more “new analysis” of a mishmash of temperatures from unknown distances above the nominal surface taken under a variety of changing conditions, temperatures from buoys in the ocean, temperatures of buckets of sea water plucked at random from the sea, supposedly accurate engine-intake sea temperatures taken at inconsistent positions at inconsistent times, all purporting to be global surface temperatures.

    Any real scientist would recoil in horror if asked to stake his or her reputation on the resultant average, as a reliable scientific measure.

    And still millions die from disease, starvation, war, torture and other forms of deprivation. How will the reanalysis of the meaningless help to benefit these people? Or is it all a useless exercise demonstrating the ability to frivolously waste tax payers’ money?

    Still, the analysts need a job, I suppose. Somebody benefits!

  38. Well I just went and reread the Huang et al. paper
    http://rda.ucar.edu/datasets/ds277.0/docs/ERSST.V4.P1.JCLI-D-14-00006.1.pdf

    I remember spotting this when it came out, but I didn’t pay it much attention since I use HadSST. All this is more depressing than I thought. The 0.12 C adjustment to the buoy data makes no sense – I get that there is a problem with ship data, but why try to solve this by warming the buoy data? And why neglect the ATSR data, which is probably the best SST data there is? And why neglect ARGO, which measures temps at the same depth as ship intakes?

    If the issue is the hiatus, just look at the best observations you have since 1990, which would mean forgetting ship data and just looking at ATSR, buoys and ARGO.

    I haven’t read the companion paper on uncertainty, but with all the stuff they are doing to the data, uncertainty should be quite large.

    • Have just done the same. Concur with you.
      Looks like this was a planned multipaper setup. The fingerprint ‘evidence trail’ is clear. Legal types should be shouting “willful misconduct”.

      • Looks like this was a planned multipaper setup. The fingerprint ‘evidence trail’ is clear. Legal types should be shouting “willful misconduct”.

        And people wonder why Lewandowsky wanted to look at conspiracy ideation by “skeptics.”

    • Dr. Curry:

      Don’t get your hopes up.

      “In the next version of ERSST, we will analyze Argo floats’ regional impacts, and if merited, we will include these data in future version. In order to include the Argo float SSTs, we will need to carefully calibrate the Argo float SSTs against ship and surface drifter/moored buoy SSTs, just as we did for the buoy-ship SST inter-calibrations in the current version of ERSST that is used in this paper.”

      • Oh I see, they will use the ships to ‘fix’ the ARGO data

      • and if merited

        In other words, if they can find a way to torture the data to get the answers they want they will include it. And if they can’t, they won’t.

  39. Can Zeke or Robert Way or Mosher or anyone here explain this in a way that doesn’t sound absurd? Why are they adjusting the buoy data? So far all I have is some statements by pro-AGW scientists that they’re happy The Pause is gone (and statements by anti-AGW types that this all makes no sense). I’d like to hear a more coherent presentation by the pro folks.

    • Steven Mosher

      I’ve just read through the paper.

      A while back we were looking at doing a new ocean product, and this buoy adjustment (in other records) was something that caught our eye.

      Technically it doesn’t matter whether you adjust the recent past or the deep past.

      If you do an adjustment you will either hold the recent past constant and adjust the deep past, or hold the deep past constant and adjust the recent past.

      There is a good argument for adjusting the deep past, especially if the recent past has, for example, a better measurement system. However, when people do this on the land record they are accused of rewriting the past.

      Think of it this way.

      Suppose you want to adjust for inflation – say, the cost of your house.

      You can put the past price of your house in current dollars or the current price of your house in past dollars.

      The difference is not material because it is 100% reversible. If you describe your approach, then people can understand what you did.

      Obviously if you are selling it you want to work in current dollars.
      But when I talk to my 80-year-old mom about housing prices I put it in dollars that she remembers from when she bought a house.

      Finally, since we are talking about anomalies and focusing on trends, it doesn’t matter either way.

      The only cogent objection I see is the one over the use of MAT.
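The reversibility claim in the inflation analogy can be illustrated in a few lines. This is a toy sketch with made-up numbers (not real price or temperature data): with a known conversion factor, expressing a series in “current dollars” or “past dollars” changes the numbers but not the relative change between any two entries, and the conversion round-trips exactly.

```python
# Hypothetical house-price series in "past dollars" and an assumed
# past-dollar -> current-dollar conversion factor.
prices_past_dollars = [100.0, 110.0, 121.0]
inflation_factor = 2.5

to_current = [p * inflation_factor for p in prices_past_dollars]
back_to_past = [p / inflation_factor for p in to_current]

# The conversion is reversible: the round trip recovers the original series.
assert all(abs(a - b) < 1e-9 for a, b in zip(prices_past_dollars, back_to_past))

# Relative change (the "trend") is identical in either unit system.
growth_past = prices_past_dollars[-1] / prices_past_dollars[0]
growth_current = to_current[-1] / to_current[0]
print(growth_past, growth_current)  # both 1.21
```

Which direction you convert is a presentation choice, which is Mosher's point; the disagreement in the thread is over whether the ship-buoy offset itself is the right correction, not over the direction it is applied.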

      • Don Monfort

        “But when I talk to my 80 year mom about housing prices I put it dollars that she remembers when she bought a house.”

        Your mom has probably noticed the changes in housing prices.

        Have you read Ross’s comments on the paper?

        With all the warmists’ angst about the MOD-denier-BIG OIL promoted pause, why do you think it took the Grubers so long to make these adjustments? Why hasn’t someone else noticed the need for these adjustments?

      • It may make no difference in the result, but it is sure likely to make a difference in how the rest of us deal with it. a) Changing the present means that you’re going to need to continue changing the future as well. b) Why are you making changes? Isn’t it because you think that one or the other measuring method is biased? Well, which one do you think is biased, and why? Surely not the present method, no?
        I guess it would help to hear a coherent explanation of why this is being done. What do they think needs fixing, and what makes them think so?
        Of course I could read the paper, but it might not help.

      • Mike Flynn

        Steven Mosher,

        You seem to love an analogy – good, bad, or indifferent.

        Try this one for focussing on a trend. Obscure the windscreen of your car. Pick a straight piece of road.

        Now drive your car by looking in the rear view mirror. You’ll just be following the past, which surely can be used to predict the future, can’t it?

        Silly. Trends are useless. Try following one, and all it does is bring you closer to any inflection point. Is it not so?

      • Steven

        I understand your point about inflation, but that just brings up the dozens of problems with the CPI or any other method of constructing a valid inflation indicator.

        Regardless of the field, adjustments are adjustments, with inherently flawed results, in spite of the best intentions by the best minds in the world.

        The deeper one digs into this whole attempt at reconstructing reality, the closer to bizarro world it gets. But then the next campaign is starting to heat up and we have all those donors to call, all those fund raisers to attend and all those yard signs to make. Leave no stone unturned and all that stuff.

      • Steven Mosher:

        There are two issues.

        One is using interpolation to provide missing data values. That has its own set of issues.

        A second issue is using nearby values to adjust another existing value which is deemed to be “wrong”.

        On the second issue – whether you use past dollars or current dollars – if you tell your mom that the sale price for the house she sold in 1950 for x dollars has been adjusted (in past dollars) to x times .9 in order to homogenize it with the sale prices of nearby homes (at the same time), I bet she would be pissed.

        She might even say – NO my house sold for x in 1950 and it doesn’t matter what the other houses sold for – you should leave the sale price at x.

      • David Springer

        Mosher’s analogy to inflation-adjusted dollars is (unsurprisingly) nonsense. The problem is that old and new dollars have no error bars; they each precisely represent 100 pennies. Old and new temperature data both have error bars, the latter usually much smaller. It is as if the modern “dollar” represented between 99 and 101 pennies while the old “dollar” represented between 75 and 125 pennies.

        Mosher makes an astonishing number of mistakes. One after the other in endless succession. Find a different hobby, Steven.

  40. Pause or no, SST monkey business or no,
    the slightly longer term perspective remains,
    all the global temperature trends are at rates less than the low end model projections:

  41. I just tweeted this, no response yet:

    If trying to improve the analysis of ocean SST during the recent hiatus, why not look at ATSR, ARGO and buoy data since 1990 (forget ship data)?

    The key issue IMO is to sort out these data sets for the recent past (say since 1990) and moving forward.

    Lets use the modern, best datasets, and forget about tying ourselves to the long term continuity of the old data sets!

    Sure, somebody can continue with the old data sets for the sake of continuity, but ignoring or mutilating our best data sets in the interests of long term continuity of sub par data sets makes little sense at this point

    • Steven Mosher

      “Lets use the modern, best datasets, and forget about tying ourselves to the long term continuity of the old data sets!

      Sure, somebody can continue with the old data sets for the sake of continuity, but ignoring or mutilating our best data sets in the interests of long term continuity of sub par data sets makes little sense at this point”

      Now I am totally confused.

      In the land-based record we choose to adjust the past and hold the current observations constant. So, for example, we have CRN, which is a gold standard, and when we adjust the past observations holding the gold constant we are accused of rewriting history.

      In this record they choose to adjust the current and hold the past constant.

      And now they get criticized for the opposite reason.

      The idea that one should ignore the issue of continuity strikes me as close to malpractice.

      If ARGO is replaced with a better system, the same argument will be made: just focus on the most recent best data.

      You have the information you have, collected in the way it was collected.
      The question is how we can remove uncertainty, how we can gain insight.

      If you want to do the adjustment differently, then go do that. And then folks will accuse you of rewriting history.

      When the goal is to maintain uncertainty, no good-faith attempt to reduce uncertainty will go unpunished.

      We should probably stop trying to create a long-term Arctic ice series, because establishing continuity between satellite records and map-based records requires that some records be “mutilated”. And that submarine data that Gore wanted – throw it away too; it could never be brought into alignment with satellite measures.

      Sunspots? Throw out Leif’s work – why try to make a long record?

      The main benefit of NOAA’s approach is you get to see the analyst choices.
      You get to see what things are in doubt and you get to see how the answer changes as a consequence. You get visibility into the uncertainty that is conditioned by analyst choices.

      Every day in business I have to adjust data. Even in a system where every transaction and mouse click is recorded I am constantly adjusting the past and present to get a consistent record. It’s SOP.

      • there are two separate issues:

        • long term record
        • best available data for recent past and going forward

        Trying to accomplish both in the same analysis may not be useful or optimal for many variables

      • Mike Flynn

        Steven Mosher,

        You wrote –

        “Every day in business I have to adjust data. Even in a system where every transaction and mouse click is recorded I am constantly adjusting the past and present to get a consistent record. It’s SOP.”

        Only in America. Companies specialising in adjusting the past – not to mention the present. Combine this with Government bodies adjusting the past – no, not the Bureau of Adjusting History to Suit Present Purposes – try NOAA, and you might wonder why an entire country seems to be preoccupied with the past.

        Do you not realise that you cannot change the past? It is gone, history, finished, kaput!

        Maybe the record reflects Nature. Inconsistent, chaotic, often totally unpredictable. What do you think?

      • Mosh “when we adjust the past observations … we are accused of re writing history”.

        Well, duh. It is re-writing history.

        As Prof Curry points out, NOAA’s adjustments are to perfectly good recent history for the purpose of generating an alt-history. That’s not SOP anywhere but climate science.

      • “Now I am totally confused.
        In the land-based record we choose to adjust the past and hold the current observations constant. So, for example, we have CRN, which is a gold standard, and when we adjust the past observations holding the gold constant we are accused of rewriting history.”
        I reject the charge. You have not been accused by me. I am asking, and I don’t know who else is asking: why would it make sense to adjust the buoy data to fit the ship data?

      • Steven,

        I usually pay attention to what you post – except when you get involved in spitting matches – but more and more you seem to be speaking in tongues.

        And I don’t need to read harder. You need to write more clearly.

      • Mosher,

        I missed your last paragraph.

        In certain fields what you describe can be called fraud. Adjusting one’s interpretation of data is one thing. Adjusting the data is another.

      • Mosh,
        There is something you may be missing in why the people you argue with never seem to understand your point of view concerning dataset constructions and data adjustment. I don’t think they have really conveyed this to you either.

        The majority of educated climate skeptics have witnessed countless revisions over time to the surface temperature sets by the mainstream climate science groups that maintain them. Old sets are revised or retired, and new versions take their place. Consistently, and likely without fail, every revision of the last 15 years or more ALWAYS INCREASES THE WARMING TREND. This just reeks of a problem, whether conscious or unconscious, and throws the entire concept of these new and improved data adjustments under the bus.

        It does not matter how honestly BEST has worked and presented its results. It does not matter that if you don’t adjust you actually reduce the trend, which may be absolutely true in your and BEST’s work.

        All that matters is that something is wrong if revisions by the majors ALWAYS increase the trend. That in itself does not pass the smell test. Tracing the smell to the source may not yet have been done. It may be very tricky to find where subconscious bias distorted process at a deep level.
        But it clearly smells.

      • This is funny:

        ==> “There is something you may be missing in why the people you argue with never seem to understand your point of view … Consistently, and likely without fail, every revision of the last 15 years or more ALWAYS INCREASES THE WARMING TREND.”

      • Steven Mosher

        Jeez

        “The majority of educated climate skeptics have witnessed countless revisions over time to the surface temperature sets by the mainstream climate science groups that maintain them. Old sets are revised or retired, and new versions take their place. Consistently, and likely without fail, every revision of the last 15 years or more ALWAYS INCREASES THE WARMING TREND. This just reeks of a problem, whether conscious or unconscious, and throws the entire concept of these new and improved data adjustments under the bus.”

        Well, that’s not correct.

        Take the Karl paper we are discussing.

        The change to the long-term trend – from 1880 to present – is the OPPOSITE of what you claim.

        See the table in the SI. YUP, the new version has a slightly lower trend.

      • Steven Mosher

Let me see if I can explain it to you in a simple way.

Suppose you have a system that measures a variable from 1900 to 2000.
It consistently records 52.
From 1990 through 2015 you start collecting with a different system, and the value for that same thing in the same location is 54.

So for 100 years you have data that says 52,
and then you have overlapping data that says 54.

One measurement system says 52; the other says 54.

Got that? Because they are done side by side, you know that one is offset from the other by 2.

Suppose now that you want to build a long series. What do you do?

A) use raw data
B) adjust 54 to 52
C) adjust 52 to 54

If you use A you will have a bias in your record: you will introduce a bias in the trend.

If you use the B or C approach, the trend will not be biased.

It doesn’t get any easier than this. We know from COLLOCATED SHIPS AND BUOYS that there is an offset. The question is option B or option C.

So how do you fix that bias? You can change 52 or change 54.
Which is “valid” when both yield the same answer in trends?

Further, you are NOT REWRITING the history. The historical data is still there. Go use it. You are not, as Judith argues, “mangling” the data.

You have two data sets. They are STILL THERE. You use those two to create a third. If you want to create a long record you can only choose B or C. Thankfully, the final answer doesn’t change if you choose B or C.

There are practical reasons why you might choose B or C, but those are not scientifically material.
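The 52/54 example above is easy to check numerically. The sketch below is my own toy code (not anything from NOAA or BEST): it splices the two overlapping series three ways and shows that options B and C give identical, unbiased trends while the raw splice does not.

```python
# Toy version of the 52/54 example: two overlapping measurement systems
# with a constant 2-unit offset. Splicing with either adjustment
# (option B or option C) yields the same trend; raw splicing does not.

def ls_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

old = {yr: 52.0 for yr in range(1900, 2001)}   # old system, 1900-2000
new = {yr: 54.0 for yr in range(1990, 2016)}   # new system, 1990-2015

# Offset estimated from the side-by-side overlap years.
overlap = [yr for yr in old if yr in new]
offset = sum(new[yr] - old[yr] for yr in overlap) / len(overlap)

years = list(range(1900, 2016))
raw   = [old[yr] if yr < 1990 else new[yr] for yr in years]           # option A
opt_b = [old[yr] if yr < 1990 else new[yr] - offset for yr in years]  # adjust 54 -> 52
opt_c = [old[yr] + offset if yr < 1990 else new[yr] for yr in years]  # adjust 52 -> 54

print(ls_slope(years, raw))    # spurious positive trend from the step
print(ls_slope(years, opt_b))  # 0.0
print(ls_slope(years, opt_c))  # 0.0
```

Both adjusted splices are flat, as the underlying quantity really is; only the unadjusted splice manufactures a trend out of the instrument change.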

      • Don Monfort

        “Take the karl paper we are discussing.

        The change to the long term trend.. from 1880 to present is the OPPOSITE of what you claim.

        See the table in the SI. YUP. the new version has a slightly lower trend.”

Yeah, it was necessary to erase the pause, and the formula for adjustments they cooked up to do that happened to have an undesired effect on the long-term trend. Sometimes you’ve got to make a small sacrifice for the big headline.

        The real question is are the Karl adjustments justified, or is it a case of politically motivated headline hunting? What say you, Mosher?

      • The real question is are the Karl adjustments justified, or is it a case of politically motivated headline hunting?

        Don, you mean those evil climate scientists are committing fraud?

      • Don Monfort

        I will be generous and say that it appears they have committed deliberate premeditated blatant self-inflicted confirmation bias, yoey. The climate science needed that headline. Paree will still be a flopee.

      • “those evil climate scientists are committing fraud?”

        Joseph,

        Do you know if they are? Do tell what you know.

        Andrew

      • Joseph | June 5, 2015 at 2:27 pm |
        The real question is are the Karl adjustments justified, or is it a case of politically motivated headline hunting?

        Don, you mean those evil climate scientists are committing fraud?

        “Don, you mean those evil climate scientists are committing fraud, again?”

        Fixed it for you.

      • jeez | June 5, 2015 at 4:35 am |

        All that matters is that something is wrong if revisions by the majors ALWAYS increase the trend. That in itself does not pass the smell test. Tracing the smell to the source may not yet have been done. It may be very tricky to find where subconscious bias distorted process at a deep level.

        But it clearly smells.

I archived all variants of the USHCN data set last year.

This year, on the anniversary of the download, I will download the sets again and plot the difference. The last time I plotted the difference I got a couple of hundredths of a degree Celsius change and a pumpkin-toothed alteration pattern. The difference is about 1.57°C per century per century. By 2100 the data adjusters will have lowered the early 1900s about 3.14°C relative to 2100, and there will be a 3.14°C difference between the early 1900s and 2100 even if a time traveler would have measured the same temperature.

        Now – it is without question that a 3.14°C alteration for no reason is wrong. So the question is how to stop them.

        The easiest solution is for congress to find the budget line item for the department that alters the temperatures and delete it. This will get the adjusters reassigned to other departments or fired which is really the only solution.

      • David Springer

        Steven Mosher | June 4, 2015 at 8:12 pm | Reply

“Now I am totally confused.”

        Fixed that for ya!

    • Don Monfort

      “The main benefit of NOAA’s approach is you get to see the analyst choices.
      You get to see what things are in doubt and you get to see how the answer changes as a consequence. you get visibility into the uncertainty that is conditioned by analyst choices.”

      Yeah, that’s why they made those choices. So we would get to see them. At least the four hundred people that read the paper will see them. The rest of the world will see the headlines about the pause being busted in the NYT, the Guardian and all the other hack climate consensus mainstream BS publications. But it won’t do them any good. Paree will still be…busted.

      • Steven Mosher

the pause would be busted REGARDLESS of how you do the buoy adjustment.

      • Don Monfort

        Really? You might try to esplain that. But I ain’t expecting it.

      • Steven Mosher

        Simple Don.
        The trend don’t care.

      • Mike Flynn

        Simple Steven,

        The trend are useless – well, maybe, less than useless.

      • Don Monfort

Suppose that particular odd buoy adjustment does not become popular, Steven? The IPCC apparently didn’t think to trot it out. Ross ain’t impressed. Nobody else proposed using ships that pass in the night to adjust buoys. Did they? Why this, why now? It’s fishy, Steven. No, I would have to say it’s bullcrap.

      • Steven Mosher

        “The trend are useless – well, maybe, less than useless.”

The hiatus is a lack of trend.
When the question is “Is it real?”
then trends are useful.

Put another way: if the trend in observations doesn’t match the trend in models, then it’s useful.

        Thank you for being a useful idi8t

      • Mike Flynn

        Steven Mosher,

        You still believe you can predict the future better than I, otherwise you wouldn’t be wasting your time, I suppose.

        You can’t. Your manufactured trends, your models, are all completely useless, as far as I can determine. I can do as well, or probably better!

        I might be wrong, of course. You can no doubt point to something of proven benefit to humanity at large, which outweighs, (or even equals), the large amounts of time, effort and money wasted on stupid models and history rearrangement.

        And your factual response?

        I thought as much.

      • Don Monfort

        I was hoping you would provide a useful answer, Steven. Is this paper a politically motivated publicity stunt, in the genre of Cook and Lewandowsky, or is it science? Please read your friend Ross’s analysis and comment with the laudable seriousness and intelligence that you have exhibited on special occasions.

Do you know any of the pilots flying missions over Iraq and Syria, Steven? Do you know that they are repeatedly being sent into harm’s way knowing that on about three-quarters of their sorties they will not be allowed to drop their ordnance on the bad guys? They won’t complain about it publicly. Bad for the career. It’s ideology and politics. It’s all ideology and politics with this Clown-in-Chief. The people working at NOAA have to suck up to the boss, just like everybody else. And that they share his ideology makes it all that much easier.

      • Steven Mosher

        “You still believe you can predict the future better than I, otherwise you wouldn’t be wasting your time, I suppose.”

1. You don’t have any predictions of the future that I know of, so I would suspend judgement.
2. The only predictions I have are predictions of SAT. Since you haven’t seen them, I don’t know how you could even argue that they are different from yours.

        You can’t. Your manufactured trends, your models, are all completely useless, as far as I can determine. I can do as well, or probably better!

Let’s test that.
I have some data. It’s never been used before in any temperature series.
I will give you the latitude and altitude.
Predict what the temperature series will look like from 1900 to 2015.
Go!

        ###################

        I might be wrong, of course. You can no doubt point to something of proven benefit to humanity at large, which outweighs, (or even equals), the large amounts of time, effort and money wasted on stupid models and history rearrangement.

        And your factual response?

        ##########################
1. You assert total uselessness, but assertion is not demonstration.
2. Proven benefit is easy to demonstrate. People continue to buy and use the products you say are useless. I believe in markets; you must not. In other words, if you can’t find a use for it, then nothing follows from that. Historical climate is of course used in forestry, in legal cases, in insurance, in warning systems, in agriculture. In some cases these products cost money. In other cases the government and others pay to make them free for the public.

      • Mike Flynn

        Steven Mosher,

        I assumed you thought you could predict the future better than I. Apparently not.

        You decide to test our relative abilities to predict the future by predicting the past. Stupid. However, let us set a different test. Let us each make a prediction of temperature, say, for a location equipped with an agreed temperature measuring device, for a particular date in 2016. Let us ensure that the temperature is monitored by an independent source, say a trained meteorological observer. You make your prediction, I will either agree or disagree. If I agree, then your prediction is no better or worse than mine.

        On the other hand, if I disagree, for whatever reason, facts will determine who, if anybody is correct. You have the choice of setting the parameter to be predicted, if temperature doesn’t suit you, the date, the error range etc.

        I have a presentiment that you will not be willing to wager even a minuscule brass razoo on being able to predict the future better than I, given the same initial conditions. Over to you.

        As I thought, you are unable to point to a proven benefit of climatology which outweighs the cost. Instead, you seem to be claiming that because something is purchased, it is useful, in the sense of being fit for purpose. The US Government buys all sorts of useless things, ranging from incredibly poor advice on winning wars, to weapons systems of such poor standard that they are unusable, to economic analysis and advice which has demonstrated its inadequacy.

        People pay large amounts of money to psychics, astrologers, dispensers of naturopathic remedies, purveyors of various nostrums and drugs. Governments pour money into companies such as Solyndra, and more recently, Tesla. Useful? If you say so.

        Your point about historical climate being used in various areas is just silly. You can’t even define climate in that sense, let alone use it. The past is past. I would agree that a record showing that a particular flood level was reached may well be used in a dispute over an insurance claim. This is not climatology.

The money spent on climatology to date has been almost totally wasted. It has been mildly useful in demonstrating its inability to provide any specific benefits to humanity at large.

    • Prof Curry,

      This is a common problem in other sciences, such as economics. People rely on the high quality data of current economic stats, and assume that historical estimates (paleo-economic data?) are as accurate – when in fact they’re little more than rough estimates.

      Unfortunately we have little reliable economic data before WWII, and almost nothing before WWI. For example, we have all seen those wonderfully precise estimates of unemployment during the Great Depression. Guesses. The first survey was in March 1940, other than small surveys done by mailing postcards in 1937 and 1939.
      http://www.bls.gov/opub/mlr/1984/06/art2full.pdf

      It gets worse the further one goes back. Often famous “lessons from history” are little more than legends dressed up with guesses about the numbers.

      • Former CEA chair Christina Romer made her first academic splash by showing that the apparent lower volatility of U.S. economic output prior to the establishment of the Federal Reserve was largely an artifact of data weaknesses in the earlier period. I’m not sure what later workers in the field concluded about all that, but it was a remarkable case of findings over philosophy given her pro-demand-management views.

      • Steve,

        Are you referring to her 1999 paper “Changes in Business Cycles: Evidence and Explanations”? I suggest a cautious interpretation.

(1) She constructs bricks from straw, doing esoteric statistical work on scarce and unreliable data, especially before WWI. It makes paleoclimate data look solid. Academics love this stuff, but no sensible person would wager a dime on the accuracy of the results.

        (2) Her conclusions are narrow: “recessions only slightly less severe” and “less frequent and more uniform.”

(3) Her analysis does not include our two great post-Civil War deep, long depressions: the Long Depression (1873-1879) and the Great Depression. The first is before her start date; she treats the second as an exception. It’s like asking, “other than that, how was the show, Mrs. Lincoln?”

        Avoiding depressions is usually considered the primary goal of macroeconomics. That focus was lost during the long post-WWII expansion. The close call of 2008 (quite similar to 1930-1932) refocused people’s minds, as the sight of the gallows usually does.

      • She pulled a kind of Vesalius v. Galen argument (all our human anatomy knowledge is based on studying gibbons!) by pointing out that the old “GDP” data were sectorally biased, covering only a small number of volatile manufacturing sectors and extrapolating from there. That seemed a sound point to me. The Great Depression would have strengthened her argument, while the Long Depression would have weakened it, so leaving both out in order to look at higher-frequency cycles isn’t a terrible idea.

  42. Don Monfort

    How many of the 7 billion folks not losing sleep over AGW change their minds over this latest prefabricated pause denial BS? I would guess fewer than 1,000. The Paree climate alarmist partee will still be a disasteree.

    The pause is killing the cause. You can tell by how desperate they are to debunk it.

You already have it, above. The old canard about “not peer reviewed”? Behold Marcott (paleo), PMEL and Fabricius (2x ocean acidification), Cazenov (SLR)… all evidence of peer-reviewed academic misconduct.
    There are other essays in Blowing Smoke, which you appear good at.

  44. Green Sand

“Has NOAA ‘busted’?”

Yes Ma’am, they went for a five-card trick and ‘bust’ before three!

Got to love it; future social historians, 50 years or so on, are really going to love today!

    Probably class it with ‘Tom and Jerry’

  45. Yes Ristvan, peer reviewed literature is a “canard”. JP

    • Atarsinic, whoever you really are hiding behind that avatar nom de plume, yup. My most recent book proved rather vividly that much of the recent climate peer reviewed literature is a ‘canard’. Read it, then prove those examples otherwise. Marcott’s hockey stick abomination would be a good starter. Previously guest posted here. Or, you could try Thomas on extinctions, Fabricius on corals, or O’Leary on sudden SLR.
      Doubt you would even try on any, since would ‘lose’ on all.

  46. Today Steve Milloy correctly identified Harvard University’s science.

    http://junkscience.com/2015/06/04/harvard-syracuse-researchers-caught-lying-to-boost-obama-climate-rules/

    I deeply regret that blunt statements may be necessary to restore sanity to society and reverse the corruption of science.

  47. I don’t think you can infer much from 15-year trends anyway. Look at how much a running 15-year temperature varies compared to the much more stable 30-year temperature. The 30-year temperature has had a trend near 0.15 C per decade since about 1980, and yes, including the whole “hiatus”. The 15-year one is all over the place including 0.3 C per decade just 15 years ago (now forgotten). The 15 and 30 year trends in C/decade are plotted here.
    http://www.woodfortrees.org/plot/gistemp/from:1900/mean:120/mean:240/derivative/scale:120/plot/gistemp/from:1900/mean:60/mean:120/derivative/scale:120
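The point about short windows is easy to illustrate with a toy series. The numbers below are entirely synthetic (a steady 0.15 C/decade warming plus an assumed 60-year oscillation, not GISTEMP data), but they show why rolling 15-year trends swing noticeably more than rolling 30-year trends drawn from the very same record.

```python
import math

def ls_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def rolling_trends(years, vals, window):
    """Least-squares slope over every consecutive `window`-year span."""
    return [ls_slope(years[i:i + window], vals[i:i + window])
            for i in range(len(years) - window + 1)]

def spread(trends):
    return max(trends) - min(trends)

# Synthetic record: 0.015 C/yr underlying warming plus a 60-year oscillation.
years = list(range(1900, 2016))
temps = [0.015 * (y - 1900) + 0.1 * math.sin(2 * math.pi * (y - 1900) / 60)
         for y in years]

t15 = rolling_trends(years, temps, 15)
t30 = rolling_trends(years, temps, 30)

print(spread(t15) > spread(t30))  # True: short windows wander more
```

The longer window averages over more of the oscillation, so its trends cluster near the underlying 0.015 C/yr while the 15-year trends range more widely around it.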

    • And these are the 15 and 30 year temperatures themselves. Somewhat upward at the end there. Tough to spot a “hiatus”.
      http://www.woodfortrees.org/plot/gistemp/from:1900/mean:120/mean:240/plot/gistemp/from:1900/mean:60/mean:120

      • Curious George

        I hope you will apply the same logic when it comes to warming trends.

      • The 30-year temperature has a robust warming trend.

      • Jim D, using your 30 year trend, natural warming peaked at 0.15C per decade and natural plus manmade warming peaked at 0.20C per decade. Low end is -0.15C per decade. What does that say about manmade warming and the potential for a 0.0 hiatus? To me, a pretty good chance.

        The other interpretation is that the early data is garbage.

    • Curious George

      +10

    • Mike Flynn

      Jim D,

      Once again, use the longest trend there is.

      Earth’s creation to now. Result? Cooling.

      If you don’t like it, pick a shorter trend that supports your point of view. Use Steve Mosher’s HistoryChanger ™, until you get what you want. So, how much warming do you really, really, want?

      • Steven Mosher

The net effect of adjustments is to COOL THE RECORD.

Get that? Raw data has a WARMER TREND.

Here is a clue: any time you have to adjust a record you are changing yesterday. You are always adjusting the past, because the future ain’t here. If we did not adjust the data, it would show a steeper rise in temperature.

      • Don Monfort

        What’s the net effect of this “pause busting” adjustment? Add it to the dozens of excuses for whatever is alleged to be offsetting the CO2 warming during the pause, and what we got? I will just take a guess: 1.35652 C. That’s probably too conservative. Now you, Mosh.

      • Steven Mosher,

        It doesn’t matter what sort of graph you draw. Upwards, downwards, warming, cooling, red pencil or blue crayon. It has no effect whatsoever, on either the past or the future.

        You cannot change yesterday. It has happened. It is past, gone, will not come again. Pretending that you can predict the future by “correcting” history achieves nothing. The present is no cooler, no warmer, no wetter or drier, than it is. Why bother pretending that you can change history?

        I’m not sure what you are attempting to do. I’m happy enough to base my life on the assumption that no one can predict the future better than I, in any useful way.

        So far, so good. How about you?

      • Steven Mosher

        “What’s the net effect of this “pause busting” adjustment? Add it to the dozens of excuses for whatever is alleged to be offsetting the CO2 warming during the pause, and what we got? I will just take a guess: 1.35652 C. That’s probably too conservative. Now you, Mosh.”

        #########################

        I see nothing terribly important in any pause discussion. We are basically talking about minor tweaking of data, tweaking of models, polishing the bowling ball.

In 1896 we knew everything we needed to know to understand that a policy on CO2 emissions was going to be needed at some point.

The issue is comparison of observed temperature anomaly trends with climate model predictions. Even with the new NOAA analysis, climate models are still running substantially too hot.

      • Steven Mosher | June 4, 2015 at 10:10 pm |
        The net effect of adjustments is to COOL THE RECORD.

        get that? raw data has a WARMER TREND.

        This makes a great story. It isn’t true.

It isn’t hard to find NOAA plots of US temperature in which the past was significantly warmer – at least until the recent revisions.

      • +100 PA.

That’s the thing Mosh et al. just can’t appreciate. Constant nonsense data fiddling is just that – fiddling. It’s the creation of a functional product, an alt-history.

I am getting the feeling this nonsense Karl paper is climate science jumping the shark. They’ve written a novel and called it a dataset.

        Only in climate science.

      • curryja | June 5, 2015 at 10:24 am |

        “The issue is comparison of observed temperature anomaly trends with climate model predictions. Even with new NOAA analysis, climate models still running substantially too hot”

        I suppose it’s worth noting that this depends on how careful one is when comparing observations with models. If you’re really un-careful you’ll exacerbate the discrepancy – this is what most people do. Comparisons are much more nuanced than perhaps is often let on.

      • Robert Way, I note Ed Hawkins has said precisely what Prof Curry has said http://www.climate-lab-book.ac.uk/2015/global-temperature-comparisons/. It doesn’t seem to me there’s room for nuance, so referring to nuance just seems like yet more climate scientivist weasel-classing wiggle-room.

    • The 30-year temperature has had a trend near 0.15 C per decade since about 1980,

      And the RCP4.5 is modeled to be 0.18C per decade, so the warming is less than the low end of the models – it should be because the forcing is at the low end of scenarios.

      • Any reason to stop at 2006 instead of considering the most recent 30 years?

        Sure looks like some pausing going on and looks like rates less than low end model predictions.

        You will find that 30 year trends starting in 1975 are the greatest which stands to reason since the cooling period of 1945 through 1975 ended that year.

        But those trends have been decelerating ever since.

      • Because the system, despite your claim of weakening forcing, is quite capable of producing trends in excess of .2C and could easily do it again soon. If the PDO is in a positive phase, it could produce trends above .3C.

      • hang on TurbE aren’t we at RCP8.5?

      • Some of this is b*llshit.

        1. There is a strong natural sinusoidal oscillation that contaminates the record. Picking trend intervals of less than 65 years isn’t useful.

        2. RCP8.5 was 9.71 in 2013 so we were ahead of RCP8.5 in 2013 and are now slightly behind it. From RCP 8.5:
        2014 405.27595
        2015 408.90146

        Now, it is pretty obvious that this year the less than 2 PPM rise is not the 3.63 PPM rise expected by the IPCC with the fossil fuel emissions being almost the RCP 8.5 expected amount. So something is going off the rails.

        3. Fossil emissions doubled (increased 100%) since 1980. Only 21% of the post-1980 emissions increase is going into the atmosphere.

This is bad news for global warming. If only a tiny fraction of any emissions increase stays in the atmosphere, we will run out of fossil fuel long before we significantly increase the CO2 level in PPM, and 480 PPM seems to be the upper limit.

        Between that and the 0.2 W for 22 PPM forcing level the global warmers have a problem. Past warming has been 0.89W/m2 or about 0.24°C. The future worst case CO2 level of 480PPM means only 0.64 W/m2 more down dwelling IR or about 0.17°C of further warming.

        So it is going to be interesting. For the rest of the century it really depends on nature whether it gets warmer or cooler.

      • JCH | June 6, 2015 at 12:58 am |
        The deceleration has pretty much stopped.

        The warmunists have a few challenges ahead and the view that temperatures have only paused and will soar off into oblivion is unrealistic and misinformed.


      • JCH,
        Because the system, despite your claim of weakening forcing, is quite capable of producing trends in excess of .2C and could easily do it again soon.
Perhaps, but your links don’t support that. The maximum 30-year trend you can find is 0.19C/decade, not in excess.

        Here is what the thirty year trends look like:

        Close, but no cigar and decelerating.

        This wouldn’t be such a big deal, I guess, if the IPCC hadn’t predicted in the AR4 that we’d see 0.2C per decade for all scenarios.

        Oops.

      • hidethedecline:
        hang on TurbE aren’t we at RCP8.5?

Emissions were at the high end, but GHG forcing is not.

If you go to the NOAA GHG index, get the values, and shift them to the RCP scenarios’ 2000 values (because they use different baselines), the chart below is what you get. The observed (purple) is lower than all the RCPs:
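The re-baselining step described here (shifting series reported against different references onto a common reference year before comparing them) can be sketched generically. The numbers below are made up for illustration; they are not actual NOAA AGGI or RCP values.

```python
# Generic re-baselining: two series reported relative to different
# references can be compared by shifting both to a common reference year.
# All numbers are hypothetical, for illustration only.

def rebaseline(series, ref_year):
    """Shift a {year: value} series so the value at ref_year is zero."""
    ref = series[ref_year]
    return {yr: val - ref for yr, val in series.items()}

obs = {2000: 2.10, 2005: 2.30, 2010: 2.52}   # hypothetical observed forcing, baseline A
rcp = {2000: 1.80, 2005: 2.05, 2010: 2.35}   # hypothetical scenario values, baseline B

obs0 = rebaseline(obs, 2000)
rcp0 = rebaseline(rcp, 2000)

# After shifting both to a 2000 reference, growth since 2000 is comparable:
print(round(obs0[2010], 2))  # 0.42
print(round(rcp0[2010], 2))  # 0.55
```

Without this shift, the two series differ by a constant that reflects only their bookkeeping conventions, not any physical disagreement.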

  48. So evidence is mounting that the pause never existed and was just noise in the data.

    This can also be seen in NASA data

And HadSST data is demonstrating why it’s a foolish idea to expect short-term trends to last in any meaningful way:
    http://www.woodfortrees.org/plot/hadsst3gl

    • Nebakhet,

      Perhaps you could explain, specifically, why it makes sense to adjust the buoy temperatures up to match ship observations, when the buoys are more modern, and designed to measure temps, and ship measurements are a hodgepodge of different methods, and are in no way designed for the purpose of measuring global temps?

      Rather than beating on skeptics, who are simply doing their part to vet the “science” that is highly politicized, and should have already been vetted by peer review – how about you simply explain why these particular adjustments make sense and are appropriate?

    • Your global temperature is about 0.24°C Computer Generated Anthropomorphic Global Warming (CGAGW). Eliminating CO2 does not stop CGAGW.

      Further – if you remove CGAGW it does not appear possible to hit the 2°C target in 2100 so we might as well “burn baby burn” fossil fuels since we will run out of fossil fuel before 2100 and can’t cause dangerous warming. And besides – since CO2 didn’t cause CGAGW, eliminating CO2 emissions won’t stop CGAGW.

This again from the simple mind of a retired high school chem/physics teacher.
    First thing I saw was the name Tom Karl. Oh no, the same Tom Karl that made the homogenized adjustments, that paid much attention to time of day, change in instrumentation, and more, and at the same time paid almost no attention to the continual corruption by an ever-increasing heat island effect, but rather found a way to make it a non-factor.
    I looked to see if there was any kind of peer review of the methodology put forth by Tom Karl. I could not find it. It seemed to just materialize. However, I certainly am not aware of all proceedings.

Folks, do you ever check old records which get changed, then changed, and then changed again? Old temps lowered, new raised.
    NOAA seems to be at the forefront of temp-change artistry.

    Recently Roy Spencer had a little piece titled (essentially) “It warms even when it doesn’t warm.” In it he showed the NOAA change of record by a factor of three in the rate of U.S. corn belt warming. That change of record was from 2014 to 2015. I just do not understand how this continual change, and then change to the change to the change, can continue.
    What am I missing? At the same time, the motive behind the opaqueness of some procedures is totally transparent. Note here the usual names: Michael Mann, Trenberth et al.
    I permanently lost respect for the likes of Trenberth and Dessler over their treatment of Roy Spencer regarding a withdrawn paper. Emphasizing here the treatment of Spencer, not of the paper.

Would it not be a blessing if the scientific community could determine that there was minimal danger in AGW? In fact, added CO2 might very well be one of the greatest blessings of the century. Imagine, an increase in one of the sustainers of life on this planet.

    Then again, how many thousands would not be getting grants. How many thousands might be out of jobs.

    On both sides of the issue, the focus should be the citizens of the world and not
    the climate community.

At one time I thought scientists were the sole source of that which is undeniably honorable. Foolish me. There may be some elements of accuracy in this paper. However, it is so obvious that it is an overreach, and that the procedure is to find some way, any way, to torture and manipulate data and scientific procedure to come up with a desired result, regardless of whether it is a correct result. A mindset akin to the hockey stick eliminating the medieval warm period.

    To say it much better: ‘Figures don’t lie, but liars figure.’

  50. The climate’s been shot. Round up all the usual suspects.

  51. Reblogged this on This Got My Attention and commented:
    Color me unconvinced, too.

  52. Dr. Bouldin took a look at the data:

…“So that’s 0.133 vs 0.086, which is a ratio of about 1.55. If one picks the year 2000 as the demarcation point between the two periods, then it’s a little trickier because Karl et al don’t actually give an estimated rate from 2000 to 2012. But one can estimate it by assuming the rate difference between 1998 to 2014, and 2000 to 2014, both of which they do give, is a decent approximation of the difference between 1998-2012 and 2000-2012. When I do so, I get a ratio of warming rates between the two periods (1951-1999 and 2000-2012) that’s very similar: 1.47. Taking the inverses, the rate of warming between the slowdown period and the pre-slowdown period, is about 2/3. Given the large number of data points (grid boxes) that make up each year’s mean value, it has to be essentially certain that, according to these data, there has in fact been a very highly statistically significant difference in warming rates between these two periods, regardless of whether you use 1998 or 2000 as the breakpoint year.”…

    https://ecologicallyoriented.wordpress.com/2015/06/05/did-the-rate-of-global-warming-decline-around-year-2000-or-not/
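The arithmetic in the quoted passage is easy to reproduce from the two rates it cites (using only the numbers given in the excerpt):

```python
# Checking the arithmetic in the quoted passage, using only the
# rates given there (C per decade, as quoted from Karl et al).
rate_pre  = 0.133   # pre-slowdown rate (1951-1999), as quoted
rate_post = 0.086   # slowdown-period rate, as quoted

ratio = rate_pre / rate_post
print(round(ratio, 2))    # 1.55

inverse = rate_post / rate_pre
print(round(inverse, 2))  # 0.65, i.e. roughly two-thirds
```

So Bouldin’s stated ratio of about 1.55, and his “about 2/3” for the slowdown rate relative to the pre-slowdown rate, follow directly from the two quoted numbers.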

It is very important not to come aboard the Karl adjustment ship after the ship’s initial departure.
      The data used in the aforementioned was itself homogenized.

I have over 50 articles from the 1970s warning of the next ice age.
      The most common line to be found: “There is a complete consensus (where have I heard that word recently?) among the world’s leading scientists that the earth will be as much as –fill in the blank– (one example: six degrees Celsius) colder by the year 2000.”

      Included were graphs, more graphs, and more graphs yet.
      They all showed cooling from about the early 1940’s to the late 1970’s in various amounts.

But Tom Karl was a leader in making that all disappear. As I mentioned before, for the fun of it, check the NASA/GISS temperature record for any one location as recorded, say, every two years. Then plot the temperature change on the ordinate as a function of the years you checked.

      Raw U.S. temperatures in the 1930s were much higher than today’s, until Karl and his gang waved their homogenization wand. Somehow my parents would have thought very little of Karl.
      (So far I only have access to U.S. records.)

      —and if we had drought now like we had then, there would be AGW hysterics. Someone might decide to write a book entitled ‘THE GRAPES
      OF WRATH’

      • “I have over 50 articles from the 1970’s warning of the next ice age”

        newspaper articles.

        The Myth of the 1970s Global Cooling Scientific Consensus
        http://journals.ametsoc.org/doi/pdf/10.1175/2008BAMS2370.1

        “Raw U.S. temperatures in the 1930’s, were much higher than today, until Karl and his gang waved their homogenization wand”

        Look, if you climate skeptics are too lazy to process the data yourselves, that’s your problem. You deny BEST, deny NOAA, deny Met Office, deny NASA. All efforts to process the weather data into a global temperature record you deny.

        Remind me, why should we care what you say? We ignore people who conspiracy theorize about tower 7. How are you guys any different? You bring no data, no analysis, just conspiracy theories.

      • Mike Flynn

        nebakhet,

        There are the Seebeck effect, the Peltier effect, the Hall effect, amongst others. They have been measured and provide useful, predictable results.

        On the other hand, there is the nonexistent “greenhouse effect”: no measurable effect, no use, and no predictable results.

        I’m sure you have the ability to completely believe the nonexistent. I can offer you a wide range of nonexistent things to believe in, if you wish. From phlogiston, the luminiferous ether, zero point energy, and orgone energy to N-rays. Free of charge, of course.

        Why stop at the greenhouse effect?

      • greenhouse effect is 100% proven science. Science is settled. Check the textbooks.

      • What Karl et al. did is adjust the more accurate buoy temperature data upward to match the “bucket” method of measuring temperature, which has an error of ±1.7 °C, and accept a statistical significance threshold of 0.1 instead of the generally accepted 0.05. How is this even remotely scientific?

    • Looks like he’s going to take another look: “Update, 6-6-15: I’m revising this post based on an issue that Gavin Schmidt of NASA made me aware of on Twitter, involving the averaging of trends. It will change the results I’ve given below. Stay tuned.”

  53. “Kevin Trenberth, climate analysis chief at the National Center for Atmospheric Research, said the new work was “good and careful analysis” but only confirms what most scientists already knew, that there was no such hiatus.”

    Trenberth’s Travesty was fiction?

    • Wonderful observation. Apparently warmunists have memories equivalent to Alzheimer’s patients’. They forget that the web and the Wayback Machine do not.
      It gets to be more fun as Time Goes By (thanks, PBS and BBC, for the human comedy, NOT your CAGW scare).

      • maybe you are the one suffering from an early-onset mental illness.

        What Trenberth says now about a pause in surface temperature and what he said years ago about ocean heat content are not in contradiction.

  54. Pingback: NOAA tries to make global warming slowdown vanish | Tallbloke's Talkshop

  55. How do you ascertain which dataset or model is better/more skillful?

    Predictive ability? The metrics used are themselves a product of the model…(?) Reasoning in a circle, it might seem.

  56. When the UAH satellite temperature record was adjusted recently, warmists accepted that such adjustments were a valid part of the scientific process, because we are pro-science and rational people.

    When the NOAA record is adjusted, skeptics go nuclear, spewing bile and conspiracy theories without any focused argument.

    Climate skeptics have steadfastly refused for years to engage with the scientific process, get to the heart of the matter, and process the weather data themselves into a global temperature record, yet have expended massive amounts of time attacking anyone who does. Nor do climate skeptics submit any arguments against the adjustments to peer review. One can only conclude that climate skeptics WANT the issue of adjustments, from their ignorant point of view, to be a mystery. They get lots of play out of that. If they were to do the work and discover the warming was genuine they’d lose some of their conspiracy theories.

    Climate skeptics aren’t even consistent about whether they accept the records in the first place. They will lean on records like the met office record to claim there is a pause or no statistically significant warming for X years, implying that they accept those records are good enough to draw conclusions about the trend of the earth’s warming, but then in the next moment they are telling us the records are manipulated and cannot be trusted.

    It seems skeptics’ acceptance of a record is proportional to how convenient it is for their argument. With 2014 and now 2015 setting new highs and raising the question of whether the pause was just noise in a continuing long-term trend, I notice skeptics are falling back to form: pure denial of the records themselves.

    • David Wojick

      Nice spewing!

    • Mike Flynn

      nebakhet,

      I am unaware of anyone who denies the existence of climate. Maybe you could name one or two.

      I am also unaware of anyone who refuses to accept that the Earth has cooled since it was created, but I accept that it is impossible to prove that the Earth was created a second ago.

      • Mike Flynn

        Whoops! Fat finger again!

        To continue –

        So four and a half billion years of greenhouse effect trapping or accumulating heat, due to GHGs in the atmosphere, has failed to stop the Earth cooling to its present temperature.

        Of course it’s settled science. It is settled that the greenhouse effect has never been demonstrated or experimentally verified.

      • Mike Flynn

        It’s also settled that you cannot provide a single specific instance of humanity having benefitted from climatology (or phrenology, for that matter, but I might be wrong about phrenology).

        More hand waving, and unsubstantiated assertions.

        Have you anything useful to offer?

      • Brian G Valentine

        “better have a good climatology to predict how much snow you expect to get.”

        This is climate prediction based on historical trends, and is every bit as good as stock market forecasting.

        Climate Projection (inference from first physical principles) = nothing at all.

      • > This is climate prediction based on historical trends, and is every bit as good as stock market forecasting.

        The converse might not be true.

      • Steven Mosher,

        I suppose you refer to the benefits which accrued to the operators of Heathrow Airport. They sold much of their snow clearing equipment on the finest climatological advice that snow was rapidly becoming a thing of the past.

        They received a severe lesson about the stupidity of listening to climatologists’ forecasts, after record snowfalls caught them with their clever pants down.

        I am glad you use climatological forecasts for sales forecasting and inventory control. I hope you aren’t confusing weather forecasts with climate forecasts. As you have said, in your company you revise history on a moment-by-moment basis. Sales history, inventory history, weather history – you can just revise it all.

        No one can forecast the climate better than I. My forecasts of both weather and climate in the past have proven 100% accurate.

        Maybe all the Californian walnut farmers should have asked for the climatological predictions of drought in their specific areas? Or are you saying you can only make useful predictions after the event.

        Your comment about consulting a climatologist before building a new ski resort is just bizarre. Who would actually put money on a climatological forecast? Companies waste money on such follies to demonstrate their green credentials, beg for Government handouts, or lull investors into a false sense of confidence.

        I’m not interested in asking Judith about her business, any more than I would ask a drug company about its reasons for selling products of doubtful benefit. I don’t use either product, but what others choose to do is their business.

        You made a statement that “we can fix your stupidity”, and I presume you are using the Royal “we”, as befits a supremely intelligent and august personage such as yourself.

        You are not concerned you may have made a foolish statement, are you? How do you intend to fix my stupidity? A magic spell, perhaps? Possibly a Climatological Intervention ™?

        You still haven’t provided a single instance of a documented, measurable, benefit of climatology to mankind. Surely there must be at least one! Even astrology gets things right once in a while.

        Sorry Steven, but it still doesn’t appear that climatology is useful. At least belief in climatology doesn’t seem to cause the believer any physical harm!

    • daveandrews723

      People like you are the ones in denial about the abysmal failure of the models to predict the influence of CO2 levels on global temperatures. The CAGW hypothesis (not scientific fact, as you warmists like to assert) has not withstood the scientific method. Actual observations have a stubborn way of disproving flawed models and hypotheses. It is laughable the way warmists are now trying to blame every drought, heavy rain, hurricane (or lack thereof), harsh winter, mild winter and every other natural weather/climate event on CO2 increases. Your song is getting old. The temperatures are not going up, so “global warming” is now “climate change.”

      • “Globalclimatewarmingchange” – the perfect liberal cause, anything can be blamed on it. It can therefore be used to justify massive goblement intervention to save the masses at their expense while enriching the elite for saving the masses from themselves.

    • “The temperatures are not going up”

      get a clue

      “so “global warming” is now “climate change.””
      IPCC

      get a clue

      • David Wojick

        Neb, it is amusing that on a thread about the inaccuracy of surface statistical models you start off by offering one. According to UAH, which actually measures the atmosphere, that GISS graph is completely wrong. It did not warm 1978-1997, nor 2000-now. The only warming was a small (0.16 degrees C) step up coincident with the giant ENSO cycle.

        On sea level, my conjecture is that a straight line is more or less accurate for the last 10,000 years or so.

        The sea heat graph is hugely less accurate than the surface models because it is 3D, not 2D, and it uses a far sparser convenience sample.

        You have simply missed the point, which is that the Karl adjustment indicates that the surface statistical models are probably junk.

      • David,
        If you want to measure the greenhouse effect, you need to measure the surface temperature, not what is measured by the RSS and UAH datasets.

        They may be more accurate, but they are less relevant to the question at hand.

      • David Wojick

        Bob, the greenhouse effect does not happen at the surface so why would I want to measure it there?

      • Mike Flynn

        bobdroege,

        Maybe you should tell that to the people that measure anything but the surface temperature? That is, they don’t actually measure the temperature of the surface. They just pretend to.

        Just like the greenhouse effect. Apparently nothing to do with greenhouses, but they didn’t think anybody would notice.

        Silly!

      • Steven Mosher

        “Bob, the greenhouse effect does not happen at the surface so why would I want to measure it there?

        huh?

        read the science. “The” greenhouse effect happens in the oceans, in the ice, in SAT, in the stratosphere… “The” effect is not simple.

      • The effects of the greenhouse effect are maximum at the surface, so that is where you want to measure it.

        If it causes cooling in the stratosphere, you don’t want to measure it there and at the surface and then average the numbers; that is sure lunacy.

        I’ll take a decent modeled temperature of the surface: take a bunch of bi-metallic resistance readings at a thousand various locations, take the anomalies of each location, and average – or something like that.

    • How naive do you have to be to believe global warming isn’t happening?

      • How naive do you have to be to think man is causing it?

        Also – since the satellites are clearly skewed against the tide-gauge trend, the satellite algorithms apparently are going to have to be fixed by legislation, since the satellite analysis teams aren’t smart enough to fix their algorithms on their own.

    • Neb

      Adjustments are to climate science what steroids were to sports. If you can’t win on your own, get a little help from your friends. It is all a pitiful sign of desperation.

    • You forgot to mention the Koch Bros., Bush, the Tea Party and Faux News… oh, and Sarah Palin.

    • nebakhet | June 5, 2015 at 6:27 am | Reply
      When the UAH satellite temperature record was adjusted recently, warmists accepted that such adjustments were a valid part of the scientific process, because we are pro-science and rational people.

      When the NOAA record is adjusted, skeptics go nuclear, spewing bile and conspiracy theories without any focused argument.

      This is a warmist fantasy.

      The numerous and particularly vicious attacks of the warmunists against any study, data, or scientist who casts the slightest doubt on global warming orthodoxy is legendary.

      http://dailycaller.com/2014/03/17/u-s-college-professor-demands-imprisonment-for-climate-change-deniers/
      http://scienceblogs.com/illconsidered/2008/11/hansen-wants-skeptics-in-jail/
      http://www.weeklystandard.com/blogs/sen-whitehouse-d-ri-suggests-using-rico-laws-global-warming-skeptics_963007.html

      The constant calls to jail skeptics by warmunists are offensive to any honest freedom loving American. The time has come to terminate the several $ 10s of billions in annual climate change funding until the advocates learn to operate by the rules of fair and honest debate in a free society.

  57. David Wojick

    We have all these warmer scientists publishing papers explaining the pause and here is Tom Karl saying it never happened. This will put the warmers into disarray for sure. Are they suddenly to abandon their statistical base?

    Karl by the way is the top proponent of a NOAA Climate Service to match the Weather Service, which he would head of course. His is a bold move but it may backfire. He also chairs the USGCRP committee.

    I have long said that the surface numbers are inaccurate, so now comes King Karl saying the same thing, with the ultimate adjustment. It does not get much better than this.

  58. Pingback: Has NOAA ‘busted’ the pause in global warming? | ajmarciniak

  59. Pingback: No “pause”? | …and Then There's Physics

  60. Steinar Midtskogen

    Over at Realclimate Gavin writes: “The ‘hiatus’ is so fragile that even those small changes make it disappear”.

    I posted a comment there (which I don’t really expect to pass moderation):

    How is that statement different from the “denier” viewpoint that any warming over the past two decades is so fragile that just small changes make it disappear? This looks like an admission of Curry’s uncertainty argument to me.

    • David Wojick

      Indeed, these statistical models are very sensitive to small assumptions, which makes them worthless. Should all the papers purporting to explain the hiatus now be retracted for using erroneous data? Should the papers now in the publication pipeline be rejected and the grants terminated? The data is adrift!

    • Steven Mosher

      “How is that statement different from the “denier” viewpoint that any warming over the past two decades is so fragile that just small changes make it disappear? This looks like an admission of Curry’s uncertainty argument to me.”

      not really. The statement differs in that Gavin’s is true and yours is not.

      • Gavin’s statement isn’t true, though, is it, Mosher? Think harder, mate.

      • Steven Mosher

        The existence of the pause is not robust with respect to varying methods and data. It may be in a few years. Patience.
        The existence of a longer warming trend is robust to a wide variety of assumptions about data and methods.

        People keep making the same mistake on both sides.. Isolating specific data sets and focusing on one or two methods.

        5 more years..

      • Mosher thanks for replying but 5 more years? The existence of the pause not being robust in the face of varying climate science ‘methods’? Back to the ‘long term trend’ so long as it’s a cherry picked nice cold year?

        Come on mate. Talk about moving the goalposts.

        Climate science says that with as much CO2 about as there is these days we should be much warmer than we are and we should have been warming at 0.2 degrees centigrade per decade. The pause shows climate science models run hot. It is what it is. 5 more years of as is, climate science is not robust to the existence of the pause.

        5 more years of Karl-quality data fiddling, Schmidt dissembling and persistent NOAA ‘adjustments’, and the climate scientists will be pointing at their stupid graphs and telling us The Grapes of Wrath was never written and there was no Dust Bowl.

        Alt history is a crappy fictional genre. Get out of it, mate.

      • Steinar Midtskogen

        Steven, how much longer, roughly, is the distance from A to B than the distance from B to A?

      • Climate science says that with as much CO2 about as there is these days we should be much warmer than we are and we should have been warming at 0.2 degrees centigrade per decade. …

        .2C per decade starting at what date?

    • Brian G Valentine

      “The statement differs in that gavins is true and yours is not”

      Gavin showed nothing. Gavin has no standard deviation to argue from, nor a standard deviation demonstrated to decrease as 1/SQRT(# of measurements).

    • > This looks like an admission of Curry’s uncertainty argument to me.

      What would be Judy’s argument, Steinar?

      • Steinar Midtskogen

        Willard, Judy is in my opinion one of the few in this debate who dare to say aloud that uncertainty exists and is a problem. I think Gavin fears uncertainty.

      • Your probing of Gavin’s mind and your praise of Judy are duly acknowledged, Steinar. However, I reject both as ways to formulate Judy’s argument.

        Thank you for your concerns.

    • Steinar Midtskogen | June 5, 2015 at 7:17 am | Reply
      Over at Realclimate Gavin writes: “The ‘hiatus’ is so fragile that even those small changes make it disappear”.

      35% (100 × 0.24 °C/0.69 °C = 35%) of warming since 1910 is Computer Generated Anthropomorphic Global Warming (CGAGW). This technically is machine-made global warming, not man-made global warming, and should be noted on all charts where it is included. Note: CGAGW is not caused by CO2 increases and eliminating CO2 emissions will have no effect on CGAGW. CGAGW is caused by computers and can only be stopped by defunding the department involved, scrapping the hardware, and firing the personnel.

      http://data.giss.nasa.gov/gistemp/graphs_v3/
      0.24°C or 35% is significant. It is about the same as the CO2 contribution to global warming since 1940 which is 0.24°C or about 35%. According to GISS the post 1940 warming is 0.8°C if by land and 0.5°C if by land and sea.

      http://wattsupwiththat.com/2012/12/07/a-brief-history-of-atmospheric-carbon-dioxide-record-breaking/
      It should be noted that fossil fuel emissions didn’t catch up to the rise in CO2 until 1960 (over 1/2 of fossil fuel emissions came after 1985). So it is unclear when we should really start blaming man for causing warming.

      The tendency of government scientists to alter historic data makes attribution between anthropogenic warming, natural warming, and natural cycles difficult, since the constantly changing CGAGW adjustment has to be removed first. Here is how much GISS has distorted data since 2008:

      • > CGAGW is caused by computers

        More precisely, CAGW is created by contrarians. That they used computers to create this strawman may only be a correlation.

      • So much dissembling, isn’t there PA?

        Gavin Schmidt writes:
        “…if by ‘hiatus’ or ‘pause’ people mean a change to the long-term trends…”

        and
        “If people use ‘slowdown’ to simply point to a short-term linear trend that is lower than the long-term trend…”

        People use ‘pause’ to identify the period during which there has been an absence of the warming that IPCC, CMIP5 models and RCP8.5 scenario told us would occur when CO2 emissions reached the levels in that RCP8.5 scenario. The CO2 levels have been reached but not the warming. Ergo pause.

    • Schmidt:
      “Thus I predict that while contrarians will continue to bleat about this topic (the Pause, as far as Ragnaar can tell), scientific effort on this will slow because what remains to be explained is basically now well within the bounds of what might be expected.”
      See more at: http://www.realclimate.org/index.php/archives/2015/06/noaa-temperature-record-updates-and-the-hiatus/#more-18571

      What remains to be explained is basically now well within the bounds of what might be expected. Two qualifications. Basically and might.
      What remains to be explained is now well within the bounds of what is expected.
      He did not say the immediate above. Bounding things, such as 1.5 C to 4.5 C for a doubling. Say I tell you that you have from $1.5 million to $4.5 million in your bank account. You can now find a financial planner and use those numbers, and here’s my bill. I think we had a very good meeting. Thank you.

  61. “Paris is a coming,
    And so are the lies,
    No warming pause,
    Now there’s a surprise!
    The weather recorded
    Doesn’t fit with the plan;
    Adjust temperature records,
    Keep blaming man….”
    Read more from:
    http://rhymeafterrhyme.net/no-warming-pause-now-theres-a-surprise/

  62. Mosher: “Technically It doesnt matter whether you adjust the recent past or the deep past.”

    What you’re suggesting is that these important adjustments are just Squiggology.

    How predictable.

    Andrew

    • Steven Mosher

      No. I am making a math point

      You have a scale. You weigh yourself every morning; every morning
      you weigh 200 lbs.
      You buy a second scale. It measures 202. For the next 3 months you measure with both scales. One says 200, the other says 202.

      Now I ask you: please estimate your change in weight over the last year.

      Your data look like this:

      Scale 1: 200, 200, 200, 200, 200, 200, 200, 200, 200, NA, NA, NA
      Scale 2: NA, NA, NA, NA, NA, NA, 202, 202, 202, 202, 202, 202

      Do you
      1. Average the two?
      2. Adjust 200 to 202?
      3. Adjust 202 to 200?

      Note that option 1 gives the wrong answer.
      Note that the trend doesn’t care whether you use #2 or #3.
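      Mosher’s scale analogy can be checked numerically. A minimal sketch, with the monthly values taken from the comment above and an ordinary least-squares slope standing in for the “trend”:

```python
def slope(y):
    """Ordinary least-squares slope of y against 0, 1, 2, ..."""
    n = len(y)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(y) / n
    return sum((x - mx) * (v - my) for x, v in zip(xs, y)) / sum((x - mx) ** 2 for x in xs)

# Option 1: naively average the two scales wherever each reports
# (months 1-6: scale 1 only; 7-9: both, averaged; 10-12: scale 2 only)
option1 = [200.0] * 6 + [201.0] * 3 + [202.0] * 3
# Option 2: adjust scale 1 readings up to match scale 2
option2 = [202.0] * 12
# Option 3: adjust scale 2 readings down to match scale 1
option3 = [200.0] * 12

print(slope(option1))                  # positive: the step creates a fake trend
print(slope(option2), slope(option3))  # both 0.0: adjustment direction is irrelevant
```

      The point of the sketch is exactly Mosher’s: averaging across the offset manufactures a trend, while options 2 and 3 give identical (flat) trends.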

      • I’m not sure why you feel compelled to adjust data, even when weighing yourself. Why not let the data be the data?

        Oh… wait…

        Andrew

      • Bad analogy. It should look like this.

        You have no scale. you look at old pictures of yourself taken infrequently, over many years, none of which shows your whole body.
        you weigh 100 lbs.
        you buy your first scale. It measures 100.01 for the next 3 months. But it can only weigh one leg, half an arm, and a third of your torso.

        Now I ask you: Please estimate your change in weight over the last year

        Note that the trend doesn’t care because you have no real measurements of your total weight, just guesses and extrapolations.

        Would you give control over your and your family’s entire lives to a government bureaucracy because they think you might have gained .01 pounds in the last year?

        No?

        DENIER!

      • The actual criticism is that Karl, et al., chose to adjust the more-reliable buoy data because it was an increasing-over-time proportion of all measurements (which are, to a certain extent, “averaged” to produce global data). So rebutting the criticism is not as simple as your analogy suggests.

      • The above comment was for Steven Mosher, not Gary M, in case that wasn’t clear.

      • Steven Mosher

        “I’m not sure why you feel compelled to adjust data, even when weighing yourself. Why not let the data be the data?

        Oh… wait…

        Andrew
        ####################

        Nothing is done to the data. it remains the data.

        The question has your weight changed?

        IF you change the method of observation, you have to account for a bias.

        Suppose I had a thermometer outside a city. It reads 70.
        Then I move it into the city. It reads 75.

        With your approach to data, that would be a valid trend.

      • Steven Mosher

        “The actual criticism is that Karl, et al., chose to adjust the more-reliable buoy data because it was an increasing-over-time proportion of all measurements (which are, to a certain extent, “averaged” to produce global data). So rebutting the criticism is not as simple as your analogy suggests.”

        WRONG.

        mathematically, for trend analysis it doesn’t matter. There may be PRACTICAL reasons for adjusting the deep past as opposed to the recent past. That is what Karl does with land records. And of course folks criticize him for doing that.

        so with the land records they adjust the deep past and people bitch;
        with SST they adjust the recent past and people bitch.

        The actual difference between the methods is zero. zip. nada.

      • the issue with the buoy adjustment is this: the offset is 0.12 °C ± 1.7 °C. This is not a clean adjustment. Adjusting a relatively clean dataset (buoys) with this very uncertain offset seems a sure way to muck up your dataset beyond the original quality of the data.

      • Steven Mosher

        “You have no scale. you look at old pictures of yourself taken infrequently, over many years, none of which shows your whole body.
        you weigh 100 lbs. .
        you buy your first scale. It measures 100.01 for the next 3months. But it can only weigh one leg, half an arm and a third of your torso.”

        #################

        you have measurements taken from ships. They are temperature measurements.
        you have OVERLAPPING, COLLOCATED measurements from buoys.
        They are temperature measurements.

        I don’t like the fact that we may have to pay more taxes because of global warming. Perhaps the skeptics can save us from these taxes.

        To DO THAT, they have to have smarter people than you giving smarter arguments. You have an opportunity to make smart arguments. Follow Nic Lewis and do what he does. At least people in power actually listen to him. Why? Because he doesn’t make dumb arguments like you do.

      • Don Monfort

        Your friend Ross is a smart guy, Steven. He said this:

        “A. Looking at the first adjustment, K15 take the buoy data and add 0.12 oC to each observation. They computed that number by looking at places where both buoy data and ship data were collected in the same places, and they found the ship data on average was warmer by 0.12 oC. So they added that to the buoy data. This is similar to the amount estimate found by another teams, though the bias is usually attributed to ships rather than buoys:

        Recent SST observations are conducted primarily by drifting buoys deployed in the global oceans (Figs. 1, 2). The buoys measure SST directly without moving seawater onto deck or to the inside of a ship. Therefore, buoy observations are thought to be more accurate than either bucket or ERI data… In the present study, we regard this difference as a bias in the ERI measurements, and no biases in drifting buoy observations are assumed. The mean ERI bias of +0.13 oC is obtained and is within the range for the global region listed in Table 5 of

        Kennedy et al. (2011).

        (quote from Hirahara et al. 2014, p. 61)

        That quote refers to a paper by Kennedy et al. (2011, Table 5)[5] which reports a mean bias of +0.12 °C. However, Kennedy et al. also note that the estimate is very uncertain: it is 0.12 ± 1.7 °C! Also, the bias varies by region. This is a key difference between the method of K15 and that of others. K15 added 0.12 °C to all buoy data, but the Hadley group and the Hirahara group use region-specific adjustments.”

        Can you tell us if in your opinion adding the .12C to the buoys is justified?

      • Don Monfort

        should be:

        “…also note that the estimate is very uncertain: it is 0.12 ± 1.7 °C!”

      • Mike Flynn

        Steven Mosher,

        The trend doesn’t matter. Neither does your slightly odd and irrelevant analogy.

        If I attempt to measure the air temperature today, at a particular location with any accuracy or precision, I will be indulging in a fairly pointless exercise.

        As Tyndall pointed out over 100 years ago, “To determine the true temperature of the air is, it may be remarked, a task of some difficulty: . . .” Tyndall practised the English habit of understatement quite well.

        You are probably not measuring the temperature of the air. Even if you manage to do this, the air is in constant motion. Worse, it is transparent. The air you measured 5 minutes ago may not be the air you measure 5 minutes hence. The environment has most certainly changed in the interim, and assuming that measurements made at regular intervals will ensure the identical environmental conditions for each measurement is not rational.

        Averaging anything is fraught with potential silliness. Average car colour? Average telephone number? Average weather at an average locality?

        It is becoming apparent to real scientists, that supposed temperature measurements from buoys, ships, land based thermometers, satellites, etc., may be sufficiently imprecise to be worthless in relation to past conditions.

        So you might as well invent past temperatures. There are worse ways of wasting time, effort and money.

      • Steven Mosher:

        You do not seem to grasp the point of the criticism. It is an accusation that Karl, et al., chose to adjust the “good” data upwards (rather than the “bad” data downwards) with the intention of pushing future trends higher. Whether this is the actual effect of their adjustment process (buoys vs boats) may be debatable but you are not debating it. The critics are referring to this specific case — not a generic “equal choices” hypothesis.

        Earlier in the comments you said, in response to a claim the warming trend was increased by Karl, et al.’s, methodology:

        Take the karl paper we are discussing.

        The change to the long term trend.. from 1880 to present is the OPPOSITE of what you claim.

        See the table in the SI. YUP. the new version has a slightly lower trend.

        Yet when I checked that table I found what appears to be the opposite of your assertion:

        Table S1. Trends of temperature (°C/decade)…

        Global       New              Old
        1880‐2014    0.068 ± 0.017    0.065 ± 0.015
        2000‐2014    0.116 ± 0.067    0.066 ± 0.076

        As I understand it, the paper’s buoy adjustments impact ERSSTv4. In the noted table, that would apply to the ocean trends, which also show a significant increase in warming trends over ERSSTv3 in recent years (and potentially confirm the critics’ suspicions):

        1880-2014 Ocean: New 0.055 +/- 0.017; Old 0.055 +/- 0.015
        2000-2014 Ocean: New 0.099 +/- 0.078; Old 0.036 +/- 0.097

        http://www.sciencemag.org/content/suppl/2015/06/03/science.aaa5632.DC1/Karl-SM.pdf

      • Don Monfort

        Mosher doesn’t want to address the substantive criticisms of the Karl BS. Very disappointing, Steven.

      • 0.12C+/-1.7C

        A perfect metaphor for the goofiness that poses as normalcy in a closed system, a system so closed that brains have become oxygen deprived. Strange things happen. Sort of like being in a life raft for days unprotected from the sun.

        And they call this science.

      • Don Monfort

        Close enough for gubmint work, but could get the perps jailed in the real world.

      • Hello,

        Regarding the 0.12degC number. In our table 5, we presented results from matches between individual ship and buoy observations. Globally, there were 21,870 matches with a mean SST difference of 0.12degC and a standard deviation of 0.85degC. This gives a standard error of 0.01degC to two decimal places.

        The question is, what is the appropriate uncertainty – standard deviation or standard error – to assign to that mean difference when adjusting the ship data as a whole? (the “as a whole” bit is key). I think that the appropriate uncertainty is the standard error of 0.01degC rather than 0.85degC. Here’s why…

        We know that individual ships will be biased differently and that every measurement from any one of those ships will have additional random errors that change from one measurement to the next. We can estimate the uncertainties associated with these two factors and this has been done several times. A large component of the standard deviation of the difference between the SSTs measured by ships and SSTs measured by buoys will come from those two factors. If the population of ships *as a whole* were unbiased (i.e. the individual ship biases had a mean of zero) then averaging together all the measurements from all the ships would give a mean close to zero. You would expect it, in fact, to be within a few standard errors of zero.

        However, the mean of all the observations is 0.12C, which suggests there is some common bias across the fleet of ships. The uncertainty of the mean of that distribution is the standard error. It’s this common bias and its uncertainty we are interested in removing before combining ship and buoy data. The uncertainties associated with the other types of error that affect single observations are also factored in through the uncertainty estimates (in the case of HadSST3) or via the relative weights given to ship and buoy observations (in ERSST).

        To make it clearer we can ask what the same numbers tell us about the bias in a single solitary ship observation. In that case, our best estimate of the bias in that single solitary ship observation is still 0.12degC, but in this case the uncertainty would be the standard deviation of 0.85degC, which is close to the uncertainty associated with errors of a single ship observation estimated in other ways. As I mentioned before, that uncertainty is already included.
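        Kennedy's distinction between the two uncertainties can be checked with a few lines of arithmetic. This is a minimal sketch using only the figures quoted above (21,870 matches, a 0.12 degC mean difference, a 0.85 degC standard deviation); the variable names are illustrative:

```python
import math

# Figures quoted from HadSST3 (part 2, table 5), per John Kennedy's comment.
n_matches = 21870    # collocated ship-buoy pairs, 1998-2007
mean_diff = 0.12     # degC, ships warmer than buoys on average
sd_diff = 0.85       # degC, spread of individual pair differences

# Uncertainty of the bias in a SINGLE ship observation: the full SD.
single_obs_sigma = sd_diff

# Uncertainty of the fleet-wide mean bias: the standard error.
std_error = sd_diff / math.sqrt(n_matches)

print(f"standard error of the mean bias: {std_error:.4f} degC")
# -> 0.0057 degC, i.e. 0.01 degC to two decimal places, as Kennedy states
```

The point of the sketch is that dividing by sqrt(21870) shrinks the 0.85 degC scatter by a factor of about 150, which is why the fleet-wide bias is known far more precisely than any single observation.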

        The question has also been raised as to whether it is better to adjust the ship data using the 0.12degC or to adjust the buoy data. There are arguments both ways. Adjusting the ship data brings the biased ships into line with the unbiased buoys. On the other hand adjusting the buoy data brings the buoys into line with the ship data, which constitute the majority of the historical record. Either way, when these are presented as anomalies relative to the 1961-1990 base period, you have to take into account the fact that data from the climatology period was mostly ship data. Subtracting 0.12C from the ship data would cool the climatology by 0.12C (more or less) which would mean that the anomaly for a drifting buoy observation relative to that climatology would increase by that amount. If you are looking at anomalies, the net effect of adjusting the ship data is the same as the net effect of adjusting the buoy data.
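        The anomaly argument can be demonstrated numerically. In the sketch below the 15.00 degC climatology and 15.30 degC buoy reading are invented for illustration; only the 0.12 degC offset comes from the discussion above:

```python
# Toy demonstration of Kennedy's point: in ANOMALY space, subtracting
# 0.12 degC from the ships is equivalent to adding 0.12 degC to the buoys,
# because the 1961-1990 climatology is itself built from (mostly) ship data.

BIAS = 0.12              # degC, ships warmer than buoys

ship_climatology = 15.00 # degC, baseline-period mean (ship data); invented
buoy_obs = 15.30         # degC, a modern drifting-buoy reading; invented

# Option 1: adjust the buoys up to match the ships; climatology unchanged.
anom_adjust_buoys = (buoy_obs + BIAS) - ship_climatology

# Option 2: adjust the ships down to match the buoys; climatology cools too.
anom_adjust_ships = buoy_obs - (ship_climatology - BIAS)

print(round(anom_adjust_buoys, 2), round(anom_adjust_ships, 2))
# both anomalies come out the same (0.42 degC)
```

Either choice moves the modern anomalies relative to the baseline by the same amount, which is the sense in which the two adjustments are equivalent.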

        When we made HadSST3, one of the tests we did was to try both. There was, as expected, little difference between the two choices. See part 2 of the HadSST3 paper, section 4.4 “Exploring the sensitivity of bias adjustments”

        http://www.metoffice.gov.uk/hadobs/hadsst3/

        Cheers,

        John

      • David Springer

        @john kennedy

        You state the majority of the historical record is from ship data. Okay.

        In the same vein the majority of historical time stamps are from sundials.

        Would you propose correcting modern time keeping devices because they don’t agree well with sundials?

        You aren’t fooling anyone, John. Changing the much more robust, much more recent buoy data changes the most recent trend (read “the pause”) while changing ship data changes the older trend but leaves the pause unaffected.

        Your goal was to make the pause go away so you chose to correct atomic clocks instead of sundials. Asshat.

      • Don Monfort

        Well, put that way it seems very reasonable. What difference does it make if you pretend that buoys are ships that pass in the night, or you pretend that ships that pass in the night are buoys?

      • Don Monfort

        “Globally, there were 21,870 matches with a mean SST difference of of 0.12degC and a standard deviation of 0.85degC.”

        Please define “matches”.

      • Hi Don,

        The full description is given in the paper:
        http://www.metoffice.gov.uk/hadobs/hadsst3/part_2_figinline.pdf

        copied here (PDFs don’t copy and paste well; I think I got all the typos, but check the original in preference):
        “A database of nearly coincident ship and buoy observations for the period 1998-2007 was created in which ship-buoy pairs were selected that lay within 50km of one another and on the same day. To avoid complications from diurnal heating, only observations taken close to local dawn were used. The average differences were calculated for each ocean basin, and for the globe. The average difference between ship and drifting buoy observations in the period 1998-2007 was 0.12degC, with ships being warmer than drifting buoys.”
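        A minimal sketch of the kind of pairing described above (same day, within 50 km, ship minus buoy). The function names and the brute-force loop are illustrative assumptions, not the Met Office's actual code:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points (degrees)."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def match_pairs(ship_obs, buoy_obs, max_km=50.0):
    """Pair ship and buoy obs taken on the same day within max_km of each
    other. Each obs is a dict: {'date', 'lat', 'lon', 'sst'}. Returns the
    ship-minus-buoy SST differences."""
    diffs = []
    for s in ship_obs:
        for b in buoy_obs:
            if s['date'] == b['date'] and \
               haversine_km(s['lat'], s['lon'], b['lat'], b['lon']) <= max_km:
                diffs.append(s['sst'] - b['sst'])
    return diffs

# Averaging the diffs over all dawn observations, 1998-2007, is what gave
# the 0.12 degC figure in HadSST3 part 2 (per the quoted description).
```

The dawn-only restriction in the quoted text would be an additional filter on the input lists before pairing; it is omitted here for brevity.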

        Cheers,

        John

      • Don Monfort

        Thanks, John. That certainly inspires confidence.

      • “you have measurements taken from ships. They are temperature measurements.
        you have OVERLAPPING COLLOCATED measurements with bouys
        They are temp measurements.”

        So speaks the defender of all things CAGW.

        Funny, I never saw a global temp reconstruction based on bucket temp measurements from ships, particularly for paleo measurements going back as far as claimed by the gods of Thermageddon, on whose behalf Brother Mosher so eloquently obfuscates.

        Didn’t Galileo invent the thermometer in the 1600s?

        Yet the IPCC puts out, and its acolytes like Mosher defend:

        http://www.theguardian.com/environment/2010/feb/02/hockey-stick-graph-climate-change

        But these are “OVERLAPPING COLLOCATED!!!”, antiquated, imprecise measurements of a tiny proportion of the global climate area going back a thousand years. So that’s OK.

        Silly me, climate tricks are for ‘scientists’.

        “I dont like the fact that we may have to pay more taxes because of global warming. Perhaps the skeptics can save us from these taxes.”

        Anybody who still believes Mosher is any kind of lukewarmer, email me for a prospectus on Solyndra at http://www.suckers‘r’us.com.

      • David Springer

        Steven Mosher | June 5, 2015 at 7:05 pm |

        “To DO THAT, they have to have smarter people than you giving smarter arguments.”

        Nah. The skeptics just need the warmists to have more supporters like Steven Mosher.

      • From what I’ve read of Nic Lewis, his argument is that accepting the temp records as they are, ECS is lower than the Church of CAGW claims.

        My argument is that it is ridiculous to take the temp records as accurately reflecting “global average temperature” at all with the precision and accuracy claimed by the CAGW clergy, let alone as the basis for global decarbonization.

        Notice, Mosher never rebuts such an argument, not that I have ever seen. But he sure does like to give vent to his inner narcissist in obscuring his lack of an argument.

        Don’t forget that the Mosherism that gave birth to this sub-thread was “Technically It doesnt matter whether you adjust the recent past or the deep past”, in defense of which he offers an analogy equating the chimera of temp data to scale measurements.

        The farce is strong with this one.

      • Don Monfort

        Let’s not forget that Nic Lewis’s science is putting a dent in the bogus climate consensus and ranting Gary M is just an obscure anonymous blog character.

      • David Springer

        Nic Lewis isn’t doing jack diddly squat to change the debate. Neither is Mosher. Neither has the chops, either politically or as experts in the field. Granted, Lewis has far more credibility than Mosher. Curry is a different story altogether but still successfully countered by peers. It’s a political football. Science left the arena a long time ago.

      • Steven Mosher

        Judith

        “the issue with the buoy adjustment is this: the offset is 0.12C +/- 1.7C. This is not a clean adjustment. Adjusting a relatively clean dataset (buoys) with this very uncertain adjustment seems a strong way to muck up your data set, beyond the original quality of the data”

        That is not a cogent argument.

        1. There is dataset A: ship data.
        2. There is dataset B: buoy data.

        The goal is to create dataset S.

        To create S, you take A and B and adjust either A or B in the creation of S.

        Dataset A isn’t mucked up.
        Dataset B isn’t mucked up.
        Neither is mucked up, regardless of how you process them.

        They still exist.

        Dataset S will be the same regardless of which you adjust.

        Nothing is mucked up.

      • Don Monfort

        The goal is to bust the pause. Read their press release.

      • Steven Mosher

        Don

        “Mosher doesn’t want to address the substantive criticisms of the Karl BS. Very disappointing, Steven.”

        That’s because there are none. That doesn’t mean the paper is correct.
        Everything I have seen is from people who haven’t worked with the source data and who have never produced a product themselves.
        When you find a good argument let me know.

        Personally, I will wait to see what the Hadley guys say, and see how it compares with some of the work we started. Then, after patient study, I might tell you what I think the weakest points are. Don’t hold your breath, because that would take well over a year.

        I can tell you that deciding whether to adjust ships to buoys or buoys to ships will not top the list of good arguments.

        The really sad thing is that we could do a much better job if pirates didn’t exist.

        That’s my test. If a commenter doesn’t understand the problem that pirates have caused in metadata then their comments are not worth reading.

        First one on the thread to answer the pirate question wins. (Kennedy can’t play and neither can Zeke.)

      • Don Monfort

        You seem to be intent on marginalizing yourself, Steven. I won’t hold my breath hoping for change in your tactics or attitude. I think you are suffering from the Stockholm Syndrome.

      • “If a commenter doesnt understand the problem that pirates have caused in metadata then their comments are not worth reading.”

        It’s a trick question. We don’t need no stinkin’ metadata. We can interpolate, extrapolate and krig all the data we need for areas affected by piracy – from tree rings in Nova Scotia. That’s the magic of ‘climate science.’

      • Don Monfort

        Does adjusting the ship data instead of the buoy data bust the pause? How about adjusting neither? Where is the rule written that says you have to adjust one or the other? Why haven’t the other fabricators of SST products made the same adjustment? Why should anybody believe that the choices made are not more politics than science?

      • Mike Flynn

        Steven Mosher,

        You wrote –

        “The goal is to create data set S;”

        One might ask why, but I suspect that the aim of creating data set S is to create a data set that supports a preconceived notion. The notion is, of course, the GHE.

        Unfortunately, you apparently do not know whether the original datasets are correct or not. This is a situation where two possible wrongs may not make a right.

        About as silly as running 100 computer simulations giving different answers, which ensures at least 99% are wrong, and averaging or adjusting them, claiming that the resulting answer is somehow useful.

        You may create as many contrived data sets as you wish. They will not change tomorrow’s temperature at all. Or anything else, for that matter.

        Possibly, with less effort, one might devise an experiment showing the ability of CO2 to warm an object. I would certainly change my mind about the supposed GHE if it could be demonstrated.

        Just a thought.

        From Steven Mosher’s “I am making a math point …” to Don Monfort’s “The goal is to bust the pause. Read their press release.” (and all in between):

        IMO, the adjustments to the SST data may well be legitimate and correct. The explanations seem not implausible to me. The value of the data, the methods and uncertainty, could be a subject of debate. It is an interesting subject.

        However, the Spin, Hype and Hoopla attending the release of this paper certainly gives the appearance of politically motivated science. For this reason, the scientific value of their work becomes irrelevant.

        Why did they do that?

      • +1 The narrative of the paper and the hype that went with it was IMO its main undoing, because the actual data underlying the paper seems to be on the weak side with large error bounds.

      • Don Monfort

        Ken:”Why did they do that?”

        Because they wanted to bust the pause. They started out with a need and came up with a “plausible” story to do it. They can’t be proven wrong, so why not go for it. It passes pal review, no problem. Standard consensus BS climate science using their well-worn tactic of employing deliberate confirmation bias. Got to save the planet, by any means necessary.

      • I greatly appreciate John Kennedy’s response to questions about the buoy adjustments. However, I remain unclear as to the impact of the paper’s adjustments on recent trends.

        Steven Mosher suggests the impact is a reduced trend since 1880. Yet my reading of the chart in the supplemental information (see #comment-708969, above) is that the trend is actually increased, particularly in recent years.

        Am I reading the chart incorrectly?

      • Because they wanted to bust the pause. They started out with a need and came up with a “plausible” story to do it.

        Don, what is the difference between explaining why we are seeing a “pause” (science) and getting rid of the pause? In this case the explanation might be that the apparent pause was an artifact of the SST measurements.

      • Don Monfort

        That’s just some claptrap, joey. That’s a question for the IPCC and the gaggle of well-funded SST data product fabricators who didn’t spot your alleged artifact. The IPCC recently reviewed the science and admitted to the pause. The researchers in the field were aware of the differences between the relatively good buoy data and the pretty sketchy ship data. Nobody thought that .12 had to be added to the freaking buoys, until these clowns got the idea to create a pause busting artifact. Explain why adding .12 to the buoys is a good idea now, joey?

      • Explain why adding .12 to the buoys is a good idea now, joey?

        Honestly, I don’t know, Don, I assume that if there is some problem with what they have done, someone will demonstrate why they have erred. I am not so sure why you are so certain about their results. You obviously don’t have much of a background in climate science.

      • And, Don, you didn’t answer my question. I would appreciate it, if you would.

      • Nobody thought that .12 had to be added to the freaking buoys

        Did you know the point of science is to discover things that people didn’t know before? If it was already common knowledge then it probably wouldn’t be published.

      • Don Monfort

        I don’t have anything else to discuss with you, joey. Hey, maybe I am just annoyed that they are spending a lot of public money to deploy buoy systems that collect pretty good data and then some public employee clowns come along and decide to make the buoy data match the crappy ships passing in the night data, so they can erase the pause. Just look at their press release. It ain’t science. End of story.

      • http://www.thecourier.com.au/story/930754/sos-to-naval-ships-as-pirates-bother-buoys/
        Interesting comments Mosher. I get the idea there isn’t a lot of traction available when this kind of issue comes up, dealing with adjustments.

      • End of story.

        No, it’s not over. If there is fraud here, someone will speak out and prove it. I don’t believe climate scientists or NOAA or anyone else are in on some conspiracy related to climate science which would prevent them from letting something that significant go unchallenged. If there is no conspiracy, things should progress like they always have in any science-related area.

      • A huge problem with this bias calculation goes back to the stakes and the lack of trust on both sides.

        As to trust, it’s hard to trust parties who claim that science is settled (on almost any complex subject area, much less climate) and that anyone who claims otherwise is a flat-earther or corrupted denier (see various recent public statements from the President and the White House staff). It is also hard to trust when the ultimate stakes include the sacrifice of a wide swath of freedoms and huge quantities of wealth for all posterity.

        With that in mind, and given the importance of the bias calculation, I have some concerns with a few omissions in K15 when the process is described, most pointedly this one:

        “A database of nearly coincident ship and buoy observations for the period 1998-2007 was created in which ship-buoy pairs were selected that lay within 50km of one another and on the same day. To avoid complications from diurnal heating, only observations taken close to local dawn were used.”

        Note that the authors did not say that ALL coincidental observations taken near dawn were used. Is that true? How was the database generated,… the actual queries? Is there original data for ALL coincidental observations, ALL taken near dawn? The final data set which was used? Could a third party replicate the culling process? Remember, taxpayers paid for the information and will have to make decisions based on the results, so unless claims of national security are made, taxpayers are entitled to an open book. Anything less and trust simply descends yet another notch.

  63. What do you do when data and models disagree?

    In proper science you revise the models. In climate science, you revise the data.

  64. The way I see it one can adjust all they want until the satellite era commenced in 1979.

  65. This is more of their BS agenda manipulation of data.

    I have pointed this out so many times in the past when I have said repeatedly that if the data does not conform to AGW theory it is either IGNORED, WRONG , or MANIPULATED.

    This is just another attempt and will not be the last attempt to try to keep this soon to be obsolete absurdity alive, that being AGW theory.

  66. The Karl et al paper only brings the temperature record into a consistent position with the recent observed acceleration in sea level rise.

    • Oh my gawd, how could they do such a thing?

    • izen, “The Karl et al paper only brings the temperature record into a consistent position with the recent observed acceleration in sea level rise.”

      It actually is a good example of how “splicing” various generations of measurements can produce “unprecedented” results. You know that now that NASA has satellite models of hurricane wind speeds, all hurricanes are stronger than they were with previous measurements. Your drinking water has unprecedented levels of “toxins” compared to 30 years ago as well.

    • Don Monfort

      Why do we need thermometers? They just need to be placed out in the ocean, maintained and monitored, and then the actual observations have to be adjusted to bring them into a consistent position with sea level rise. We can just figure out the temps from the freaking sea level rise. Save a lot of money. You warmist fellas are really smart.

    • David Springer

      Izen – looks like a pretty frickin’ linear (no acceleration) slope to me

        What is pretty obvious is that the sea level rise preceded the CO2, so natural warmth is driving the sea level rise, and when the natural warming stops, the natural sea level rise will stop too.

        The MWP had a 6 inch higher sea level, we should expect the sea level to top out 6 inches higher. The sea level isn’t going to rise forever no matter how much the global warmers scream and stamp their feet.

      • David Springer

        Thanks PA for replacing that wickedpedia link with same one from another source.

        Interestingly, sea level keeps going up regardless of global average temperature, which has fallen twice for several decades at a stretch during the same time period.

        http://woodfortrees.org/plot/hadcrut4gl/mean:48

  67. “Science” like this is a few rungs below Phrenology, but it doesn’t matter. The MSM are already running with its press release.

    https://thepointman.wordpress.com/2013/07/05/the-pause/

    Pointman

  68. Isn’t Congress trying to shift climate science/funding from EPA to NOAA? The Karl study now has me confused as to which Agency is frying pan, and which is fire.

    Using the 0.10 significance test and other methods to move the goal line is something that EPA perfected long ago. See EPA’s botched analyses on secondhand smoking (environmental tobacco smoke) or residential radon exposure risks for examples. NOAA should have higher standards.

    • David Wojick

      Congress is trying to increase NOAA’s National Weather Service funding because the satellite fleet is decaying. EPA has nothing to do with this goal. Congress wants to cut back climate funding across the board, including not funding Karl’s proposed National Climate Service, but it is an elusive goal, because there is no climate budget per se. USGCRP used to publish a detailed breakdown of each agency’s climate funding but it stopped several years ago. From a budget perspective climate has gone underground.

      Well, the time has come to eliminate the EPA. The enforcement powers can be moved to other agencies or the states. The environment is clean enough and the EPA is a major tool of social progressive mischief.

      The EPA in social progressive’s hands is like giving dynamite to a small child. The solution is to dispose of the dynamite.

      Plus, if we start eradicating government agencies that misbehave – the ATFE would be the next on the list – there are many benefits, including:
      1. Smaller less oppressive government.
      2. Reduced taxes and regulatory cost.
      3. The object lesson of the eradication of some agencies may induce the surviving agencies to straighten up and fly right.

  69. Interesting plot from Ed Hawkins comparing new NCDC data set with HadCRUT4.3
    Ed comparing MOHC and NOAA data sets http://www.climate-lab-book.ac.uk/wp-content/uploads/noaa_had_comp1.png

  70. We’ve nothing to fear about global warming but the lack of it. Everything else is dogma.

    • Don Monfort

      Yeah, and Mosher doesn’t want to look at it.

        Shock of shocks, Roossy is good. Who could have predicted it? Time for more acting lessons. June anomaly poised to breach .90C.

      • Steven Mosher

        Huh.

        I’ve actually spent a bunch of time with the source data. Ross’ comments suggest he has not.

        At Berkeley, of course, there have been a couple of projects aimed at redoing the SST. The technique was pretty cool: basically exactly what McIntyre had suggested to me in one of our conversations.

        It’s a pretty cool approach, but I doubt it will go anywhere.

      • Don Monfort

        But you don’t make any freaking specific criticisms of Ross’s analysis other than the BS claim that he hasn’t read ICOADS. What has happened to you, Steven?

      • “I’ve actually spent a bunch of time with the source data. Ross’ comments suggest he has not.”

        What a brilliant rebuttal of McKitrick’s arguments. I particularly marvel at the point by point refutation of McKitrick’s list of weaknesses in historic data collection practices, and their impact on the accuracy of the data.

        “…the K15 themselves explain that their new assumptions—not new observations—are what introduced the warming trend at the end of their data set.”

        And that, in a nutshell, is what CAGW climate ‘science’ is all about.

        Your assumptions don’t create the results you need? Change them.

        The data doesn’t match your predictions? Adjust it.

        The data you need doesn’t exist? Manufacture it.

        And then demand control of the global energy economy because of your results.

      • Here, Don Don:

        Are the new K15 adjustments correct? Obviously it is not for me to say – this is something that needs to be debated by specialists in the field.

        http://wattsupwiththat.com/2015/06/04/a-first-look-at-possible-artifacts-of-data-biases-in-the-recent-global-surface-warming-hiatus-by-karl-et-al-science-4-june-2015/

        That about covers it.

        Let’s thank Ross for his concerns.

    • That post was an unusually well-tempered and non-polemic analysis for the climate-o-sphere. Too bad such non-polemics are the exception not the norm.

    • Moshe says Ross is wrong.

      What’s the likelihood of that? Slim to non-existent.

      McKitrick doesn’t say it is wrong to make the K15 correction. He goes halfway to demonstrating why it should be made with his first example table, but he misses having a column for the apparent temperature from his mix of observations, which would start near 4.0 with ships dominating and end near 3.0 as the buoy observations dominate later, despite the real temperature going from 3 to 4. In his example, a correction is clearly needed for the changing mix of observations. Why didn’t he say that? Instead he goes off on some tangent.
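      The changing-mix effect being debated here can be sketched in a few lines. The numbers below are invented for illustration (a deliberately exaggerated 1.0 degC ship bias, not the real ~0.12 degC offset) and are not the commenter's exact table:

```python
# Illustrative sketch: a fleet mix that shifts from warm-biased ships to
# unbiased buoys can mask a real warming trend if no correction is applied.

SHIP_BIAS = 1.0  # degC; exaggerated for clarity (real ship-buoy offset ~0.12)

def apparent_temp(true_temp, ship_fraction, ship_bias=SHIP_BIAS):
    """Blend of ship readings (true + bias) and buoy readings (true)."""
    return ship_fraction * (true_temp + ship_bias) + (1 - ship_fraction) * true_temp

early = apparent_temp(3.0, ship_fraction=1.0)  # real 3.0, all ships -> reads 4.0
late  = apparent_temp(4.0, ship_fraction=0.0)  # real 4.0, all buoys -> reads 4.0

print(early, late)  # 4.0 4.0: a real +1.0 degC warming shows up as no trend
```

Whether one then adjusts ships down or buoys up to remove the bias is the separate question argued at length above; the sketch only shows why leaving the mix uncorrected distorts the apparent trend.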

      • Don Monfort

        Here is a cogent comment from WUWT that puts the paper in the proper perspective, yimmy:

        “Jquip
        June 6, 2015 at 12:16 pm

        It needs to be repeated often: This paper is about establishing a new data product. That is, it is focused solely on the errors and inadequacies baked into the HadNMAT2 data product. And if this paper is held as legitimate, it does not speak to global warming as such: It is a refutation of all previous uses of HadNMAT2. Despite what the authors would like to claim, their paper cannot speak to overall global warming when it is solely and strictly a refutation of — and a creation of a new — single data product amongst many.

        Much like the ‘ensemble averages’ of GCMs used by the IPCC, we have an ensemble problem with the data products. They cannot, quite obviously, all be the most accurate. So we’re faced with either refuting all but a single data product or refuting any that are inconsistent with the experimentally demonstrable correctives employed in the Karl paper, or refuting the Karl paper — and the validity of peer review along with it.

        If this is unpalatable, and I suggest that it is, then the gold standard in Climate Science is to take an unweighted ensemble average of the data products to produce the data product. And that ensemble average, and all its ranges, is then the only valid input to apply to various GCM runs.

        And, of course, it remains that if they can’t get the ensemble average of data products to produce a trend that doesn’t straddle nought, then there is no manner in which to claim that there is any warming at all from within the standard practices — valid or invalid — of Climate Science.”

        Can you say why this new data product is any better than any of the others, yimmy? Start with UAH and RSS.

      • It is the newest product, and includes a correction that others may want to apply to theirs in the future. As Peterson asked Watts in their email exchange, if you knew there was this bias introduced by the changing observations, what would you do about it?

      • Well the main thing that I wouldn’t do is pretend that the error/uncertainty in the end result is small.

      • Don M, if you have something against correcting for known biases you need to say what it is. Karl is one of the originators of the TOBS correction, now widely applied and even accepted by McIntyre. Ocean measurements are more varied, but untangling them involves finding these corrections that are needed to make sense of the trends. Note that a long-standing warming correction of early 20th century ocean data related to buckets goes in the opposite direction and is accepted by everyone. We only get all this thrashing about at WUWT with ten new postings on it in the last couple of days because the correction went in the wrong direction for them and removed their beloved pause.

      • The bigger correction was related to ship bucket data that had not been corrected before, not the buoys. I don’t know how the UAH and RSS people calibrate their satellites against the surface, but perhaps they need to pay attention to this.

      • Don Monfort

        Schoolmarm is protecting teacher’s pet from having his little feelings hurt. How cute.

        Look for the other 97% consensus data product manufacturers to suddenly discover that if they add .12 to the buoys and do that other crap, they can erase the pause that is killing the cause. Another new pause busting data product mooned over by the warmista media every few weeks leading up to the big partee in Paree. I don’t think it is going to help them with the 7 billion folks who are not losing any sleep over AGW. The sound of the next big belly flop you hear, will be coming from Paree. Sorry, jimmee.

      • No, I’m trying to prevent this blog from becoming a cesspool of insulting comments, which makes a lot of people not want to read the comments or participate here.

      • David Springer

        Don’s point is you’re not doing it fairly. Do you think it’s a coincidence that both Don and I used the term “teacher’s pet” or maybe instead we see something you refuse to acknowledge? Teacher’s pets, actually. There’s more than one.

        JC comment: People who participate here regularly in a civil manner do not deserve gratuitous insults. Jim D is one of those people. Try making your criticisms substantive in terms of arguments, and lose the pejoratives.

      • David Springer

        [x] Refuses to acknowledge bias
        [ ] Acknowledges bias

      • Jim D,

        The Earth has a powerful radiator (the Sun), about 150 million km. away.

        After four and a half billion years of heat trapping, the Earth has somehow managed to cool. Not all that much, really.

        If you imagine the Earth to be the size of an apple, the solid crust is about as thick as the skin. The rest varies from merely viscous red hot rock, to dazzling white heat at the centre.

        So the majority of the heat – better than 99.9% – is hidden, beneath the crust.

        The heat from the Sun is unable to stop night time cooling. Winter is colder than Summer, some places freeze during the Winter, some are quite hot.

        Antarctica was once ice free. Now it is not. Much of Europe was once ice bound. Now it is not. Pompeii had a severe attack of local warming. Hot gases galore.

        Again. Over four and a half billion years, plenty of GHGs, no GHE at all. Just Nature. Sometimes more energy here, sometimes more there. Ceaseless unpredictable magma, hot plumes, volcanism on land, and under the sea. Clouds, atmospheric chaos. Changing land use. Land rising, falling, moving sideways.

        Do we really need to invent a GHE? Are we all so dumb that we can’t understand what we see before us?

        I’m pretty dumb, but I still prefer facts to figments of the imagination. What about you?

      • Don M, if you are getting stuff deleted, it is because you project more anger than sense. You need to do something about that. Perhaps McIntyre will look into this and agree, as he did with TOBS, and then the skeptics (mostly) will just move on. I don’t think McSteve is wedded to the pause.

      • Don Monfort

        Remove the mendacious comments and those that just insult the readers’ intelligence and you got something to brag about, Judith. Of course, you wouldn’t have a lot of comments.

        To whom it may concern:

        “There are arguments both ways. Adjusting the ship data brings the biased ships into line with the unbiased buoys. On the other hand adjusting the buoy data brings the buoys into line with the ship data, which constitute the majority of the historical record. Either way, when these are presented as anomalies relative to the 1961-1990 base period, you have to take into account the fact that data from the climatology period was mostly ship data.”

        We know from the many “adjustments” with accompanying uncertainties that have been piled onto the sketchy ship data, and from the discrepancy with the more reliable buoy data, that the ship data is not good. Why should we have any confidence that doing these machinations to line up the better buoy data with the funky ship data is sciency and not just another effort to erase the pause that is killing the cause? Adding .12C to the buoys in the pause era is BS. If you want to erase the pause, use the freaking more reliable measurements of SST, or STFU.
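The arithmetic behind the quoted Kennedy passage is easy to check: adding 0.12 to the buoys or subtracting 0.12 from the ships produces blended series that differ only by a constant, so the trend is the same either way; the choice matters only for how anomalies against a ship-era climatology are framed. A minimal sketch with synthetic numbers (only the 0.12 figure comes from the thread; everything else is invented):

```python
# Synthetic check: whether the 0.12 C ship-buoy offset is added to the
# buoys or subtracted from the ships, the blended *trend* is unchanged;
# only the absolute level (and hence the anomaly baseline) shifts.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2015)
true_sst = 15.0 + 0.01 * (years - 2000)                      # made-up warming signal
buoys = true_sst + rng.normal(0.0, 0.02, years.size)
ships = true_sst + 0.12 + rng.normal(0.0, 0.05, years.size)  # warm-biased ships

def trend(series):
    """Least-squares slope, degC per year."""
    return np.polyfit(years, series, 1)[0]

blend_buoys_up = 0.5 * (ships + (buoys + 0.12))      # raise buoys to ship level
blend_ships_down = 0.5 * ((ships - 0.12) + buoys)    # lower ships to buoy level

# The two blends differ by exactly 0.12 everywhere, so the slopes match.
print(trend(blend_buoys_up) - trend(blend_ships_down))   # ~0
```

Which baseline is "right" is a separate question, and it is the one the thread is arguing about; the sketch only shows that the trend itself is insensitive to the direction of the offset.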

      • As I mentioned the ship data were adjusted for the bucket data a long time ago, and that had the effect of warming the first half of the 20th century. If you want to remove adjustments, go back to the raw bucket data.

      • Mike F, the main difference between now and 50 million years ago when nowhere was glaciated is that CO2 levels were double back then. Go figure.

      • Jim D,

        It’s amazing!

        A cylinder of highly compressed CO2 at 20 C is exactly the same temperature as a cylinder of highly compressed O2 at 20 C.

        Even weirder, an empty CO2 cylinder at 20 C is also at exactly the same temperature! How completely amazing is that?

      • What would be even more amazing is if you could look at them in infra-red wavelengths where they would be completely different, with the O2 being completely transparent, and the CO2 not. It’s just physics.

      • David Springer

        ‘Fraid not Jim D. All you see in infrared is the cylinders containing the gas, which at equilibrium will be the same temperature as the contained gas. Don’t forget that good absorbers are good emitters (Kirchhoff’s law). What happens is that if you shine an infrared beam through the CO2 gas it will be scattered equally in all directions, whereas it won’t scatter going through the O2.

        The problem with that is that no one can demonstrate that infrared backscattered by CO2 and illuminating the ocean surface will transfer any heat into the water, because it is completely absorbed in the top several microns of the skin layer, driving evaporation; the energy is transported insensibly to altitude, where it is released through condensation and precipitation. So it warms the atmosphere higher up but not much at the surface. It also cools the stratosphere, due to CO2 being as strong an emitter as it is an absorber.

      • Don Monfort

        Good point, yimmy. The amazingly precise bucket adjustments. That took care of the mess with the alleged measurements that were haphazardly done on boats of all shapes, sizes and colors by various untrained and disinterested peoples of the world plying the sea lanes.

        Everybody knows the shipboard measurements ain’t good, yimmy. This allegedly new data product doesn’t change that by adjusting the freaking buoy data to line it up with the crappy ship data.

        Those SkS Kidz, Cowpan and Whey, got a lot of warmista media attention a couple years back with their attempted pause buster paper. They turned to UAH to get data to fill in the Arctic. Here is the comment of one of the maroons on the SkS echo chamber when they were celebrating the paper by the SkS Kidz that put the final nail in the coffin of the pause:

        “We had been assuming that the reason measured estimates of deep ocean warming didn’t quite cover the ‘missing heat’ was because we still weren’t finding all of it. If this result holds up then the last of the ‘missing heat’ may finally have been found.

        The fact that they used UAH satellite data to ‘bridge the gap’… that ‘popping’ sound you hear is Roy Spencer’s head exploding.”

        Hey yimmy, why don’t we use UAH to infill data that those ships following the sea lanes miss out on? Better yet, toss the freaking no good ship data and go with the buoys and satellites.

        I was refreshing my memory on the Cowplanet and Whey story and ran across this thread on CA. Robbie the grad student made the mistake of participating and he took a beating. Por ejemplo:

        http://climateaudit.org/2013/11/18/cotwan-and-way-2013/#comment-449976

      • Don Monfort

        That Permalink doesn’t work so well. It’s the comment immediately below Mosher’s:

        ilmis
        Posted Nov 19, 2013 at 12:08 PM

      • Mike Flynn

        Jim D,

        It doesn’t matter how you look at them, does it? Infrared, ultraviolet, visible light – they stubbornly remain the same temperature.

        It’s just physics – no GHE to be seen. Is that amazing, or what?

      • > Well the main thing that I wouldn’t do is pretend that the error/uncertainty in the end result is small.

        Of course not. You’d rather let yourself unconvinced and portray Mr. T like he was seven feet tall.

        Corrections usually entail something about errors.

      • > Robbie the grad student made the mistake of participating and he took a beating. Por ejemplo:

        That beating starts with “maybe OT“.

        That doxxing was quite thin, Don Don.

      • Don Monfort

        You are getting more and more superfluous, willy.

      • Robert Way does not seem to have taken much beating, Don Don. The Auditor walked on eggs in the comment thread, including the technical comments. URL manipulation can lead to hacking, BTW:

        http://www.theinquirer.net/inquirer/news/2079431/citibank-hacked-altering-urls

        I reject your characterization of that thread.

      • Don Monfort

        Then he did a lot of whining for such a small beating. But you are entitled to your warped viewpoint, willy.

      • The Auditor has recently declared moving up in the data food chain. Ross has been strangely polite. You just can’t buy a clue, Don Don. Extortion doesn’t work very well either. Your protection is less threatening when your leash gets pulled that often.

        Please, do continue.

      • Don Monfort

        That’s rather cryptic, willy, but I am pretty sure you are offending my poor little self. I hope Judith protects me from your scurrilous attacks in the same way she takes care of little jimmy.

  71. They added 0.12 °C to readings collected by buoys, ostensibly to make them comparable to readings collected by ships. As the authors note, buoy readings represent a rising fraction of observations over recent decades, so this boosts the apparent warming trend.

    My reply

    Really? Based on what?
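For what it's worth, the mechanism the excerpt describes can be illustrated numerically: if buoys read about 0.12 °C colder than ships, and the buoy share of observations rises over time, the raw blend acquires a spurious cooling component even when the underlying temperature is flat, and the offset removes it. A sketch with made-up numbers (only the 0.12 figure and the rising buoy share come from the paper's description):

```python
# Illustrative sketch of the composition effect described in the thread:
# buoys read ~0.12 C colder than ships, and the buoy share of observations
# grows over time, so the raw blend drifts cold even when true SST is flat.
import numpy as np

years = np.arange(1998, 2015)
true_sst = np.full(years.size, 15.0)            # flat "true" temperature
ship_bias = 0.12                                 # ships read warm relative to buoys
buoy_frac = np.linspace(0.1, 0.8, years.size)    # buoy share rises (made-up numbers)

ships = true_sst + ship_bias
buoys = true_sst
raw_blend = buoy_frac * buoys + (1 - buoy_frac) * ships
adjusted = buoy_frac * (buoys + ship_bias) + (1 - buoy_frac) * ships

slope_raw = np.polyfit(years, raw_blend, 1)[0]
slope_adj = np.polyfit(years, adjusted, 1)[0]
print(f"raw slope: {slope_raw:+.4f} C/yr, adjusted slope: {slope_adj:+.4f} C/yr")
```

Whether 0.12 is the right number, and whether the ships rather than the buoys should carry the adjustment, is exactly what the thread disputes; the sketch only shows why a changing instrument mix can alias into a trend.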

  72. In other words, all of the data from various sources that show otherwise are wrong, while this shame of a study is correct.

    This is not going to stand and satellite data going forward is going to keep affirming that this study is a shame and needs to be dealt with and shown for what it is, which is just more agenda driven manipulation of the data which does not support AGW theory.

    What more could be said.

  73. Looks to me like Karl et al have produced some confirmation bias crapola. I am not particularly surprised since Menne is part of the team. When Menne et al. violated Watts surface station dot org intellectual property rights it was a pretty solid indication of his character/competence.

  74. May anomaly from RSS: .31, which is the correct non-agenda-driven data.

  75. How the shame has come about: read below.

    But few scientists have believed a hiatus meant climate change had stopped. Trenberth says the slower warming is, in part, caused by unusual currents in the Pacific Ocean. Others have cited volcanic activity.

    Now Karl’s team, which is directly responsible for taking the Earth’s temperature, says a technological shift in the way the measurements are taken has also obscured the temperature’s climb.

    Here’s why: The single number — average global temperature — comes from tens of thousands of independent temperature readings. And, in recent decades, the technology for getting those readings has gradually shifted.

    On land those measurements are made by weather stations; on the sea, the job has generally been done by commercial and military ships for decades. But starting in the 1980s, governments also began dropping buoys into the ocean to do independent measurements.

    Karl and his colleagues decided to look at stretches of water where ships pass very near buoys, in order to compare the two temperatures. And they made a surprising discovery.

    “The buoys actually read colder than the ships,” Karl says.

  76. Sham, not shame, although they both apply.

  77. Humanity has been egregiously deceived.

    NOAA doubles down on their mistake.

    Atmospheric CO2 has been identified as a possible climate change forcing. Forcings, according to the ‘consensus’ and the IPCC, have units of Joules/sec/m^2. Energy, in units Joules/m^2, divided by the effective thermal capacitance (Joules/K/m^2) equals average global temperature (AGT) change (K). Thus (in consistent units) the time-integral of the atmospheric CO2 level (or some function thereof) times a scale factor equals the AGT change. When this is applied to multiple corroborated paleo (as far back as 542 million years ago) estimates of CO2 and average global temperature, the only thing that consistently works is if the effect of CO2 is negligible and something else is causing the temperature change.

    CO2 has no influence on climate, ocean cycles and solar cycle 24 are both on down-slope. The only way to make it appear that it is still warming is to change the temperature numbers.

    See the proof that CO2 has no effect on climate and discover what does cause climate change (explains 97+% of average global temperatures since before 1900) at http://agwunveiled.blogspot.com
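For reference, the dimensional bookkeeping this comment invokes is the standard zero-dimensional energy balance, written out here only to make the units argument explicit:

```latex
C_{\mathrm{eff}}\,\frac{dT}{dt} = F(t)
\qquad\Longrightarrow\qquad
\Delta T = \frac{1}{C_{\mathrm{eff}}}\int_{t_0}^{t_1} F(t)\,dt ,
```

with $F$ in $\mathrm{J\,s^{-1}\,m^{-2}}$ (i.e. $\mathrm{W\,m^{-2}}$) and $C_{\mathrm{eff}}$ in $\mathrm{J\,K^{-1}\,m^{-2}}$. Note that a fuller balance carries a radiative-response term $-\lambda\,\Delta T$ on the right-hand side, which bounds the warming; the pure time-integral form above omits it.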

  78. Delingpole’s take on this latest dose of CAGW PR disguised as science.

    “The thrust of Karl’s paper is this: that far from staying flat since 1998, global temperatures have carried on rising. It’s just that scientists haven’t noticed before because they’ve been looking in the wrong place – on land, rather than in the sea where all the real heat action is happening.

    And how did Karl et al notice what everyone else has missed until now? Well, by using a specialised scientific technique called ‘getting your excuses in early before the Paris climate conference in December.’”

    http://www.breitbart.com/big-government/2015/06/05/hide-the-hiatus-how-the-climate-alarmists-eliminated-the-inconvenient-pause-in-the-global-warming/

    The hiatus/pause is the scientific gift that keeps on giving.

    • It will be interesting to watch the specialists in the field sort this question out in the coming months. ~Ross McKitrick

      It’d be sort of funny if we weren’t all paying through the nose — putting the economy on hiatus and pausing job growth for a generation — as we all wait for the specialists to work things out.

    • Ah Paris! …

      Non, nous regrettons rien …
      data tricks ‘n cherry picks,
      economy loss ‘n economies’ loss,
      environmentalist gloss …non,
      nous-regrettons – rien.

  79. ————————————-in global warming.
    ——————————-e—————————
    ——————————s—————————-
    —————————–u—————————–
    ————————a———————————-
    ——————-p—————————————
    Here lies the —————————————–

    The AlMightO AMO:

    The acceleration in SLR:

    OHC:

    I figure 98.5% by August…

    • Well, we will see in a couple of years after the La Nina comes through and the dust settles.

      It has been almost 2 decades since 1998. Some variation is expected.

      Most of the interglacial has been warmer than current temperatures. Claiming current temperatures are unusual or extreme is an outright lie.

      • No, it’s stabilized now, and poised to start going back up. But it’s heartening to see at least some skeptics now realize it’s impossible that warming stopped 18 years ago, as 2005, the middle of the almost completely dead pause that took place after 2006, was the zenith of modern warming, not 1998.

        The last Triple Crown… 37 years ago.

      • Well… we’ll see.

        The CO2 warming seems to be about 0.24°C. The total warming since 1900 has been 3-4 times as much, depending on whether you include CGAGW. It might get an equal amount warmer due to CO2 – but it really depends on what the rest of the climate is doing.

        No one has completely modeled the pre-1940 warming correctly or seems to completely understand it.

        After a century of warming it is going to take another century for the temperatures of the oceans to even out.

        Thirty years or more from now we will know if CO2 is a major climatic influence. We only have reasonable data for about 12 years and somewhat reasonable data back to 1979. We really need to collect data for a full 65 year cycle.

        You might be right about warming short term. I was surprised that warming stopped and that does seem to indicate low CO2 forcing. Don’t know if the pause is possible or impossible. The warming had a 20 year run, so until 2020 it is too early to weigh in on the pause. The warmers dance around when it gets a little warmer, the deniers dance around when it gets a little cooler. I left my dancing shoes in the closet.

      • David Springer

        JCH whistling past the graveyard.

        The pause killed the cause. Get used to it.

    • It really doesn’t matter what the next La Nina does unless it is an extremely powerful one. Like the one in 2011, which some people say was the 2nd most powerful in the record. If there are series of very powerful la Nina events, which is highly unlikely, there could be cooling.

      Last La Nina (blue) and subsequent period of negative ONI

      None of the above should have been that friendly to warming.

      If there are normal La Nina events, it’s going to continue warming at around .165C to .245C per decade (30-year trends) for the rest of the century. Just as it has been.
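The ".165C to .245C per decade (30-year trends)" figures quoted above are ordinary least-squares slopes over 360-month windows, scaled to decades. A minimal sketch on synthetic monthly anomalies (the built-in slope of 0.18 °C/decade is illustrative, not a claim about any dataset):

```python
# Computing a "30-year trend" in C/decade by least squares on monthly data.
import numpy as np

months = np.arange(360)                              # 30 years of monthly anomalies
rng = np.random.default_rng(1)
anoms = 0.0015 * months + rng.normal(0, 0.1, 360)    # ~0.18 C/decade plus noise

slope_per_month = np.polyfit(months, anoms, 1)[0]
trend_per_decade = slope_per_month * 120             # 120 months per decade
print(f"trend: {trend_per_decade:.3f} C/decade")
```

On real data the noise is autocorrelated, so the uncertainty on such a slope is wider than the naive OLS standard error would suggest.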

  80. David Wojick

    McKitrick correctly concludes this: “….small changes in assumptions about how to deal with uncertainties in the data can have a large effect on the final results.” To me this implies that any specific results are largely uncertain, yet these statistical model outputs are repeatedly used as though they were precision measurements. Hottest year on record and all that jazz. Maybe the Karl piece will finally wake people up. There is no record, just a lot of poor data.

    • David Wojick

      Plus a lot of questionable assumptions, of course.

    • Well… Yeah.

      That seems to be an accurate description of the problem.

      Do you have any solutions to suggest, or do you see it as an unsolvable problem?

      • David Wojick

        The solution is to live with the problem. The only way to know what is happening is to build the proper systems to measure it. Wanting to know what the average global temperature is requires a lot, although the satellites get us a lot closer than these goofy statistical models. What it once was is unknowable. Many facts about the past are simply unknowable, also about the present for that matter. Science is not omniscient.

      • Steven Mosher

        Satellites don’t measure temperature. Period.
        They record brightness at the sensor.
        From that data you have to apply models, regressions, first guesses, and then you get an estimate of what the temperature is. If you had ever cared to look at validation you would be stunned.
        For example, RSS adjusts temperature data using a GCM. UAH has different approaches, but they all involve adjusting data with models.
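A concrete, simplified version of the point above: the satellite-native quantity is radiance, and temperature comes out of an inversion model. In the microwave region the Rayleigh-Jeans approximation makes the simplest such inversion linear. This is a toy sketch, not the actual MSU/AMSU retrieval, which involves weighting functions, inter-satellite calibration and diurnal-drift adjustments:

```python
# Toy illustration: the sensor records radiance (brightness); a model is
# needed to turn it into a temperature. In the Rayleigh-Jeans limit the
# inversion is linear: T_b = c^2 * I / (2 * k * nu^2).
C = 2.998e8        # speed of light, m/s
K = 1.381e-23      # Boltzmann constant, J/K

def rayleigh_jeans_radiance(temp_k, freq_hz):
    """Forward model: blackbody radiance (W m^-2 sr^-1 Hz^-1), R-J limit."""
    return 2 * K * temp_k * freq_hz**2 / C**2

def brightness_temperature(radiance, freq_hz):
    """Invert the Rayleigh-Jeans relation to get T_b in kelvin."""
    return radiance * C**2 / (2 * K * freq_hz**2)

# Round trip at an MSU-like frequency (~57 GHz oxygen band):
nu = 57e9
I = rayleigh_jeans_radiance(250.0, nu)   # radiance a 250 K scene would emit
print(brightness_temperature(I, nu))     # recovers ~250 K
```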

      • Worse than that; a satellite measures voltage.

      • David Springer

        Even worse than that. Satellites convert the voltages to radio wave modulation and earth-based receivers measure the radio waves.

        This is an exercise in absurdity. Stop it. These instruments are physical systems, not abstract models of physical systems. I don’t expect Mosher to know better; he’s not schooled in any physical science, engineering, or even math, so the line between the physical and the abstract is blurred beyond detection for him. He’s an English major and his speciality is semantic pedantry. He’s a rock star in that.

      • That’s shocking.

      • David Wojick

        All true, Mosher, but at least UAH is sensing the atmosphere, which the surface models are manifestly not doing, much less the SST buckets et al.

        King Karl says BEST is wrong. Got a reply?

      • David Wojick

        I am reminded of the NAS/NRC workfest back in the late 1990s before the giant El Nino hit. UAH said there was no warming but the surface models said there was a lot (AGW is based on the latter). The workshop conclusion was that the uncertainties in both approaches were so great that they did not actually disagree. I read this to mean that we did not know if it was warming or not. Apparently we still do not, in which case science has nothing to explain.

      • Technically a satellite measures accumulated charge (that’s why it is called a charge-coupled device).

        But a weather station measures resistance or volume of fluid (manual stations), which isn’t a great improvement.

        To this point no one actually measures the temperature…

      • All thermocouples and digital thermometers measure voltage. Most all digital equipment does too. Only mercury thermometers with Mark 4 eyeballs measure temps through calibrated pieces of glass.
        Scott

      • Mosher – if sat data is that bad, why does it agree with radiosonde data so well?

        Also, PTRs in modern weather stations measure voltage and current as a function of sensor resistance. Each has to be calibrated independently in order to determine a set of calibration coefficients applicable only to that individual sensor.

      • PTR should have been PRT. Slysdexia. (posted in wrong spot the first time.)
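The PRT calibration step described above can be sketched with the standard Callendar-Van Dusen relation. The coefficients below are the generic IEC 60751 Pt100 values; a fielded sensor carries its own fitted coefficients, which is the per-sensor calibration the comment refers to:

```python
# Sketch of the resistance-to-temperature step for a platinum resistance
# thermometer (PRT), using Callendar-Van Dusen for T >= 0 C:
#   R(T) = R0 * (1 + A*T + B*T^2)
# Standard IEC 60751 Pt100 coefficients (a real sensor has its own).
R0 = 100.0        # ohms at 0 C for a Pt100
A = 3.9083e-3
B = -5.775e-7

def resistance(temp_c):
    """Forward model: PRT resistance in ohms (valid 0-850 C)."""
    return R0 * (1 + A * temp_c + B * temp_c**2)

def temperature(r_ohms):
    """Invert the quadratic for temperature in C (0-850 C branch)."""
    # Solve B*T^2 + A*T + (1 - r/R0) = 0 for the physical root.
    disc = A**2 - 4 * B * (1 - r_ohms / R0)
    return (-A + disc**0.5) / (2 * B)

print(temperature(resistance(25.0)))   # round trip recovers ~25.0
```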

      • davideisenstadt

        all known instruments that attempt to measure temperature actually measure some manifestation of temperature rather than temperature itself…like the volume of a known quantity of mercury…or the brightness (in whatever wavelength of EMF one wants to measure), so mosh’s obsession over just what it is that satellites measure seems misplaced.
        Of course, he comes at this enterprise from a background in English studies…

      • > all known instruments that attempt to measure temperature actually measure some manifestation of temperature rather than temperature itself…

        All known instruments don’t actually attempt anything. The attempts are actually ours. Actually, temperature itself may not even exist.

        Another engineer with parsomatics problems.

    • Dr Robert E Stevenson, cited in a comment by dennisambler
      @ Jo Nova 4th june ’15, 2.12am:

      “Sources of 20th Century Ocean Temperatures
      I learned to deploy Nansen water bottles and reversing thermometers for deep-sea sampling in 1949. I spent the rest of the subsequent decade seagoing, for the most part. I can’t remember how many bottle casts I made, or how many bathythermographs I deployed. There had to be thousands in the waters off coastal California. Other students and post-docs were doing the same farther offshore in the eastern Pacific, from the E.W. Scripps. In the westernmost Atlantic, a similar cadre worked from the Atlantis.

      In the 1960s, more ships were out at sea: from Fisheries Laboratories, U.S. Coast and Geodetic Survey (now NOAA), and research institutions at Scripps (La Jolla, Calif.), Woods Hole (Massachusetts), Miami, and Texas A&M (in the Gulf of Mexico). The British sailed the new Discovery, the Germans the new Meteor, and there were small ships sailing from Denmark, Japan, and France. Many cruises were dedicated to the geophysics of the sea floor, where deep-ocean casts for water and temperatures were few and far between.

      Surface water samples were taken routinely, however, with buckets from the deck and the ship’s engine-water intake valve. Most of the thermometers were calibrated into 1/4-degrees Fahrenheit. They came from the U.S. Navy. Galvanized iron buckets were preferred, mainly because they lasted longer than the wood and canvas. But, they had the disadvantage of cooling quickly in the winds, so that the temperature readings needed to be taken quickly. I would guess that any bucket-temperature measurement that was closer to the actual temperature by better than 0.5° was an accident, or a good guess. But then, no one ever knew whether or not it was good or bad. Everyone always considered whatever reading was made to be precise, and they still do today. The archived data used by Levitus, and a plethora of other oceanographers, were taken by me, and a whole cadre of students, post-docs, and seagoing technicians around the world. Those of us who obtained the data, are not going to be snowed by the claims of the great precision of “historical data found stored in some musty archives.”

      …Yet buoys’ data is adjusted upwards to align with bucket, er, data?
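There is a statistical point buried in Stevenson's recollection worth making explicit: averaging many readings shrinks random scatter (his ±0.5°) as 1/sqrt(N), but a bias shared across readings, such as wind cooling of a galvanized bucket, survives any amount of averaging. That is why the arguments are about bias corrections rather than sample size. A sketch with invented numbers:

```python
# Random scatter averages down as 1/sqrt(N); a shared systematic bias does
# not. Values here are illustrative, not from any archive.
import numpy as np

rng = np.random.default_rng(42)
true_sst = 15.0
n = 10_000
readings = true_sst - 0.3 + rng.normal(0, 0.5, n)   # -0.3 C bias, 0.5 C scatter

mean = readings.mean()
sem = readings.std(ddof=1) / np.sqrt(n)             # standard error of the mean
print(f"mean = {mean:.3f} C, standard error = {sem:.4f} C")
# The standard error is tiny (~0.005 C), yet the mean still sits ~0.3 C
# below the true 15.0 C: averaging cannot remove the bias.
```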

      • Beth

        You may remember my article here

        https://judithcurry.com/2011/06/27/unknown-and-uncertain-sea-surface-temperatures/

        The idea that anyone can believe that global SST’s are known to fractions of a degree back to 1860 is completely baffling to me.

        They are not precise measurements with any irrefutable wider value to science until probably the 1960’s or even the Argo era. That is not to say that some scientific expeditions measuring SST’s don’t have merit, such as the Challenger and those using reversing thermometers with a rigid methodology of sampling, but other than that there are just too many variables in the mix to have any confidence in the figures. I am surprised John Kennedy, a fine scientist, has not popped up to defend them.

        tonyb

  81. So presumably neither mosh nor Judith would use the satellite record for sea level change? That would be wise, as it’s a terrible and inaccurate system.

    Tonyb

  82. The CATO post puts this tempest in a teapot into an interesting perspective:

    “And finally, even presuming all the adjustments applied by the authors ultimately prove to be accurate, the temperature trend reported during the “hiatus” period (1998-2014), remains significantly below (using Karl et al.’s measure of significance) the mean trend projected by the collection of climate models used in the most recent report from the United Nation’s Intergovernmental Panel on Climate Change (IPCC). “

    • David Springer

      Yup. 0.1C/decade is not significant. It’s noise.

    • What a laugher. The 30-year trend is .165C per decade. Wait until the El Nino is in the books. It’s going to be over .17C by the end of 2015. If the PDO is in a positive phase, it’ll be well over .2C per decade by 2020.

      The pause is paws up. Just like I said it would be.

  83. I meant to put this link in the previous comment:

  84. The future is certain. It’s the past that keeps changing.
    – Russian Parable

  85. THE ‘NEW’ LOOK:

    http://cdn.arstechnica.net/wp-content/uploads/2015/06/noaa-karl-etal-640×486.jpg

  86. This new paper raises once again the issue of data adjustments, which turn out rather to be data manipulation and corruption. But besides the questionable validity of the new data adjustments, another question should also be addressed regarding the constant “fluctuations” of those adjustments.

    1/ With HADCRUT4 data, the Hadley Centre has introduced new adjustments compared to previous HADCRUT3 data series :
    http://www.woodfortrees.org/plot/hadcrut3vgl/from:1970/mean:60/offset:0.025/plot/hadcrut4gl/from:1970/mean:60

    Curiously, corrections are always in the warming direction… But why?
    Has anyone assessed the validity of HADCRUT4 adjustments compared to HADCRUT3 ones?
    I guess the answer is, unfortunately, that nobody knows except those who have defined the adjustments.

    2/ Data adjustments appear to be obviously “fluctuating” and indeed corrupted.
    When looking at the US temperature record as published in Hansen et al 1999 (graph fig. 6):
    http://pubs.giss.nasa.gov/docs/1999/1999_Hansen_etal_1.pdf
    Data were also available at the following address but NASA has recently deleted the file… Guess why…
    http://www.giss.nasa.gov/data/update/gistemp/graphs/FigD.txt
    – Warmest year is 1934
    – 1998 only ranks 4th after 1934, 1931 and 1921…

    In Hansen et al 2001, new adjustments made 1998 tie with 1934.
    This situation was maintained up to 2007:
    http://icecap.us/images/uploads/NEW_RANKINGS.pdf

    In 2007, NASA GISS made a fruitless attempt to have 1998 oust 1934 as hottest U.S. year.
    In [Link]
    The “trick” was discovered by McIntyre and NASA had to step back.

    But the record published in 2012 finally reached the objective of ousting 1934 as warmest year in the US:
    http://data.giss.nasa.gov/gistemp/graphs_v2/Fig.D.txt
    Compared to the 2000 publication:
    – 1998 average temperature anomaly has been adjusted by +0.35°C
    – 1934 average temperature anomaly has been adjusted by -0.21°C
    NASA also deleted those inconvenient data, but the resulting curve can be seen in Hansen et al 2010.

    3/ Looking at individual weather stations, one can also observe significant and questionable adjustment evolutions:
    A few examples of how to hide the inconvenient truth that temperatures have been warmer in the past, despite a small anthropogenic signature:
    Station Data: Reykjavik (64.1 N, 21.9 W)
    – Old adjustments: the 30’s are clearly warmer than the current period.
    – New adjustments: the current period becomes much warmer. But why?

    Station Data: Punta Arenas (53.0 S, 70.8 W)
    – Old adjustments: the 1st half of the 20th century is clearly warmer than the current period.
    – New adjustments: the current period is slightly warmer??

    Conclusion:
    Temperature data sets are manipulated and corrupted by questionable adjustments and nice “tricks” whose aim is “to hide the decline”.
    When observations’ data do not support your favorite AGW theory, then modify the data…
    That’s climate junk science.

    • As long as global warmers keep adjusting historic temperatures we don’t need to do anything about global warming or CO2.

      If it was really warming, the historic temperatures wouldn’t need to be played with, because the current temperatures would be warmer.

      If they ever stop adjusting temperatures and it gets 2 °C warmer we might have a problem worth addressing. It isn’t even worth starting to track the trend as long as they keep playing footsie with historic temperatures.

      It is unfathomable how climate scientists view historic data as a moving target.

    • Dear Eric,

      Some of the differences between HadCRUT3 and HadCRUT4 are due to the way that the SSTs were adjusted.

      If you want to see the overall effect of the SST adjustments on the SST data, see the figures from the HadSST3 paper here:
      http://www.metoffice.gov.uk/hadobs/hadsst3/diagrams.html

      Anyone is free to download our data sets and do their own assessments. HadSST3, for example, can be found here:
      http://www.metoffice.gov.uk/hadobs/hadsst3/
      The raw data, also publicly available, are from ICOADS:
      http://icoads.noaa.gov/

      If you read the HadSST3 paper, you’ll find some tests of the SST adjustments. These are by no means exhaustive, and we’d welcome more. We tried various things: separating out different subsets of the data to see if they were consistent, comparing to air temperatures made at sea, varying different assumptions in the analysis to see if the answer depended critically on any of them.

      Since the paper was published we’ve also compared the SST data to oceanographic measurements. Oceanographic measurements are likely of higher quality, having been made (for the most part) on research cruises:
      http://onlinelibrary.wiley.com/wol1/doi/10.1029/2012GL052975/abstract

      In the modern period, comparisons have also been done with the best available SST data set derived from satellite sources:
      http://onlinelibrary.wiley.com/wol1/doi/10.1029/2012JC008400/full (Figure 14).

      The new ERSST paper with its comparisons of various SST data sets is an important part of the ongoing understanding of uncertainty in the historical SST record.

      What these studies show is that there are uncertainties in these records, but that they are limited.

      Best regards,

      John
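The subset-consistency test John describes can be pictured with a toy example: split the observations by measurement method, fit a trend to each subset, and check that the trends agree to within sampling noise. Everything below is synthetic; the group labels are only illustrative:

```python
# Toy subset-consistency check: do independently measured subsets of the
# same underlying signal yield mutually consistent trends?
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1970, 2015)
truth = 0.015 * (years - 1970)          # shared underlying signal, C

subsets = {
    "engine_intake": truth + rng.normal(0, 0.08, years.size),
    "bucket":        truth + rng.normal(0, 0.12, years.size),
    "buoy":          truth + rng.normal(0, 0.04, years.size),
}

trends = {name: np.polyfit(years, series, 1)[0] for name, series in subsets.items()}
spread = max(trends.values()) - min(trends.values())
print({k: round(v, 4) for k, v in trends.items()}, "spread:", round(spread, 4))
```

In a real analysis the comparison would also account for differing coverage and autocorrelated noise; a subset whose trend falls well outside the sampling spread is the signal that a bias adjustment is missing or wrong.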

  87. Just call it what it is- Karl and NOAA started with the result, and then worked backwards from there to devise the method. As a scientist, I find this appalling.

  88. Mike Flynn

    Good grief.

    Today’s conditions are what they are. It makes no difference whether yesterday’s temperature was higher, lower, or the same. That was yesterday. Adjust history all you like. It won’t alter the future at all, meteorologically speaking.

    All of this kerfuffle is designed to support the wondrously and inaccurately named greenhouse effect, which has never been demonstrated to exist in the presence of skeptics or unbelievers. Rather like Uri Geller’s spoon bending ability, I suppose.

    But just for one mad moment assume that the “surface” is warming. Combined with increased CO2 levels, this will result in more plant life, presumably leading to increased biodiversity, and the general greening of the planet.

    Who could possibly complain about more food, more pasture, increased useable land? The anti fossil fuel “back to the stone Age” crowd, that’s who!

    Bah! Humbug!

  89. I’m cautious about the opinions of Thomas Karl et al. After all, aren’t they just a bunch of climate pause deniers?

    • ianh001 | June 5, 2015 at 8:57 pm
      I’m cautious about the opinions of Thomas Karl et al. After all, aren’t they just a bunch of climate pause deniers?

      http://onlinelibrary.wiley.com/doi/10.1029/1999GL010877/abstract
      The record breaking global temperatures of 1997 and 1998: Evidence for an increase in the rate of global warming?
      Authors:Thomas R. Karl, Richard W. Knight, Bruce Baker
      First published: 1 March 2000

      He has a track record of being wrong about both the pause and accelerating global warming.

  90. Where’s the Amazing Randi on this paper? Is he skeptical? Or did Bill Nye give him the Evil Eye?

  91. Brian G Valentine

    My attempted communications with Dr Karl in the past have gone unanswered

  92. Brian G Valentine

    Tom Karl could start his own blog, as Judith has done.
    And if he had guts enough he wouldn’t censor it either

  93. Brian G Valentine

    I don’t think any real, live, Red-Blooded, Freedom Fightin’, Gun Totin’ (that’s not me) Deniers had the opportunity to review this animal before it hit the press and I don’t think the Editors will ever send a proposed paper to one

    good night

  94. Global temperature is like a beautifully presented fruit salad with rotten apples to cover the taste of rotten oranges which cover the taste of rotten apples.

    The good thing is that a global temperature, if ever ascertained, would be trivial. Climate being a bewildering flux ‘n all. The Australian Alps are presently carpeted in snow. The globe didn’t cool to cause that. It’s just been snowing down here, that’s all.

    In fact, there’s been a marked cold snap in much of Oz. That doesn’t mean we won’t soon be copping a perilous spring with hot inland wind fanning fires. Because short term trends end just like long term trends end…which is why they call ’em trends.

    Trends end.

    • Today’s Australian: “Two of Victoria’s largest snow resorts … are experiencing their earliest snow dumps [> 25 cm] since the record-breaking year of 2000, while at Thredbo in NSW pre-season conditions have also been almost perfect.” … The BOM declared an El Nino event on 12 May; their “climate prediction manager says that there is still reason to be cautious about the year ahead. … Dr Watkins said ‘but having said that we have low humidity and quite cool nights, which is really good for snow making.’”

      So we don’t knows if it snows, but we knows we’re headin’ for Armageddon. I don’t think that I’ll cancel my 2090 skiing holiday just yet.

      Faustisnowman

  95. Pingback: Road to Paris, e la pausa non c’è più | Climatemonitor

  96. Pingback: Global warming pause too much to hope for - Weekly Observer

  97. Robert Way

    Several days ago I asked you a question.

    ‘We can clearly see other examples of Arctic amplification, for example the 1920 to 1940 period, the 1818 to 1860 era, the first few decades of the 16th century and of course the extended warming during the Viking period.

    As enhanced CO2 couldn’t have been responsible, what do you think could have been their causes?’

    I note your Inuit background and have read your profile with interest, so good luck with the PhD. Have any of your studies required you to look at the history of the Arctic? If so, you will be well qualified to answer my question. The Arctic seems to change dramatically on a regular basis; why is today any different from past episodes?

    tonyb

    • Thank you Tony for your work. You ask some of the most difficult questions posed at this site. I hope Mr Way attempts to rise to the challenge.

    • Buried beneath the permafrost I guess, Tony.

      • Gee, and I suppose that lends more support to your absurd argument for AGW.

      • Yes, it does. Modern warming fought the PDO all the way to 2006. The AMO does very little. We’re about to get very hot. The May anomaly looks like it will exceed .80C, and June is roasting so far.

      • David Springer

        RSS is out for May: 0.31C, less than half your wild ass guess.

        Read it and rejoice, global warming is not a problem:

      • Who gives a squeezed rat’s ring what RSS says? At the surface, it’s garbage, and anybody with an ounce of genuine skepticism should “sense” it.

      • JCH | June 6, 2015 at 9:08 am | Reply
        One major difference, in 1910 to 1940 the PDO progressively grew in intensity whereas from 1983 to 2003 the PDO progressively decreased in intensity.

        Well…

        I’m not sure I would characterize the PDO as decreasing in intensity; it has changed sign. Between the drop in solar and the decline in the PDO, it could be argued that it is going to cool and perhaps even get cold.

      • You really don’t have much choice.

        More low numbers and fewer high numbers = less impact on the GMST.

      • Lol. It cannot be wrong. When the PDO index changes trend direction, the GMST follows it like an obedient puppy. Except around 1983. It is you guys who are wrong.

      • David

        I am sure that jch is equally sceptical about the merits of sea level measurements derived from satellites.

        Tonyb

      • I’ve looked at it. There are multiple lines of evidence for sea level rise.

      • Gravity
        Altimetry
        Tide gauges
        Ice melt
        Water storage
        Paleo
        OHC
        physics of thermal expansion

        Whatever.

        How many times has it been claimed that there has been no warming for 18 years, sometimes more, because of RSS? It’s been obviously wrong for years.

      • JCH | June 11, 2015 at 3:21 pm |
        I’ve looked at it. There are multiple lines of evidence for sea level rise.

        This claim is dumb.

        The tide gauges are still in use.

        If the rate of change of the tide gauge aggregate hasn’t changed, the rate of sea level rise hasn’t changed.

        We don’t care if the subsidence of inland areas due to ground water/oil extraction throws the geoid off.

        If the satellites aren’t calibrated to the tide gauges, it is pointless to do satellite measurement of sea level. We should treat satellite measurement of sea level as a different measurement, since it is pretty obvious it isn’t measuring sea level.

        However, if the satellite sea level is divided by two, it is roughly comparable to the real sea level rise. Perhaps the satellite teams should be required by law to divide their sea level anomaly by two so that their result correlates with real measurements, giving the satellite measurements some physical significance by proxy.

      • JCH said “I’ve looked at it. There are multiple lines of evidence for sea level rise.”

        Yes – the sea level has risen 120 meters over the last 20,000 years.

        The average sea level rise per year over that time is 6mm/year.

        Currently we are only rising at 1/2 the average rate.

        Perhaps humans adding CO2 to the atmosphere has slowed down sea level rise (grin).

        But seriously – are you talking about sea level rise or sea level rise caused by humans? You have to subtract out the natural sea level rise from the total to obtain the human-caused sea level rise. I have never seen a convincing analysis for that number (or any at all, for that matter).
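        The arithmetic in the comment above can be checked directly. A minimal sketch, using the comment’s own round figures (120 m over 20,000 years, and roughly 3 mm/yr today) as stated assumptions rather than measured values:

```python
# Back-of-the-envelope check of the sea level figures in the comment above.
total_rise_mm = 120 * 1000   # ~120 m of rise since the last glacial maximum
years = 20_000               # the comment's stated time span

avg_rate = total_rise_mm / years   # long-term average, mm per year
current_rate = 3.0                 # ~3 mm/yr, the commonly cited satellite-era figure

print(avg_rate)                 # 6.0, matching the comment's 6 mm/yr
print(current_rate / avg_rate)  # 0.5, i.e. "half the average rate"
```

        So the 6 mm/yr average and the “half the average rate” claim are at least internally consistent, whatever one makes of the comparison itself.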

      • I live 100 yards from a tide gauge

        http://www.geograph.org.uk/photo/2713719

        When they wanted data for a possible new sea wall they decided the satellite data was too unreliable and put in a more modern tide gauge as well. If jch would give me the magic formula for determining the human-caused element of sea level change I will go over in the morning and do the necessary calculations. Of course, we will need to take into account the isostatic change as well, which in many places is greater than the sea level change.

        Tonyb

      • Of course they are still in use… who is calling whom dumb? LMAO.

    • I note Robert Way has not addressed Tony’s question. As I said above, Tony asks difficult questions. Apparently too difficult.

    • “If so, you will be well qualified to answer my question. The arctic seems to change dramatically on a regular basis-why is today any different to past episodes?”

      Not relevant, and an ill-defined question.

      Let’s start with the not-relevant part. The Arctic very well may have changed “dramatically” in the past. Some of Robert’s work is in paleo, so he can tell you what he knows. HOWEVER, the problem is not change in the past.
      The problem is the RISK of FUTURE CHANGE.

      That risk can only be estimated by our best science. Our best science says:

      1. If we continue on the path we are on, the changes we will see will be more
      dramatic than those we have seen in the past.
      2. the causes of those changes will be both natural and human caused.
      3. the human caused portion of the change is not insignificant.
      4. We can take action to reduce this risk.

      In short, PAST CHANGE means nothing. Future change is the concern.
      The best science says we will exceed changes we have seen in the past.

      History is meaningless here. You can forget it, because we are going to blow past any changes that civilized humans have seen in that area.

      • I totally disagree. Unless we understand the nature and magnitude of past variability (which has been very large in the Arctic), there is much room to fool ourselves about what the impact of human caused climate change will be.

      • You make this sh#t up. Total anal extraction.

        But hang on Mosher, whatever happens in the future doesn’t happen in a vacuum, it happens along a continuum of time. You can’t wish away history.

        The value, or meaning, of PAST CHANGE is that it is evidence:
        – that CURRENT CHANGE IS NOT ‘UNPRECEDENTED’
        – that CHANGE CAN OCCUR NATURALLY
        – that FUTURE CHANGE WILL ALSO OCCUR NATURALLY.
        – that MAN MAY CAUSE SOME FUTURE CHANGE, BUT NOT ALL

        PAST CHANGE is the control.

      • Steven Mosher,

        You wrote –

        “That risk can only be estimated by our best science. Our best science says:

        1. If we continue on the path we are on, the changes we will see will be more
        dramatic than those we have seen in the past.”

        Oh goody! When does the best science estimate the Antarctic will return to its previous abundant flora and fauna? What about the Sahara returning to its previous lush, wet, and fertile condition?

        More fortune telling. Reducing CO2 much below present levels will result in the extinction of the human race. At least, so real scientists estimate.

        Why would you desire the extinction of humanity? Something along the lines of the U.S. Military thinking I guess.

        “‘It became necessary to destroy the town to save it’, a United States major said today. He was talking about the decision by allied commanders to bomb and shell the town regardless of civilian casualties . . . ”

        Climatological thinking – “It is necessary to remove humans from the Earth to avoid emission of poisonous CO2.”

        You believe your best science, and I’ll believe mine. I prefer to keep exhaling and breathing poisonous CO2. You may choose to stop, if you wish.

      • Mosher has been too elaborate in trying to make his point. I’ll help him with a little different take:

        Past change is whatever it was. Let’s stipulate that the magnitude of recent change is not unprecedented. That does not help us, IF the change that humans MAY be causing leads us into dangerous territory.

        “Unless we understand the nature and magnitude of past variability (which has been very large in the Arctic), there is much room to fool ourselves about what the impact of human caused climate change will be.”

        But we don’t understand the nature and magnitude of past variability. If we did, it would not necessarily help us to predict the nature and magnitude of future variability. If it should get warmer naturally, whatever AGW there is will be added. In other words, finding out that alligators once lived in the Arctic should not reassure us at all.

        What we need to be concentrating on is the issue of positive feedbacks. If water vapor is not a strong positive feedback, game over. We can fire 90% of climate scientists. They can catch on with Uber. Mosher doesn’t have to worry, because he has a day job and they are not paying him anyway.

      • Steven Mosher

        Don WINS the thread

        “Past change is whatever it was. Let’s stipulate that the magnitude of recent change is not unprecedented. That does not help us, IF the change that humans MAY be causing leads us into dangerous territory.”

        YUP.

        I’ve said it before and have yet to hear or read a cogent counter-argument. You could abolish all the past records you have and STILL conclude that we need to take action on climate change. While the past is interesting, and may help you FINE TUNE a policy by understanding the contributions of natural variation, in the end all you need to know is fundamental physics. We knew it in 1896. We know it today.

        The huge mistake is thinking that one can fine tune a policy. It would be nice to know sensitivity down to 1/10 degree C. Sorry, life isn’t Burger King and you can’t have things your way. That means making a policy under deep uncertainty. Grasping for better data by understanding natural variation is just a stubborn refusal to come to grips with uncertainty.

      • Ahhh, the old “we need to decarbonize the planet based on back of the envelope math” meme.

        You gotta wonder how they fit all the feedbacks on the back of an envelope? Particularly without even knowing whether they are positive or negative.

      • maksimovich1

        While the past is interesting, and may help you FINE TUNE a policy by understanding the contributions of natural variation, in the end all you need to know is fundamental physics.

        The first fundamental problem with the physics (read: fluid equations) is that they are not fundamental equations; they are phenomenological, i.e. non-stationary.

        The second constraint is the longue durée, i.e. you can have a run of bad luck that exhibits historical behavior without being recurrent.

        Long memory is a constraint on the role of chance in variability, e.g. Hurst.

        Although in random events groups of high or low values do occur, their tendency to occur in natural events is greater. … There is no obvious periodicity, but there are long stretches when the floods are generally high, and others when they are generally low. These stretches occur without any regularity either in their time of occurrence or duration (Hurst, 1951)

        http://arxiv.org/abs/1406.6018

      • Steven Mosher,

        I assume you are talking about Svante Arrhenius’ speculations re CO2 in the atmosphere, rather than the settled science of the indivisibility of the atom, the luminiferous ether, the meteoric origin of the Sun’s heat, or the age of the Earth.

        One of his speculations was –

        “We often hear lamentations that the coal stored up in the earth is wasted by the present generation without any thought of the future, and we are terrified by the awful destruction of life and property which has followed the volcanic eruptions of our days. We may find a kind of consolation in the consideration that here, as in every other case, there is good mixed with the evil. By the influence of the increasing percentage of carbonic acid in the atmosphere, we may hope to enjoy ages with more equable and better climates, especially as regards the colder regions of the earth, ages when the earth will bring forth much more abundant crops than at present, for the benefit of rapidly propagating mankind.”
        – Svante Arrhenius.

        Good thing he was wrong about the benefits of increasing the amount of CO2, eh?

        He was wrong about the coal running out, too, I guess.

        I assume you have some other basis in the physics of 1896 for thinking that it is a good idea to reduce the amount of CO2 in the atmosphere, have you? Could you let me know? I realise I am dumb in your view, so you will need to be specific.

        Thanks.

      • Mosh

        Of course its a relevant question and its perfectly well defined. That you don’t like it doesn’t make it any the less valid.

        Of course the past matters as it shows us likely boundaries.

        At present we have an important branch of science claiming all sorts of things are unprecedented when they are clearly not.

        Have we demonstrated that the warming (since 1700 not 1880) is due to us? Most certainly not. Giss for example is being used as proof that the physics and observed temperatures back each other up..

        I know you don’t like questions, especially ones as badly defined as this one no doubt is, but what decade would you pinpoint as the time when GLOBAL SST figures became accurate to the level often claimed?

        Whilst we are communicating, perhaps you would like to answer the Arctic question I posed Robert in the absence of a reply from him?

        tonyb

  98. As long as the magnitude of the effect (temperature trend) remains in the same ballpark as the adjustments – I’m afraid this discussion is doomed to continue…..

    • Some models for comparison:

      Yup. Looking at the “new” graph it still looks flattened after 1998.

      Plots of CMIP5 seem to indicate that temperatures have to increase from 1998 by about 0.6°C by 2020 and 0.9°C by 2030 to match the models. Further, the CO2 rate of increase has to hit 3+ ppm/yr by 2020 and keep increasing, or this is a futile exercise. If the CO2 increase drops below 2 ppm/yr by 2020 there is no point in discussing global warming, because the hysteria has literally run out of gas (CO2).

      Another chart from the ARStechnica article.

      Looking at the changes… and the result… and the interpretation.

      For the global warmer position a couple words come to mind… Titanic, deck chairs, rearranging…
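      The warming rates implied by those figures can be sanity-checked. A minimal sketch; the 0.6°C-by-2020 and 0.9°C-by-2030 deltas are the commenter’s own numbers, not values taken from CMIP5 output:

```python
# Convert the comment's "catch-up" deltas into average warming rates.
def required_rate(delta_c, start_year, end_year):
    """Average warming rate (degrees C per decade) needed over the interval."""
    return delta_c / (end_year - start_year) * 10

print(round(required_rate(0.6, 1998, 2020), 2))  # 0.27 C/decade
print(round(required_rate(0.9, 1998, 2030), 2))  # 0.28 C/decade
```

      Either figure implies a sustained rate well above what the commenter’s “flattened after 1998” reading of the graph suggests, which is the point being argued.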

      • The actual warming since 1980 is about 0.5 C from your other graphs, but on Spencer’s plot his warming is only near 0.1 C since then. What went wrong?

    • Yes, this is exactly the issue. The magnitude of the trend is about the same as the magnitude of the adjustments. Any realistic assessment of uncertainty would state that we do not have high confidence in the trend.

      • > Any realistic assessment of uncertainty would state that we do not have high confidence in the trend.

        This in turn argues in favor of refraining from speaking of a pause.

        Mr. T is a lone wolf.

      • David Springer

        Mr. T is a member of the A team.

        Evidently you don’t know Mr. T from a hole in the ground!

      • Don Monfort

        Somebody forgot to tell the IPCC that there is no pause, willy. Maybe you could holler at them. And all those papers conjured up to explain away the pause that never was. If we add up all the missing heat that they claimed to find, we would be burning up right now. Paree will still be a big flop, willee.

      • > Somebody forgot to tell the IPCC that there is no pause […]

        Mr. T’s not saying that there ain’t no pause, Don Don.

        Mr. T just doesn’t know.

        Only Goldilocks knows when Mr. T overplays or underplays his hand.

      • What puzzles me is why the claims of uncontrolled warming are taken seriously.

        There is some remaining energy from the 20th century warming to be redistributed.

        This may make a 0.3°C difference in global temperature.

        At that point there are no new sources of energy to increase the global temperature. If the solar trends don’t change it might get cooler perhaps much cooler.

        The claims of warming and melting have been almost uniformly wrong and this doesn’t seem to bother anyone in the warming community.

        The warming community has been saying we’re going to drive over a cliff for over 20 years, much like a back-seat driver who does nothing but complain. And we should respond the same way: if we don’t go over the cliff by 2020, we should kick out the global warmers and leave them by the side of the road so we can enjoy the rest of our journey in peace and quiet.

      • > Mr. T is a member of the A team.

        Thank you for the reminder, Big Dave. In return, please note that only Mr. T’s character is a member of the A team.

        The show boosted the career of Mr. T, who portrayed the character of B. A. Baracus, around whom the show was initially conceived. Some of the show’s catchphrases, such as “I love it when a plan comes together”, “Hannibal’s on the jazz”, and “I ain’t gettin’ on no plane!” have also made their way onto T-shirts and other merchandise.

        http://en.wikipedia.org/wiki/The_A-Team

        Mr. T belongs to another world.

        The A-team was a team of soldiers of fortune, incidentally, which suits my purpose just fine.

        ***

        In my earlier comment, the expressions “Mr. T” and “Goldilocks” were used as catchphrases. They were not meant to refer to this Mr. T or that Goldilocks. Mr. T is the name of a monster Judy created, and Goldilocks refers to a wedge strategy that she used a lot when she started.

      • David Springer

        I reject your analysis of Mr. T.

      • Don Monfort

        Thanks for that very useful clarification, willy. We all been wondering about that crap. Maybe you should compose an e-book glossary of your cute little willyisms. It seems that 99 cents is a good price point, according to Brandoon.

  99. Pingback: En trist dag for vitenskapen | Klimarealistene

  100. Pingback: Paussin hävittäminen, kipeä isku vai peukaloitu toivetutkimus? | Roskasaitti

  101. OT, but wanted to let you know I found a map that shows moisture surpluses as well as drought.

  102. Has anyone been following this? Obumbles is trying to push through a ‘living’ trade agreement that would give him a path around Congress to implement climate law, and laws in other areas as well. He has been trying to keep it secret, but the secret is getting out.

    On the climate war, from the article:

    “On May 6th of this year, I sent you a letter (enclosed) regarding your request for Congress to grant you fast-track executive authority,” Sessions wrote to Obama on Friday in a letter provided exclusively to Breitbart News ahead of its public release. “Under fast-track, Congress transfers its authority to the executive and agrees to give up several of its most basic powers.

    “These concessions include: the power to write legislation, the power to amend legislation, the power to fully consider legislation on the floor, the power to keep debate open until Senate cloture is invoked, and the constitutional requirement that treaties receive a two-thirds vote.

    “The latter is especially important since, having been to the closed room to review the secret text of the Trans-Pacific Partnership, it is clear it more closely resembles a treaty than a trade deal.

    “In other words, through fast-track, Congress would be pre-clearing a political and economic union before a word of that arrangement has been made available to a single private citizen.”

    http://www.breitbart.com/big-government/2015/06/05/exclusive-sessions-to-obama-why-are-you-keeping-obamatrades-new-global-governance-secret/

    • From the article:

      Obama admits that climate change will be in Obamatrade
      By Howard Richman
      In a speech on the Senate floor on May 22, Democratic senator Ron Wyden of Oregon ridiculed those who thought climate change regulation would be part of Obamatrade:

      But in an interview on NPR’s Marketplace yesterday (June 3), President Obama said that enforcing climate change regulations will indeed be part of the Trans-Pacific Partnership, the Obamatrade pact that he is currently negotiating with Malaysia and 10 other countries. He said:

      If we want to solve something like climate change, which is one of my highest priorities, then I’ve got to be able to get into places like Malaysia, and say to them, this is in your interest. What leverage do I have to get them to stop deforestation? Well part of the leverage is if I’m in a trade relationship with them that allows me to raise standards.

      In December, Obama will negotiate a multi-country climate agreement in Paris. We already know from Obama’s joint announcement with China that he will commit the United States to a huge reduction in carbon emissions of 26%-28% from 2005 levels, but he will let China, already a much larger carbon emitter, continue to expand its carbon emissions until 2030.

      Obama would not need to get Congress to approve the unfair climate change treaty terms that he negotiates. Instead, he could get the Commission set up by the Trans-Pacific Partnership agreement to add those terms to the Trans-Pacific Partnership.

      http://www.americanthinker.com/blog/2015/06/obama_admits_that_climate_change_will_be_in_obamatrade.html

      • jim2

        America’s credibility and influence have been waning for some time. The treaty would have to be very one-sided, i.e. against US interests, for it to be sufficiently juicy for a country like Malaysia to bite.

        tonyb

      • My first comment was moderated, so I will try again – not that it adds much, but it simply points out that our Republican “leaders”, or potential leaders, are once again taking the Nancy Pelosi approach of passing something so we can find out too late how damaging it will be. Fortunately, there seems to be growing opposition to the TPP in the House and hopefully that will kill this deal.

      • Too many of the Republicans are as you describe. I believe they have been beaten down by Dimocrat operatives and the liberal media every time they try to do something, like block an excessive budget bill, or try to block just about anything the liberals want.

        I understand they are gun shy now, but they really need to suck it up, grow a pair, grow a backbone, and stand up to Obumbles and the remaining Dimocrats.

      • There are several very frightening aspects to what is happening – one being the low/no information voter (and that now apparently includes too many of our elected representatives). Another is when someone like Cruz, who according to the article did read the secret text, yet still supported and voted for this deal. It also appears that GOP leaders are easily duped. One very sad side of this is that if it were known by the MSM and most other progressives that this deal had the possibility of irrevocably committing the US to onerous and expensive decarbonization targets, they would likely blindly and loudly support it with no understanding of the consequences.

      • I think part of why the GOP leaders are wanting to pass it is because any problems caused by the trade pact will be on Obumble’s hands, thus keeping them out of the harsh glare of the liberal media floodlights. But again, they need to step up to the plate and oppose it anyway.

    • Anyone who thinks paranoia rules conservatives needs to read this. From the article:

      The CTO of Fight for the Future — the non-profit activism group behind Battle for the Net, Blackout Congress, and Stop Fast Track — Jeff Lyon, is seeking advice regarding a problem facing the website they created — stopfasttrack.com — to fight the secret Trans-Pacific Partnership trade deal.

      The site has been blacklisted by Twitter, Facebook, and major email providers as malicious/spam. Over the last week, nobody has been able to post the website on social networks, or send any emails with their URL. Lyon has posted a summary of the relevant details on Reddit in the hope of obtaining useful feedback regarding what the cause might be. However, none of the answers there right now seem particularly useful, so I’m hoping the Slashdot community can help him out by posting here.

      Lyon indicates that the blackout has occurred at a particularly crucial point in the campaign to kill the TPP, as most members of the House of Representatives would likely vote against it were it brought to a vote now, and as pro-TPP interests have started to escalate their lobbying efforts on the House to counteract what would otherwise be a no vote.

      http://yro.slashdot.org/story/15/06/06/163238/anti-tpp-website-being-blacklisted

      • I’ve always said that paranoia is an adaptive feature, but in this case, he seems to have gotten the help he needed on Reddit.

        Paranoia is usually defined somewhere in the space of “everybody’s against me”, or “there’s a conspiracy against me”, but in this day of a complex, interlocking internet, it’s just as likely to be one enemy who knows how to work the system.

        Come to think of it, that’s probably been true since language was invented. Or even before that. Although when “working the system” involves politics, “there’s a conspiracy against me” is a good enough way to describe the results.

        Bottom line: if you’re going into activism on the web, you need a good IT staff.

      • From the article:

        To answer that question scientists at the Universities of Bath and Manchester have been studying a single-celled amoeba, also known as slime mould, which displays certain behaviours that have been labelled as “cheating” or “cooperating.”

        In a study, published in the journal Current Biology, the team found that cheaters don’t necessarily win in terms of overall survival, suggesting that biologists should re-evaluate how they define and measure social cooperation.

        Their research has medical implications when it comes to developing therapies that use socially successful bacteria to fight diseases such as lung infections.

        Professor Chris Thompson from Manchester’s Faculty of Life Sciences explains: “If the cheats always win, then according to Darwin, altruism shouldn’t exist. To study this we looked at why the single fittest strategy in the amoeba community doesn’t dominate.”

        http://www.sciencedaily.com/releases/2015/03/150331073942.htm

  103. Interesting exchange between Thomas Peterson and Anthony Watts regarding the paper under review here;

    http://blog.hotwhopper.com/2015/06/the-perversity-of-deniers-and-pause.html#more

    tonyb

    • It’s not interesting. It’s like being forced to listen to a psychic surgeon lecture a real surgeon on how to remove a tumor.

    • David Springer

      Calling Peterson a psychic surgeon is harsh but true. Good one.

  104. Mosher Part One: Three Instructions To Change The Data

    “1. Average the two?
    2. Adjust 200 to 202
    3. Adjust 202 to 200.”

    Mosher Part Two: Deny What You Just Instructed

    “Nothing is done to the data. it remains the data.”

    And Warmers Pretend To Wonder Why There Are Skeptics

    Andrew

  105. If this paper went through normal peer review, and its data and code have been made available so that other scientists are able to replicate the key results, then good luck to them.

    If not, then I must concur with Judith that this paper does not add any substance to the view that global temperatures have continued to rise unabated over the past 18 years, notwithstanding observations to the contrary having been made from several alternative data sources.

    The forthcoming climate talks in Paris will no doubt be influenced by the conclusions of this paper, yet another example of preaching to the converted. The stakeholders are governments and scientists intent on pushing decarbonisation policies across the western world.

  106. NOAA has done nothing to bust the pause in global warming. Their study is just one of many that either ignores the data, says it is wrong, or manipulates it to make it conform to AGW.

    Their study is BS, so they can keep AGW alive. All agenda-driven.

    • No, that was reason #1!

      The measurement uncertainty with any measurements showing a pause has always been the crux of the biscuit.

      Some wise men say you need at least 17 years to determine a trend.

  107. I think it was Tukey who said that he didn’t understand seasonal adjustment.

    Anyway, some big statistics and Fourier transform guy. The procedure has its theoretical puzzles.

  108. Global warming is a Left vs. right issue, government is inherently Leftist, NOAA is government agency and therefore, NOAA is a facilitator of global warming alarmism.

  109. btw, doesn’t version 4 mean there were at least 3 previous screw ups? At least UAH lists their latest as 6.0 beta :)

  110. Pingback: NOAA Study Takes World ‘by Storm’: No Global Warming Pause! | Watts Up With That?

  111. Reposted from Hotwhopper:

    Email exchange between Anthony Watts and Tom Peterson

    Dear Sou et al.,

    I thought you might find an email exchange I had yesterday with Anthony Watts interesting. 16 hours ago I received this email from Anthony Watts:

    Dear Dr. Peterson,
    This latest paper, Karl et al. 2015 is an embarrassment to science. It epitomizes president Eisenhower’s second warning in his farewell address about science and politics becoming hopelessly intertwined, and thus corrupted.
    In my last telephone conversation with you, I stated (paraphrasing) that “I believe you folks aren’t doing anything fraudulent, but you are doing what you feel is correct science in what you believe is a correct way”.
    After seeing the desperate tricks pulled in Karl 2015 to erase “the pause” via data manipulation, I no longer hold that opinion. You needed it to go away, so you prostituted yourselves, perhaps at the direction of higher ups.
    This will be NCDC’s Waterloo, and will backfire on all of you terribly on the world stage. Take a lesson from Yamamoto’s own observation after he bombed Pearl Harbor. Take a lesson from what is on WUWT today.
    How sad for you all.
    Anthony Watts
    cc: [undisclosed recipients]

    ***********************

    14 hours ago I sent Anthony Watts this email response:

    Dear Mr. Watts,
    As you might imagine, my views about our paper and our motives are somewhat different than yours. To explain why, I should start by explaining my views on what science is and how it works.
    Here is a 14 minute TEDxAsheville talk I gave in January on What is Science. While I can’t do justice to a 14 minute talk in a single sentence, the bottom line is that science is the result of tests.
    So let me give you two examples from our paper. One of the new adjustments we are applying is extending the corrections to ship data, based on information derived from night marine air temperatures, up to the present (we had previously stopped in the 1940s). As we write in the article’s on-line supplement, “This correction cools the ship data a bit more in 1998-2000 than it does in the later years, which thereby adds to the warming trend. To evaluate the robustness of this correction, trends of the corrected and uncorrected ship data were compared to co-located buoy data without the offset added. As the buoy data did not include the offset the buoy data are independent of the ship data. The trend of uncorrected ship minus buoy data was -0.066°C dec-1 while the trend in corrected ship minus buoy data was -0.002°C dec-1. This close agreement in the trend of the corrected ship data indicates that these time dependent ship adjustments did indeed correct an artifact in ship data impacting the trend over this hiatus period.”
    The second example I will pose as a question. We tested the difference between buoys and ships by comparing all the co-located ship and buoy data available in the entire world. The result was that buoy data averaged 0.12 degrees C colder than the ships. We also know that the number of buoys has dramatically increased over the last several decades. Adding more colder observations in recent years can’t help but add a cool bias to the raw data. What would you recommend we do about it? Leave a known bias in the data or correct the data for the bias? The resulting trend would be the same whether we added 0.12 C to all buoy data or subtracted 0.12 C from all ship data.
    You are, of course, welcome to share this with your readers (or not), as you deem appropriate.
    Regards,
    Tom

    ****************

    13 hours ago I received this email reply from Anthony Watts:

    Thank you for the reply.
    I’ll consider and advise.
    Anthony

    *******************

    And that is the last I heard from Anthony up to now.

    Anthony hasn’t yet taken Dr Peterson up on his offer to post his comment at WUWT. If you want to share it with the WUWT-ians, feel free :)
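Peterson's two examples hinge on one mechanism: buoys read about 0.12 C cooler than ships, and the buoy share of observations has grown over time, so a raw blend of the two acquires a spurious cooling component. A toy sketch of that arithmetic (the trend, offset, and coverage numbers below are illustrative assumptions, not the paper's actual sampling):

```python
# Toy illustration of the ship/buoy compositing issue Peterson describes.
# All numbers are assumptions: a small underlying trend, a fixed 0.12 C
# ship-buoy offset, and a buoy share growing from 10% to 90%.
import numpy as np

years = np.arange(1998, 2015)
true_sst = 20.0 + 0.01 * (years - 1998)        # assumed underlying signal
ship = true_sst.copy()                         # ships taken as the reference frame
buoy = true_sst - 0.12                         # buoys read 0.12 C cooler

buoy_frac = np.linspace(0.1, 0.9, years.size)  # buoy share of observations grows
raw_blend = buoy_frac * buoy + (1 - buoy_frac) * ship
adj_blend = buoy_frac * (buoy + 0.12) + (1 - buoy_frac) * ship

def trend(series):
    """Least-squares trend in degC per decade."""
    return np.polyfit(years, series, 1)[0] * 10

print(f"true trend:           {trend(true_sst):+.3f} C/decade")
print(f"raw blend trend:      {trend(raw_blend):+.3f} C/decade")
print(f"adjusted blend trend: {trend(adj_blend):+.3f} C/decade")
```

With these assumed numbers, the changing observation mix alone biases the raw blend's trend low by 0.06 C/decade, while adding the offset to the buoys recovers the underlying trend. Nothing here validates the size of the real offset, only the mechanism.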

    • It may seem like Waterloo to Watts, but he is Napoleon in this one.

    • “science is the result of tests”

      hmm… I was looking at the results of tests made in this study:

      Hull-Mounted Sea Surface Temperatures from Ships of Opportunity

      found here:

      http://journals.ametsoc.org/doi/full/10.1175/1520-0426%281997%29014%3C1237%3AHMSSTF%3E2.0.CO%3B2#_i7

      In this study, the authors installed hull-mounted temperature sensors on ships to measure SST and compared the results with intake (injection) water temperatures.

      Quoting… ” The hull sensor SSTs differed from the ship injection SSTs by as much as 6.0°C, and the hull sensors responded to temperature changes that were completely missed by the injection temperatures.” Quoting further: “This relatively large temperature difference calls into question many of the intake or ship injection temperatures that have been traditionally used as SST estimates.”

      Moving along from science to plain old anecdotal observation. In mid-Atlantic I happened to take a snapshot of the “instrument panel” of a beautifully maintained one-year old cruise ship of NCL’s Gem class. The array of data was impressively extensive. How accurate would you assume the water temperature data might be? I was curious, so I looked again at my photo. I would say that it was not at all accurate… water temperature was the only field unpopulated.

      • David Springer

        You can’t make a silk purse out of a sow’s ear.

        Temperature data from ships is simply not robust enough to be useful for determining the global average temperature trend. There was too little diligence in adhering to procedures for the same type of readings, too many different means of taking water temperature, and coverage almost exclusively confined to shipping lanes.

        FAIL FAIL FAIL

      • I will beat the horse one more time…. I posted this on another blog a few minutes ago but it seemed to fit here also:

        I am not sure why we should pay for an expensive fleet of super-accurate temperature-sensing buoys going forward. Based on the brilliant new methods described by Karl, I would propose the following:

        1. On an hourly basis have each ship crew member record their guess as to local SST. Average the guesses and record along with time and ship position.

        2. The NOAA will later compare each hourly ship-estimated SST to the nearest buoy measurement and record the difference.

        3. On an annual basis average all of the resulting differences to calculate a delta.

        4. When ten years of “data” has been collected simply average the annual deltas to arrive at a single value.

        5. Now that the bias has been “scientifically” obtained, simply retire the expensive buoy fleet, continue to collect the ship SSTs, apply the bias, and voila, world SST data, yours for free!

        It must be accurate because lots of “data” was collected and any errors surely would sciencemagically disappear when they were averaged over time and location, n’est-ce pas?

      • sciguy,

        Since satellite, buoy, Argo, etc. data just don’t cut Karl et al.’s mustard, it seems the Republicans who think NASA should get out of the climate business are more up on the science than the Democrats.

      • Don Monfort

        The Congressional oversight committees need to drag these federal employees into a hearing and compel them to justify these adjustments. Ask the little rascals why they didn’t do it sooner, if it is a necessary adjustment. Ask them why they suppose the alleged need for it wasn’t noticed by the IPCC and other fabricators of SST data products.

      • Now Don you know that would just stir up trouble. You can only be concerned about productivity and cost effectiveness in business. Bureaucrats and academics have to be held above that silly nonsense in order to be themselves. Heaven help the politician that tells some lower tier bureaucrat to communicate through channels instead of blurting crap out all the time.

        They have an inalienable right to stupidity doncha know.

    • David Springer

      Peterson’s reply is gibberish. Ship data is hopelessly inaccurate for the intended purpose. You can’t make a silk purse out of sow’s ear. No matter how badly you need a silk purse it cannot be done.

  112. Pingback: NOAA Study Takes World ‘by Storm’: No Global Warming Pause! | I World New

  113. Is it not a “no-brainer”?
    Given the preponderance of buoys over ship reports now compared with the past, it follows that the cooling bias should be corrected. Or else the record has been artificially cooled.
    Whether you take the ships or the buoys individually, the trend is the same.

  114. I get that if enough of a certain kind of bucket used a certain way is compared with a nearby buoys you can come up with some kind of discrepancy.

    But for the long record? Engine intake water versus over the side? Old US versus old British method? Canvas (way to cool!) versus other material? How do you make a silk bucket from that sow’s ear?

    Look, I’m one of those who can’t see the point in worrying about a bit of warming or SLR in a Holocene which has consisted of nothing but rising and falls in temp and sea levels (and sea ice, for that matter). But we need to stop making these graphic masterpieces out of scrap data. You just get scrapterpieces.

    • Well, what happens if you toss the bad data and just plot the good data since 1998?

      Isn’t plotting the reliable data going to give a more accurate post-1998 trend than throwing all the scrap data together and trying to make sense of it?

      • Yep. Sometimes you just have to know when to toss out the garbage. Some of us don’t just live with the stink.

      • PA, “Isn’t plotting the reliable data going to give a more accurate post-1998 trend than throwing all the scrap data together and trying to make sense of it?”

        Defending that should be up to the groups that used lots of public money to produce the newer, more reliable data. I would think that if they don’t come up with a fairly convincing rebuttal they may be replaced with a rowboat and bucket brigade.

      • The land data has a similar issue.

        1. The class 1 & 2 stations don’t show a lot of warming (the stations that meet siting standards).

        2. The class 3, 4, 5 stations show more warming.

        3. The combination of 1, 2, 3, 4, 5 as practiced by NOAA shows even more warming.

        Now if somebody can explain how adding low quality heavily adjusted station data to quality station data gives you better data I would love to hear it.

        I’m familiar with electronics, and adding noise to a signal effectively cuts signal gain; about the only time adding noise to a signal improves things is in spread spectrum, and only because you are syncing on the noise. While we would hope climate scientists are not deliberately syncing on the noise with their data analysis, the evidence is mixed.

        The basis for the corrections is that the error is well characterized and that the correction reduces error. Given the gross mistakes that are the GCMs, the claim that the errors are well understood, the claim that the corrections reduce error, and the view that adding bad adjusted data to good data gives you better data should all be challenged.
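PA's signal-to-noise question has a standard statistical answer: with equal weights, blending a noisy series into a clean one increases the error of the estimate whenever the noisy series' error standard deviation exceeds sqrt(3) times the clean one's, whereas inverse-variance weighting never hurts. A minimal sketch with purely illustrative error levels (0.1 C for a well-sited station, 0.5 C for a poorly sited one; neither figure comes from the surface-station literature):

```python
# Compare a clean temperature estimate against blends with a noisier one.
# Sigmas are illustrative assumptions, not real station error statistics.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                               # many repeated trials
truth = 0.0                               # true anomaly
clean = truth + rng.normal(0.0, 0.1, n)   # well-sited station, sigma = 0.1
noisy = truth + rng.normal(0.0, 0.5, n)   # poorly sited station, sigma = 0.5

equal_blend = 0.5 * (clean + noisy)
w = (1 / 0.1**2) / (1 / 0.1**2 + 1 / 0.5**2)  # inverse-variance weight on clean
ivw_blend = w * clean + (1 - w) * noisy

def rmse(x):
    return float(np.sqrt(np.mean((x - truth) ** 2)))

print(f"clean alone:            {rmse(clean):.3f}")
print(f"equal-weight blend:     {rmse(equal_blend):.3f}")
print(f"inverse-variance blend: {rmse(ivw_blend):.3f}")
```

So the issue is less whether low-quality stations are used at all than how they are weighted and whether their biases are modeled: equal-weight averaging of a much noisier record genuinely degrades the estimate, while proper weighting does not.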

  115. Judith: Whether or not Karl et al are right, they have proven that a peer-reviewed reanalysis of historical data published in a top journal can make 0.3-0.4 degC of “missing” global warming disappear. This change amounts to about half of 20th century warming, and a larger fraction of the warming during the better-documented last half-century. The older methodology and analyses had been endorsed by five reports from the IPCC. If we can’t trust the hiatus – what can we trust?

    Your profession is turning into Alice-in-Wonderland, where no one knows what, if anything, is real. The policymakers in Paris should have MORE doubts about the reliability of climate science, not fewer!

    After massaging the data for more than a decade, a hot spot in the tropical upper troposphere still couldn’t be found in the radiosonde data. Now it has been. Of course, the hotspot is still missing from the UAH and RSS satellite records. The UAH and RSS records also showed that the hiatus was real.

    I’ll also note that about 0.2 degC of surface temperature rise comes from correcting breakpoints in station records without any metadata demonstrating a cause or any proof that the “homogenized” record is superior.

  116. Imagine this..

    Obama: Dr. Karl I know you guys over there at NASA are supposed to be doing serious ‘science stuff,’ but I have a legacy to think about and I don’t want inconvenient facts like the ‘pause’ to get in the way of my climate policy. Which is, of course, to take over the energy sector and ultimately as much of the economy as I can get my hands on before I leave in a few years.

    So whatever you have to do, I want you to get rid of this ‘pause’ before the next major climate talks. If you need to bring on some others, that’s fine, but make it clear that their job is to get rid of the pause. Alright?

    Ok, Dr. Karl, keep my staff updated and you can go.

    • Oops, that should read NOAA. Sorry.

    • “The 2007 report Atmosphere of Pressure, by the Union of Concerned Scientists and the Government Accountability Project, extensively documents Bush administration efforts to manipulate the work of federal climate scientists and exercise strict control over which scientists are allowed to talk to the media and which scientific results are communicated to the public.” – http://www.ucsusa.org/our-work/center-science-and-democracy/promoting-scientific-integrity/manipulation-of-global.html#.VXOdWUZtHNI
      President Bush would do something like this and President Obama would not.

    • Obama email:

      To:Staff
      From:POTUS
      Subject:Pause?

      Have we got an update from Dr. Karl on how he is coming along in getting rid of the pause?

    • President Obama would not.

      It’s all in your imagination until the facts bear you out.

      • In black and white terms, Bush did it. Obama did not.

      • Good work guys, now that you have done away with the pause, it’s time to get back to work on the pesky Medieval Warm Period.

        Spit spot!

        And while you are at it, can you make the 1940 to 1979 trend in the northern hemisphere look more like the trend in the southern hemisphere.

        Thanks, and look for a little something extra in your pay envelope.

    • The Canadian radio story (I listened to the beginning so far) tells us that even though the academics are not suppressed by their (anti-science) government, they know what the government can do (defunding), so they tend to not say things that the government may not like. In the US we see Republicans in Congress actively trying to slash funding for climate science, which shows how they can try to suppress the academics because they don’t like what they are finding. Australian climate scientists also went through something like this. Academic freedom doesn’t free you from government suppression.

      • By the way, Part 1 of that CBC program, Science Under Siege – recommended.

      • Jim D,

        Here, let me fix this for you.

        “In the US we see Republicans in Congress actively trying to slash funding for climate science”

        should read”:

        “In the US we see Republicans in Congress actively trying to slash funding for CAGW PR”

        There, that’s better.

      • But, Jim, the story I am telling is about a group of researchers fabricating their findings and publishing them. Do you have any examples of that?

      • I forgot to add “at the request of the President.”

      • Joseph, yes, unfortunately the real story is that the type of work Karl is doing is exactly what Congress doesn’t like and is leading them to try and slash climate funding. All the President can do is threaten to veto such a cut if it comes to his desk, and he has.

      • Part 2 of the CBC series is also excellent. It talks about the separation of evidence-based science from natural philosophy, where the difference is starting with experiments and observations rather than theory, and then it goes on to how science went from privately funded to publicly funded, and the current opposite trend in places like Canada. It mentions the lack of scientists in US and Canadian congresses, who tend to argue more the lawyer perspective of arguing towards a desired end result than from an evidence-based perspective. It is a very interesting and educational program.

  117. Pingback: Kritiserad artikel förnekar uppvärmningspausen - Stockholmsinitiativet - Klimatupplysningen

  118. Bill O’Reilly needs to write a new book.

    Killing the Hiatuspause.

  119. I’ve noticed a number of people saying it doesn’t matter whether you adjust one data set down or the other data set, such as Steven Mosher above saying:

    Technically It doesnt matter whether you adjust the recent past or the deep past.

    If you do an adjustment you will either hold the current past constant and adjust the deep past. or hold the deep past constant and adjust the current past.

    In fact, Mosher goes so far as to say:

    Finally since we are talking about anomalies and focusing on trends it doesnt matter either way.

    I was thinking about whether or not the first of those statements is true, but it’s easy to know the second is not. Rather than explain the technical reason for it myself, I’ll just quote the authors of the paper. They say:

    To make the buoy data equivalent to ship data on average requires a straightforward addition of 0.12°C to each buoy observation. This impacts the trend only because the number of buoys and percentage of coverage by buoys has increased over this period.

    The authors themselves admit their adjustment to the buoy data affects the trend they come up with. I’m not sure why Mosher thought it wouldn’t, but the more interesting question is whether or not he was right that the choice of which data set to adjust would matter. I’m not sure on that one.

    On the one hand, I get why usually adjusting the past in one way is effectively equivalent to adjusting the present in the other. On the other hand, there are differences in coverage rates between ship data and buoy data. That means adjusting one will have a different spatial (and temporal) effect than adjusting the other. It seems to me that would mean the choice of which data set to adjust could affect the trend you come up with. What do you guys think?

    Also, if the choice of which data set to adjust does impact the resulting trends, does it make any sense to adjust the buoy data instead of the ship data? Not only is the buoy data more precise in general, the authors specifically give it more weight than ship data because of its higher quality. That means any potential effect of their adjustments would be exacerbated.

    I’m curious if the authors tested for this possibility, and if so, what they found. They didn’t say anything about it so maybe they found it had no effect at all (for some reason I’m missing), but I’m not sure.

    • David Springer

      “I’m not sure why Mosher thought it wouldn’t, but the more interesting question is whether or not he was right that the choice of which data set to adjust would matter. I’m not sure on that one.”

      The sound blaster salesman was wrong of course. He failed to consider the mix of different sensor types. Best to just ignore him.

      • David Springer, I would have been fine with ignoring Steven Mosher, but he made an argument I felt needed to be rebutted. The idea that adjusting buoy data made no difference is something I think should be acknowledged as untrue. If people ignored him, that point would remain unchallenged, and more people might come to believe it. That seems bad.

        And hey, it gave me a springboard for the issue I’m mainly interested in. I know Mosher says it doesn’t matter which data set you adjust, but I’m not convinced that’s true. And even if it is, I think it needs to be demonstrated not merely asserted. Thus far, it seems the authors of this paper might have just chosen the form of adjustment best suited for their intended purposes. That’s not good.

        This is especially interesting to me because a number of people have claimed adjusting the buoy data instead of the ship data makes no difference. I’d like to know if they’re wrong or right.

        JC comment: I agree that this is a key issue.

      • > This is especially interesting to me because a number of people have claimed adjusting the buoy data instead of the ship data makes no difference. I’d like to know if they’re wrong or right.

        Of course it would be interesting, since:

        The idea that adjusting buoy data made no difference is something I think should be acknowledged as untrue.

    • Don Monfort

      They knew exactly what they were doing and why they were doing it. They wanted to bust the pause. Look at their press release. The way to bust the pause was to adjust the buoys. Period.

      “This impacts the trend only because the number of buoys and percentage of coverage by buoys has increased over this period.”

      Yeah, that’s why they did it. Because it impacts the trend at the time and in the direction that suits their purpose. Ain’t it freaking obvious?

    • How many ways did they try to fix the data set before they got the result they wanted?

      Y’know, ya never can tell!

      And that’s one of the problems with adjusted data per se.

      If they couldn’t spin the result the way they wanted – would the paper have been published? or Deep Sixed?

      I bet Davy Jones Locker is just full of stuff we’ll never know!

    • Steven Mosher

      brandon

      “To make the buoy data equivalent to ship data on average requires a straightforward addition of 0.12°C to each buoy observation. This impacts the trend only because the number of buoys and percentage of coverage by buoys has increased over this period.”

      My argument is NOT that adjusting the buoys will not affect the trend.
      My argument is that the difference in trend will be the same
      whether you shift the buoys OR shift the ships.

      The text you quote says nothing about that.

      OF COURSE the adjustment will shift the trend, in this case by 0.014 C per decade.

      The argument is whether shifting the ships instead gives you a different trend.

      • No Steven Mosher, that’s not the argument. That’s not what some people have been talking about, including ones you’ve responded to. You’ve responded to people who were just talking about whether or not any adjustment should be made. When a person is talking about whether or not two data sets should be adjusted at all and you respond by saying it doesn’t matter, it’s very easy for readers to take that as you saying the adjustments don’t matter.

        You don’t get to dictate what “The argument” is. People talk about things, and when you respond to them, you should respond to what they say. Responding to something they didn’t say, while pretending it is what they said, is just wrong.

      • David Springer

        You still don’t get it Steven. The preponderance of recent data is from buoys. If you don’t change buoys the recent trend won’t change.

        The preponderance of old data is from ships. If you change ships only the old trend will change.

        The authors wanted the recent trend to change. So they chose to adjust the more reliable recent data instead of the less reliable old data. It’s painfully obvious to anyone possessing even a modicum of talent with numbers, which of course leaves you out in the unadjusted cold.

      • This discussion is bizarre. As I understand it, the point is that you want to combine two datasets. Doing so requires adjusting one to be consistent with the other. As John Kennedy points out below, if you were interested in the actual, absolute temperature then it would matter which was adjusted. However, that isn’t what is being computed. What is being computed are anomalies. Given this, it doesn’t matter if you adjust one dataset up, or the other down. Once you’ve done the adjustment and combined the datasets, you calculate a baseline average, and determine the anomalies. These anomalies should not depend on whether you adjusted the buoys to match the ships, or the ships to match the buoys. Similarly, the trend is not affected. As Steven pointed out above, the trend changes because of a change in the number of buoys and a change in the coverage, not because of which dataset was adjusted.
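The invariance Anders asserts is easy to check in the idealized case of identical, co-located sampling: adding 0.12 C to the buoys or subtracting 0.12 C from the ships produces blends that differ by exactly a constant, so anomalies and trends come out the same. A minimal sketch with toy numbers (it deliberately does not model Brandon's caveat about the two fleets having different spatial coverage):

```python
# Check that "adjust buoys up" and "adjust ships down" give the same trend
# when ships and buoys sample the same places and times (an assumption).
import numpy as np

years = np.arange(1998, 2015)
ship = 20.0 + 0.01 * (years - 1998)           # assumed ship series
buoy = ship - 0.12                            # buoys 0.12 C cooler
frac = np.linspace(0.1, 0.9, years.size)      # growing buoy share

buoys_up = frac * (buoy + 0.12) + (1 - frac) * ship
ships_down = frac * buoy + (1 - frac) * (ship - 0.12)

def trend(series):
    """Least-squares slope in degC per year."""
    return np.polyfit(years, series, 1)[0]

# The two blends differ by a constant 0.12 C, so their trends are identical.
print(np.allclose(buoys_up - ships_down, 0.12))        # True
print(np.isclose(trend(buoys_up), trend(ships_down)))  # True
```

When the two fleets sample different regions, the offset no longer cancels as a pure constant, which is why doing the adjustment both ways on the real data, as the HadSST3 team reports doing, is the decisive check.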

      • Anders says:

        This discussion is bizarre. As I understand it, the point is that you want to combine two datasets.

        I understand a discussion may seem bizarre when one misunderstands the point of it. My participation on this subject began in a Twitter discussion where our host suggested the two data sets needn’t be combined at all. I think there’s a meaningful argument for that position. I think combining two data sets requires showing that combination is worth enough to justify any downsides which may come with it. Adjustments are relevant to that because the more adjustments need to be made, the more downsides there are to the approach.

        For instance, just below John Kennedy directed me to an interesting paper about a different data set which also combined ship and buoy data. In it, they discuss adjusting the two data sets. What I thought was interesting is the shift in baseline was acknowledged to only be designed for large-scale, or even global, areas. That’s an important caveat. It’s important to know when using that dataset, there may be issues if you try to examine regional patterns. There could be spatial biases caused or obscured by the adjustments to the data.

        That doesn’t mean it is wrong to combine ship and buoy data. That doesn’t mean the HadSST3 data set should be discarded. What it does mean is there may be times the HadSST3 data set isn’t appropriate for the problem being studied. There may be cases where it is better to use the ship/buoy data separately.

        The other thing that paper shows is there are a lot of details which go into trying to properly combine/adjust these data sets. It’s not an easy task, and it’s a process which requires a lot of explanation/justification to assure people you’re doing it right. That seems largely absent from the paper we’re looking at here.

    • Dear Brandon,

      When we made HadSST3 we tried adjusting the buoys and we tried adjusting the ships. It made no appreciable difference to the anomalies, or to the trends. See the HadSST3 paper, part 2, section 4.4:

      http://www.metoffice.gov.uk/hadobs/hadsst3/

      On the other hand, if you are interested in the absolute SST, it does make a difference. We know the buoys are less biased, so, in that case, you should adjust the ships.

      Two different cases, two different answers.

      Best regards,

      John

      • Thanks for the comment John Kennedy. I see there are some differences depending on the choice, but nothing notable. I think that’s something which needed to be shown. I’m not sure why it wasn’t with this other data set. I’m guessing the result would probably hold true for it too?

        Also, I’m glad to see that paper addresses another issue I had. I thought it was weird every buoy was adjusted by the exact same amount. Your paper addresses that though by making it clear the adjustment was designed for examinations of the data set as a whole and won’t work (as) well when examining individual regions.

        I’ll have to read the paper in full later today. It looks interesting.

    • Cui Bono? Who benefits? Could it be those who were desperate to explain the temperature pause? Finding no acceptable scientific answer, it must be time to remake the temperature record. It will take years to determine if the changes are valid. In the meantime public policy will push CAGW mitigation forward, grant monies will favor the consensus, and publication reviewers and editors will continue to gatekeep for the powerful.
      Where is Cicero?

  120. David Springer

    Even more sad is that this isn’t the first time a US government agency pencil whipped buoy data into CAGW compliance.

    Correcting Ocean Cooling

  121. With 20/20 hindsight, there never was a hiatus in “global” warming: OHC was always rising. OHC reflects the average ocean temperature down to a certain depth and is the quantity that requires the longest time to change in response to weather, ENSO, and forcing.

    The troposphere has the smallest heat capacity, so UAH and RSS change the most in response to weather, ENSO and forcing. If you attach the phrase “statistically significant” to warming, it is easiest to find a hiatus in this signal because it is so variable.

    The land surface record has somewhat more heat capacity. The global surface temperature record adds the heat capacity of the mixed layer.

    A truly “global” temperature would reflect all of the internal energy in the portion of the globe that warms and cools. Unfortunately that record is only a decade long, with some data that is clearly flawed covering the last half century. SST records before the satellite era are problematic, which is why we are now being told there was no hiatus. Due to all of the adjustments and changing measurement technology, no COMPOSITE record can be trusted to a few tenths of a degC. A hiatus in SSTs can only be assessed by examining each HOMOGENEOUS record type ONE at a time. This probably will place some geographic limitations on the area where a hiatus could be said to exist.

    Since the GHE is real, the hiatus was always going to end – someday. What would be left behind is a warming rate well below the predictions of the IPCC’s models. It was always stupid for skeptics to focus on a hiatus – that would end someday – instead of the low warming rate. Even a period of rapid warming – which few currently expect – will take several decades to reach the most likely rate predicted by the IPCC.

    • Pretty good, but there are lots of people who expect an accelerated rate of warming. Some within 5 years. Me, it’s already here.

      May looks like +0.80 C. June is roasting so far. El Nino is forecast to get stronger and stronger, and to last into the NH autumn.

      So beat the pause with a shovel. Then shoot it a few times. Then throw a grenade on it. And then roll over it with some heavy machinery. It’s going to be killed with extreme prejudice.

    • David Wojick

      Franktoo, you are merely stating conjectures as facts. We may well be in for a period of extended cooling, as some scientists predict. There is nothing in GH theory to preclude this because climate is not governed by GHGs. Oh and the OHC estimate is far less accurate than the wonky surface statistical models, so we really have no idea what the system is actually doing, except for the satellite readings.

      • David Wojick wrote: “Franktoo, you are merely stating conjectures as facts. We may well be in for a period of extended cooling, as some scientists predict. There is nothing in GH theory to preclude this because climate is not governed by GHGs. Oh and the OHC estimate is far less accurate than the wonky surface statistical models, so we really have no idea what the system is actually doing, except for the satellite readings.

        GHGs are responsible for the fact that the surface must average about 290 degK and radiate about 390 W/m2 in order for 240 W/m2 to escape to space. This is not conjecture – this observation is consistent with expectations from laboratory measurement on the spectra of GHGs. If we add more GHGs, the surface must warm – but how much depends on feedbacks.

        The irregularity of pre-ARGO measurements of OHC makes them highly suspect. So far, I haven’t heard any good arguments against the record from ARGO.

      • Well, at least until the OHC data gets adjusted again, the rise in OHC is pretty consistent with the increase in tropical SST/HC. The tropical oceans should be the primary heat source for the oceans, and if CO2-equivalent gases are responsible for the rise, something else with almost an identical impact must have miraculously quit just in the nick of time.

        Once another 5 or so years passes we will see how clever Karl and co actually are. I think Karl and co will catch up with Windows around version 13.

    • David Springer

      franktoo

      Stop making crap up. Best guess at OHC increase is the estimated 0.5W/m2 radiative imbalance at top of atmosphere. If every bit of it goes into the ocean it’s enough to warm the basin by 0.2C in a CENTURY. No typo – point oh two cee in one hundred years.

      • This is completely irrelevant. A diversion… a shell game from a trickster.

      • Well, warming the top 2000 meters 1 degree takes 260 watt-years/m2 so that would be about right.

        I tend to believe the temperature profile will be triangular (zero on the bottom – large on top) so there will be a 0.4°C change in SST.
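        The 260 watt-years/m2 figure, and the 0.2 degC-per-century number above it, both fall out of the heat capacity of seawater. A minimal back-of-envelope sketch, assuming a 2000 m column and round textbook values for seawater density and specific heat:

```python
# Back-of-envelope check of the ocean heat arithmetic above.
# Assumed values: 2000 m column, rho = 1025 kg/m3, c_p = 3990 J/(kg K).
SECONDS_PER_YEAR = 3.156e7

rho = 1025.0    # seawater density, kg/m3
c_p = 3990.0    # specific heat of seawater, J/(kg K)
depth = 2000.0  # column depth, m

# Energy to warm the whole column 1 degC, per m2 of surface (J/m2)
joules_per_degC = rho * c_p * depth

# Same quantity in watt-years/m2 (1 W/m2 for a year = SECONDS_PER_YEAR J/m2)
watt_years_per_degC = joules_per_degC / SECONDS_PER_YEAR  # ~260

# A 0.5 W/m2 imbalance sustained for a century delivers 50 watt-years/m2
warming_per_century = 0.5 * 100.0 / watt_years_per_degC   # ~0.2 degC
print(watt_years_per_degC, warming_per_century)
```

        The arithmetic scales linearly, which is exactly what is disputed in the replies: confine the same heat to the top tenth of the column and the warming is ten times larger.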

      • And what difference will 0.4C on the SST make to the land temperature?

        Tonyb

      • David wrote: “Stop making crap up. Best guess at OHC increase is the estimated 0.5W/m2 radiative imbalance at top of atmosphere. If every bit of it goes into the ocean it’s enough to warm the basin by 0.2C in a CENTURY.”

        The ocean is highly stratified and the deep ocean is not in equilibrium with the surface. Bottom water comes from the Arctic and Antarctic and wasn’t much colder even during the last ice age. If the current radiative imbalance of 0.5 W/m2 only warms the top 1/10 of the ocean, then using your calculation, it will warm 2 degC in a century. If warming is concentrated near the surface and decreases linearly with depth, that warming will reach the top 1/5. No one expects the ocean to equilibrate in only one century. And if TCR is 1.3 degC (as energy balance models suggest), that imbalance can warm the top 3/10 of the ocean 1.3 degC in a century.

      • David Springer

        Franktoo responds in character by making more crap up. We don’t have a frickin’ clue what the deep ocean temperature was during the last ice age yet you state it is well known.

        At least you’re right that if only top tenth of the ocean absorbs the 0.5W/m2 imbalance it will warm 2C in one century. The top tenth of the ocean is 400 meters.

        The problem with that is that ARGO buoys have already measured about a third of the OHC change accumulating below 700 meters, which begins at a depth two tenths of the way down from the surface.

        http://commons.wikimedia.org/wiki/File:Change_in_total_heat_content_of_earth.svg

        So you’re wrong as of right now. This is what happens when you make crap up.

        STOP making crap up.

      • David generously writes: “Franktoo responds in character by making more crap up. We don’t have a frickin’ clue what the deep ocean temperature was during the last ice age yet you state it is well known.”

        Try reading about benthic foraminifera, which live on the ocean floor and produce CaCO3 shells with an O18 isotope ratio that depends on the temperature at the bottom of the ocean. Then read about the Ocean Drilling Program, which has collected about 2000 sediment cores from oceans around the world and used the benthic foraminifera found in them to track deep ocean temperature through the ice ages and back as far as 65 million years ago. Alternatively, read my source: “Frozen Earth: The Once and Future Story of Ice Ages”, by Doug MacDougal, particularly the twenty-page chapter on “Coring for Details”. Used paperback copies cost $1 plus shipping at Amazon. Perhaps next time, you will have the decency to ask for a reference before you accuse someone of making things up.

        David kindly wrote earlier: “Stop making crap up. Best guess at OHC increase is the estimated 0.5W/m2 radiative imbalance at top of atmosphere. If every bit of it goes into the ocean it’s enough to warm the basin by 0.2C in a CENTURY. ”

        David’s argument was a straw man. No one expects all of the top 2000 m to warm at the same rate. I provided scenarios where an imbalance of 0.5 W/m2 would provide more than 1 degC of warming of the ocean surface as long as most of the heat remained in the upper ocean.

        David replied: “The problem with that is that ARGO buoys have already measured about a third OHC change already accumulating below 700 meters which begins at a depth two tenths of the way down from the surface.”

        Unfortunately, the idea that about 1/3 of ocean heat uptake has occurred in the deeper ocean (700-2000 m) is based mostly on unreliable observations for the last half-century (not ARGO), and no one provides a confidence interval for this fraction. Box 3.1 from AR5 WG1 is a more reliable source than the graph from Skeptical Science he linked. “Better” data from ARGO comes from the hiatus, an unusual period when the top 100 m of the ocean lost heat. Coverage of the deep ocean below 700 m was limited and problematic before ARGO. According to Lyman & Johnson (2014), the starred values* are what ARGO tells us about heat accumulation (W/m2 globally) in the 2004-2011 period. I calculated the numbers in the column “Warming” and the accumulation in the separate layers (100-300m, 300-700m, 700-2000m). Warming in W/m2/100m was converted to degC of warming using the conversion factor 1 W/m2 = 0.1 degC/yr/100m and multiplying by 8 years. Assuming my calculations are correct, the warming detected by ARGO below 300 m over this period is 0.01 degC or less. Note that the radiative imbalance is only 0.29 W/m2, not 0.5 W/m2, in this period.

        depth         Accumulation (W/m2)   Warming (W/m2/100m)   8-yr warming (degC)
        0-100 m           -0.04*                -0.04                 -0.032
        0-300 m            0.10*
        0-700 m            0.13*
        0-2000 m           0.29*
        100-300 m          0.14                  0.07                  0.056
        300-700 m          0.03                  0.0075                0.006
        700-2000 m         0.16                  0.012                 0.0096

        http://floats.pmel.noaa.gov/sites/default/files/atoms/files/gcj_4n.pdf
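        The derived columns of the table can be reproduced from the starred cumulative values alone; a short script (using the stated conversion of 1 W/m2 into a 100 m layer ≈ 0.1 degC/yr, over the 8-year window) recovers the quoted layer fluxes and warmings:

```python
# Recompute the derived columns of the table above from the starred
# cumulative heat-accumulation values (W/m2, per Lyman & Johnson 2014),
# using the conversion stated in the comment: 1 W/m2 absorbed by a 100 m
# layer warms it ~0.1 degC/yr, over the 8-year (2004-2011) ARGO window.
cumulative = {100: -0.04, 300: 0.10, 700: 0.13, 2000: 0.29}  # 0-to-depth, W/m2
DEGC_PER_YR = 0.1   # degC/yr of warming per (W/m2 per 100 m)
YEARS = 8

results = {}
prev_total = 0.0
for top, bottom in [(0, 100), (100, 300), (300, 700), (700, 2000)]:
    layer_flux = cumulative[bottom] - prev_total   # W/m2 into this layer alone
    per_100m = layer_flux / ((bottom - top) / 100.0)
    warming = per_100m * DEGC_PER_YR * YEARS       # degC over the 8 years
    results[(top, bottom)] = (layer_flux, per_100m, warming)
    print(f"{top}-{bottom} m: {layer_flux:+.2f} W/m2, {warming:+.4f} degC")
    prev_total = cumulative[bottom]
```

        This matches the table: 100-300 m accumulates 0.14 W/m2 (0.056 degC over the period), while 700-2000 m accumulates 0.16 W/m2 but warms only ~0.01 degC because the heat is spread over 1300 m.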

        If one believes this data – which lacks confidence intervals – 100-300m is accumulating 5X as much heat as 300-700m and warming 10X faster, consistent with my expectation that the top of the ocean should accumulate most of the radiative imbalance. Meanwhile, the deeper ocean (700-2000m) has collected more than half the total radiative imbalance and is warming at 2X the rate of the water immediately above (300-700m). If correct, there certainly are some strange things occurring – heat is supposedly reaching the deeper ocean without raising the temperature of the 300-700m layer, and while the top 100m is cooling. Eight years is a very short period, and these rates may be skewed by the state of ENSO or some other transient phenomenon at the beginning or end of the period. IMO, none of the OHC data proves that most of the radiative imbalance over the next century CANNOT end up concentrated near the surface, as I suggested and climate models predict.

        In the next few decades, we may obtain unambiguous evidence that the upper and deeper ocean DO warm at similar rates. We might have a better understanding of the mechanism that is carrying heat to the deep ocean against the existing density gradient, and how it compares with the transport of CFCs and C14 (from atmospheric testing of atomic bombs) into the deeper ocean. IF the rates turn out to be similar, David would be correct in concluding that a 0.5 W/m2 radiative imbalance alone can’t produce changes in SST of about 1 degC or more in the coming century. This discovery would also invalidate all of today’s climate models (probably causing OHC data to be endlessly reprocessed and homogenized – like radiosonde data – until no one except partisans knows what to believe).

        Even if David were correct, rapid warming of the deeper ocean would merely postpone, not prevent, the surface warming caused by a radiative imbalance. Conservation of energy demands that the planet warm somewhere until any forced radiative imbalance is eliminated.

  122. CO2 has no influence on climate, solar cycle 24 and net of ocean cycles are both on down-slope. The only way to make it appear that it is still warming is to change the temperature numbers.

    See the proof that CO2 has no effect on climate and discover what does cause climate change (it explains 97+% of average global temperatures since before 1900) at http://agwunveiled.blogspot.com. It has been a plateau, not a hiatus. Temperatures are going down.

  123. When will people stop agonizing and arguing about every little blip in the global surface temperature change? It is the worst possible environment in which to hope to get a low-variance measurement. Why is NOAA not referring to the rest of their ARGO measurement data, which shows only 0.42 watts/m2 global warming averaged over its last 10 years? Global warming is happening, and at such a low rate that no one, regardless of their frenzied concern for the planet, should care. The fact that it denies the conjectured 1.6 watt/m2 due to CO2 alone should engender interest, even in a government agency like NOAA. The fact that they can reference their ARGO buoy data and yet ignore their own measurements is worthy of question by the interested community.

    • Mosh

      The secular student alliance? Where do you find these people? It would be good to know who wrote it so we can judge their likely competence

      Tonyb

      • Whoever they are, it looks like McKitrick was schooled by them.

      • David Springer

        Secular Student Alliance == atheist organization

        It’s a political response from the demolib side of the culture wars. No science in sight, just the usual suspects on the extreme left of the political spectrum.

      • Part of that post seems to be making the same point that John Kennedy made here.

      • It’s always been more significant that the 30 plus year trends are lower than modeled, rather than there being a pause or not.

        And Karl et al. indicate even lower 30-year trends?

        Still, the trends for nearly all the data sets other than Karl do indicate negative trends since 2002, raising questions about whether Karl is accurate or not.

      • ATTP

        During which decade do you believe we start to have a genuinely reliable and accurate global SST?

        Tonyb

      • Still, the trends for nearly all the data sets other than Karl, do indicate negative trends since 2002

        Really, I seem to get positive trends for all datasets from 2002 – and quite large uncertainty intervals.

      • Looks like at least one really smart kid at the University of Michigan.

        Only takes one.

      • JCH: “Looks like at least one really smart kid…”

        Yes, but likely still naive to the ways of the world. He’s been well indoctrinated. He finishes with the “whatever” argument: authority is always correct.

        I think the best thing to do is to consider the authors as knowing what they’re talking about.

        USA college student attitudes have come a long way since the 1960s.

      • He’s calling McKitrick out for misrepresenting some papers and numbers. I think he should be allowed to do that. No respect for seniority there. Good for him.

      • If the Michigan guy thinks the best ocean temperature data should be adjusted with garbage data, that puts him in the less-than-genius camp.

      • Don Monfort

        The clown doesn’t even understand what Ross is talking about. He had made a judgement and had a preconceived plan of attack, before he ever read Ross’s analysis. This shows he’s got his head somewhere it don’t belong:

        “Why does McKitrick assume that the correction should be a massive over-statement like this? He gives zero reason anywhere in his post, and I would much prefer the idea that the researchers who have been working on this correction for a few years now probably know what they’re doing, especially considering that they all came to very similar answers.”

        He is talking about the numerical example that Ross gave to ILLUSTRATE how making changes in assumptions “can cause important changes to the results.” Ross is not proposing that as an alternative correction. That’s ridiculous. Don’t those jokers know how to read?

        So the Karl et al. gubmint employees must be right because they have been working on this for years and they all came to very similar answers. Really? Why did it take them years looking at the data to make a few assumptions? And why haven’t the other SST data product fabricators thought of employing those same assumptions? They have been aware of the same data for the same length of time. How come the overseer of the settled climate science dogma, IPCC, didn’t know about this neat trick? They recently gave an overview of the settled science that included the pause. Karl goes against the dogma. Why is this one paper based on dubious and novel assumptions being promoted as overturning IPCC pause dogma? The climate science is a joke science.

      • Ross misused standard deviation and misrepresented other papers on the subject. This was important to point out, otherwise some people would have been misled.

      • Steven Mosher

        “The secular student alliance? Where do you find these people? It would be good to know who wrote it so we can judge their likely competence”

        Huh.

        I don’t care about the competence or who writes stuff. That’s why I read YOU. I read the argument. Anyone can make one.

        play the ball not the man.

      • TurbEd

        Your data ended at 2014.33

        Try HadCrut4

        And then come back and try it once May is updated. It won’t get better.

      • Mosh

        That was not what I was implying at all. Yes, read the argument, but is it being given by a PhD with decades of experience or an aspiring PhD? It makes a difference, inasmuch as the arguments as I saw them appeared lightweight and incomplete, and the error bars seemed wildly different to those posed by Ross.

        Without knowing if the author is an expert in statistics or analysis or whatever it is difficult to know who should be given more weight.

        I do think it strange that the pause can be massaged away, suggesting that the thousands of hours of work on this subject by other authors has previously come up with inaccurate analysis.
        tonyb

      • Well, it’s not just the pause that ‘disappeared’; it’s also the 1998 El Nino warm bump.

      • And then come back and try it once May is updated. It won’t get better.

        Yes, we’re seeing an ENSO event, one that appears to be ending the cooling trend since 2001, and now the trend since 2002.
        But you know, ENSOs don’t last forever.

        Also, the Karl pronouncement is for data through 2014, so it is at odds with other data sets which indicate negative trends from 2002 through 2014.

      • Climate Reason—I’ve been writing about climate for a while now, and started at Skeptical Science a while back before my time at the University of Michigan. Am now starting to get back into the swing of things, but mainly on our SSA blog.

        David Springer—most of the students who post at our blog are liberal, yes. While my response to McKitrick did not include a lot of “science”, it did include discussion of the problems of his handling of numbers, as well as representations of papers. My second linked post responding briefly to Michaels and co. deals more with some of the algebra.

        Ron Graf—that finishing statement is not an argument. All of the preceding article was the argument. Maybe you missed it.

        Don Montfort—McKitrick’s numerical example, in order to illustrate how certain corrections can create a trend, should actually include corrections that accurately portray what happened in Karl et al. They did not accurately portray Karl et al. I think I was pretty clear about that. And no, Karl et al. are not right because they’re government employees, or even because they’re experts. You’ll notice I very clearly bolded the request for a serious consideration of the NMAT corrections, instead of the bad ones McKitrick gave.

      • Captain Dallas:

        A distribution does not have to be “well-behaved” in order to have the CLT apply to it. If you sample from something that has a much higher standard deviation than what you were sampling before, and lump in those observations, the total distribution will indeed gain standard deviation too but the standard error will catch up in the long run as sample size grows.

        And about the normal distribution: not really, you can test this out for yourself as well. The sample mean distribution approaches normal (that is in fact the CLT), even given a very very wide variety of sampling distribution choices.

        My comment below responding to Hide the Decline has more details and a link.
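        The standard-error point is easy to simulate. The sketch below mixes a low-variance source with a much noisier one (purely illustrative numbers, not real temperature data) and shows that the empirical standard error of the mean still shrinks roughly as 1/sqrt(n), even though the lumped-in observations inflate the sample’s standard deviation:

```python
# Mix a low-variance source with a much noisier one (illustrative numbers,
# not real temperature data) and measure how the standard error of the
# mean behaves as sample size grows.
import random
import statistics

random.seed(42)

def draw():
    # 80% "quiet" observations, 20% from a source with 5x the spread
    if random.random() < 0.8:
        return random.gauss(0.0, 1.0)
    return random.gauss(0.0, 5.0)

def std_error_of_mean(n, trials=2000):
    # Empirical spread of the sample mean over many samples of size n
    means = [statistics.fmean(draw() for _ in range(n)) for _ in range(trials)]
    return statistics.stdev(means)

se_small = std_error_of_mean(10)
se_large = std_error_of_mean(1000)
print(se_small, se_large)  # second is ~10x smaller, since sqrt(1000/10) = 10
```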

      • Don Monfort

        Nice footwork, Alexander. However, if your reading comprehension or your honesty were on a little higher level, you would have to admit that Ross didn’t say he was offering an example that was analogous to the Karl paper. Maybe you would benefit by reading Ross again, if reading is the problem and not the other.

        Why don’t you explain why adjusting the buoys was a good idea, rather than adjusting the ships that pass in the night, or adjusting both, or adjusting neither? The many other producers of SST data products have known about the discrepancy between the better buoys and the sketchy ships for a long time. They have not chosen to screw with the buoys. Do you wonder why?

        I see that Judith has observed that they also erased the 1998 El Nino. Nice work, Karl et al.

      • (These chains are long so I’m scrolling a lot to reply; I think this is in the correct chain. This is for Don Monfort (not Montfort).)

        “you would have to admit that Ross didn’t say he was offering an example that was analogous to the Karl paper”

        Ross and I have had a bit of an exchange at my original article and I have clarified what I meant, in response to clarifications he gave on what he meant. My main point is that his example *was* meant to show how some of the corrections can have an impact on trends, and that with the very large over-correction that was applied to the buoy simulated data, the uptick at the end deviated from the simulated “true” value inappropriately. I doubt that many others at WUWT have given a very close look at this issue, so when the numerical example showed warming above the “true” value, I was very suspicious of the intent behind the example (as well as, of course, the methods). Either way, my post now reflects a more nuanced description, though I have not weakened my stance.

        “Why don’t you explain why adjusting the buoys was a good idea, rather than adjusting the ships that pass in the night, or adjusting both, or adjusting neither?”

        I do, in my next post. (It is also linked in that one, in the indented section describing correction (1), by the word “later”.)

      • And Then There’s Physics: yes, you are correct; from the time stamp of my original article I may have posted it a couple hours after John posted his reply here. I swear I came to that conclusion myself though, I swears! :-)

      • Don Monfort

        Is it the reading comprehension this time, Alex?

        “Why don’t you explain why adjusting the buoys was a good idea, rather than adjusting the ships that pass in the night, or adjusting both, or adjusting neither?”

        You assert that there is a bias that has to be corrected. The difference between the pretty good buoy data and the dubious ships has been known for some time, and nobody else has felt compelled to adjust the buoys, or the ships. Some might suspect that this has more to do with erasing the pause in time for Paree than it has to do with the science.

        An aside, about your secular student alliance:

        “Our group has grown considerably since then and is the only group for atheist, agnostic and free thinking students on the University of Michigan campus!”

        Only secularists can be free thinkers, Alex?

        “While the group is primarily geared toward atheists, agnostics, and non-theists, we welcome any atheist-friendly person or anyone who would like to see Church and State remain separate.”

        I don’t know anyone who wants to unite Church and State, other than the beheaders who follow a prophet whose name we won’t mention. Are those the only people you want to keep out? You might want to tighten that up lest a religious person who is not interested in uniting Church and State slips in.

        When I was a misguided youth, I helped trash some of those Detroit neighborhoods that your group is cleaning up. Kudos on that. By cleaning up, I assume you mean beautification performed during daylight hours. I could do a real clean up, with one brigade of the 82nd Airborne. The locals would raise statues in my honor. Of course the ivory tower types would be horrified.

      • “By cleaning up, I assume you mean beautification performed during daylight hours. I could do a real clean up, with one brigade of the 82nd Airborne. The locals would raise statues in my honor. Of course the ivory tower types would be horrified.”

        I wonder if the 82nd Airborne would feel the same way as you about military strikes on American civilians. Glad to know I’m dealing with someone who thinks mass murder should be celebrated.

      • Don Monfort

        There you go again, al. Where did you get the mass murder story from? Military units have been deployed in civilian areas in the U.S. on many occasions. You are just making that crap up about mass murder. Is this an example of your free thinking, al? That is left-wing loon clownish thinking, al. Are you going to apologize for that, al?

        Can you think of any reason other than mass murder for deploying military units in the war zones of our broken cities, al? Think hard! I’ll have to help you, al. When the professional soldiers show up, the local bad boys who have been terrorizing and murdering their neighbors go into hiding. The shooting stops, the kids play in the streets. You little free thinking smarties spending a couple of hours out there with a broom does practically nothing, except make you feel good.

      • Don Monfort:

        *sigh*

        It would appear I am wasting my time responding to you. So your comment wasn’t about a strike on Detroit, but instead about declaring martial law in Detroit? It’s only a slight relief to know what you’re talking about now.

        I’ve made my points about McKitrick’s post, and you have not responded to them in any sort of satisfactory way, instead insulting me for being young or liberal or an atheist or whatever. If I can’t expect better I’ll just take my ball and go home.

      • Don Monfort

        Judith, can you explain why you deleted my comment, but you leave this crap?

        “Glad to know I’m dealing with someone who thinks mass murder should be celebrated.”

      • Well, because I didn’t spot it. Provide me with the link (in the comments or via email)

      • Don Monfort

        I am not asking you to delete it. I find it amusing and very revealing of the character of the hysterical clown who said it. I am more interested in why you delete some comments with relatively innocuous tweaks and you leave a lot of crap that is way more offensive. Rhetorical question.

      • It’s what I happen to spot while scrolling through my dashboard of recent comments.

      • Steven Mosher

        Tony you have zero idea how difficult the sst data is. You barely understand land data.
        Step away from the keyboard.

      • Mosh

        I understand how SST data is derived quite well, thank you. If I step away it leaves you free to not answer questions and misread comments, like when you claimed I said tenths of a degree when I didn’t.

        Would you like to answer my question as to when global SST data can be considered reliable?

        Tonyb

    • Standard error versus standard deviation is kind of an issue to some. If you have a butt load of data points and a friendly distribution you have very small standard error. So if you use all the data you can get your hands on you should have the smallest standard error, assuming of course a friendly distribution of error.

      No matter how many data sets you combine, though, every “surface” temperature reconstruction is going to have a standard deviation. LT has the largest, “surface” stations next, then SST, and finally vertical ocean temperature has the least. When you include the nether regions close to the poles you are adding noise, but thanks to all the other data points you have that big n to keep the standard error small.

      Only problem with that is those nether regions increase the standard deviation. So I personally think you need to consider both, so you can see if you have over-polished your turd.

      That data got a pretty good polishing and miraculously found the missing heat a few years back.

      • I had not analyzed Ross’s article. Thank you, Don, for pointing out a false assumption by the kid. Clearly he is smart, but not smart enough to realize Ross is courteous enough to write in English, whereas most klimatariate papers are impenetrable by all but those in the field (though they get published in broad-readership journals).

        What struck me is the kid’s finale. He stands on the iron logic that one ought to yield the benefit of the doubt in scientific arguments to authorities. And, of course, that is not a usual argument for a kid to make, for the obvious reason. Thus it told me he was parroting – probably his biology prof or someone else who would make that argument. I have always disagreed with that argument. I never assert my authority as a reason to heed my argument unless I am personally responsible for the decision. I do, however, on complex matters or ethics, appeal to those junior to me to trust my judgment and the use of life and career experience.

        David Springer | June 7, 2015 at 2:43 pm |
        Correcting buoy data instead of ship data is like correcting an atomic clock instead of a sundial.

        Yes. It does seem desperate and contrived. One would expect more from “respected experts.”

      • He is not saying you have to defer to authority, only that McKitrick’s effort is so half-baked and full of errors, he will just wait for a more serious effort.

      • Captain Dallas is correct that the standard deviation and the standard error are both important for their own reasons, but when calculating the error of a mean, one should use the standard error, which takes sample size into account because the sample mean converges to the population mean according to the Central Limit Theorem. We don’t even need to ask what other regions do when you include them: clearly the distribution of samples from any particular region does not change with more observations; it just becomes more clearly defined. But this is important: the distribution of the mean and the distribution of the sample are not the same thing. And if you want to know how accurate your mean is, you use the former (and standard errors). This is not an “issue to some”; this is just how you do that calculation.

        Jim D accurately states my final position.

      • Alexander, I have to disagree with that.

        Since you are trying to create a continuous reconstruction of temperature with only a fairly small portion having the “best” data, you aren’t really sure what biases there are. SST isn’t as bad as surface air temperature, but adding the Antarctic to “global” increased the standard deviation while reducing the standard error. I would expect a truly well-behaved distribution to reduce both. I believe the Central Limit Theorem is somewhat restricted to a uniform distribution.

        Not sure what impact that has, but that is the issue.
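        Alexander’s counterpoint above is checkable: the CLT does not require a uniform-like parent distribution. A quick illustrative simulation with a strongly skewed (exponential) parent shows the distribution of the sample mean losing its skew as n grows:

```python
# The CLT does not require a uniform-ish parent: even for a strongly
# right-skewed source (exponential, population skewness = 2), the
# distribution of the sample mean becomes nearly symmetric as n grows.
import random
import statistics

random.seed(0)

def skewness(xs):
    # Simple population-moment skewness estimate
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean((x - m) ** 3 for x in xs) / s ** 3

def mean_skewness(n, trials=4000):
    # Skewness of the distribution of sample means for samples of size n
    means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
             for _ in range(trials)]
    return skewness(means)

s1 = mean_skewness(1)      # single draws: heavily skewed (theory: 2)
s100 = mean_skewness(100)  # means of 100 draws: nearly symmetric (theory: 0.2)
print(s1, s100)
```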

      • David Springer

        Alexander

        Do you realize how skewed the temporal distribution of buoys is in the record?

        If you change ship temps you change a trend in the more distant past when there were no buoys. If you change buoy temps you change the more recent trend where buoys now dominate the record with far greater accuracy and precision.

        So if you want to kill the pause you change buoy temps not ship temps. Isn’t that just precious?

      • David Springer:

        I’ve written a second article that deals with that issue, and it’s been up for some time now. I would recommend it, because it shows how the correction ends up producing the same SST anomaly series no matter which way you apply it.

        http://michiganssa.blogspot.com/2015/06/other-bad-wuwt-commentary.html

      • David Springer

        Thank you for pointing out that Hal Lindzen made exactly the same comment that Judith Curry made, which was exactly the same comment I made. All independently. So what do I do when I find myself in agreement with two of the most senior living atmospheric physicists in the world vs. being in disagreement with you, an unknown postgraduate student?

        You see where I’m going with this?

      • David Springer

        In your article at secular students alliance you failed to use the actual dataset and instead used a fabricated dataset with a linearly increasing number of buoys ending the series 5 years early. That’s not what happened in the real world.

        https://en.wikipedia.org/wiki/Weather_buoy

      • Alexander

        You must surely be aware of the inherent problems with data derived from ships, buckets or intakes?

        During which decade do you believe that global SSTs became accurate to fractions of a degree, thereby making them a worthwhile metric that can inform important policy decisions? Thanks.

        Tonyb

      • Steven Mosher

        Tony nobody argues that it is accurate to a tenth.
        Nobody.

      • David Springer

        What do you estimate the error range to be on the estimates of SST?

        And two followup questions:

        1) Is that a prediction?

        2) Where in the world is Mr. T?

      • Mosh

        Read harder. I said fractions of a degree, not tenths. :)

        What do you believe their level of accuracy to be, and from what decade can we rely on them as a good representation of global sea surface temperature?

        Tonyb

      • Don Monfort

        What Springer said, secular student alex. Show us that it works the same both ways with the real data. Frankly, I am suspicious. If they could have done the same thing by adjusting the notoriously dubious ship data, why did they cause a doo-doo storm by adjusting the buoy data? Since this appears to be an obvious PR stunt from the beginning, you could score some points by clearing this up.

        While you are at it, make an attempt at honestly answering the question I have asked you repeatedly, with no complete answer. Why did they make any adjustment at all, given that the many other producers of SST data products have known about the differences between the buoy and ship data for as long as Karl et al. and have decided not to apply the dubious adjustment? Or you can find some excuse to take your ball and go home, again.

      • David Springer:

        “So what do I do when I’m faced with finding myself in agreement with two of the most senior living atmospheric physicists in the world vs. being in disagreement with you, a unknown post graduate student.”

        My statements are in line with researchers who have actually been working on this issue. They say that there is no difference between the two methods. But how in the world shall this statement of mine be taken? I was criticized earlier for my closing comment saying we should trust the experts—shall others now levy the same criticism against you, or will it just be repeated to me? At least the experts I cite have been working on this issue.

        Regardless of statements from experts who have not done the math here, my example in the blog post holds. Your comment:

        “[…]and instead used a fabricated dataset with a linearly increasing number of buoys ending the series 5 years early. That’s not what happened in the real world.”

        is true in only the most useless way, and your Wikipedia link does not support your statement. Instead, Kennedy et al. (2011) gives the metadata for buoys v. ships in Figure 2, and yes it is a linear increase, no it does not stop 5 years early, so sue me. Correcting that will have literally no effect on the nature of the problem at hand.

        http://hadobs.metoffice.com/hadsst3/part_2_figinline.pdf

        My use of a “fabricated” series is to show that the style of the correction does not depend on the data series at hand, because it is simply algebra. See my comments following up to someone who made the same bad argument as you on my blog post (was that you?).

        Climate Reason:

        “During which decade do you believe that global SSTs became accurate to fractions of a degree, thereby making them a worthwhile metric that can inform important policy decisions? Thanks.”

        I do not believe your question to be well-posed. We do not judge relevance of data based on how accurate it is over a given decade. Homogenization methods and various means of bias correction can help to make inaccurate data more accurate, so there’s not a decade during which global SSTs suddenly “became” accurate and thus relevant. Data is always relevant, but that is not the same as saying it is without errors. As it were, part of writing those blog posts was actually reading these papers that describe the problems with the ERI or bucket or buoy measurements.

        Don Monfort:

        “Show us that it works the same both ways with the real data.”

        I’ll briefly summarize one of my comments on my blog: if a correction is independent of the data series then it will produce the same results no matter the data series. That’s just the nature of the problem. I showed it worked with an illustrative example—it is simple algebra and will work with all data series that have a comparable change in buoy fraction, like the real data. Either way, I don’t think people have been understanding this point: the researchers themselves have been pointing out how there is no difference, they have checked it.

        This has stopped being an issue of anyone’s ability to communicate how there is no difference. Now it is an understanding issue. If you do not understand it at this point, then there is nothing left for us to discuss. And yes I will keep taking my ball and leaving if you’re going to be unpleasant again.

      • David Springer

        “[…]and instead used a fabricated dataset with a linearly increasing number of buoys ending the series 5 years early. That’s not what happened in the real world.”

        is true in only the most useless way, and your Wikipedia link does not support your statement. Instead, Kennedy et al. (2011) gives the metadata for buoys v. ships in Figure 2, and yes it is a linear increase, no it does not stop 5 years early, so sue me. Correcting that will have literally no effect on the nature of the problem at hand.

        http://hadobs.metoffice.com/hadsst3/part_2_figinline.pdf

        My use of a “fabricated” series is to show that the style of the correction does not depend on the data series at hand, because it is simply algebra. See my comments following up to someone who made the same bad argument as you on my blog post (was that you?).

        Yes, Wikipedia does support my claim. Buoy numbers rapidly increased in the 2000s when major programs began. It was not linear. A small number of buoys started being deployed in the late 1970s. The link you gave to Kennedy 2011 is a literature bluff, i.e., it does not provide the support you claim. If it does and I missed it, please quote it. And no, it’s not algebra. It’s calculus.

      • Alex

        Having studied the historic SSTs in some depth (pun intended) I would not place any credence on them prior to the 1960s. I have had many discussions with John Kennedy concerning this.

        Whilst some well travelled trade routes might bear scrutiny prior to this date the general lack of readings and the methodology utilised renders many readings pointless or highly suspect.

        It is perfectly reasonable to ask you when you believe that GLOBAL SSTs started to have the scientific credence you seem to be according them. I look forward to your answer.

        tonyb

      • Rather than assertions, we like to see evidence around these parts, secular student. If we press you for the evidence it’s just how we roll. And if you can’t stand up to the level of mild unpleasantness you are running into here, you shouldn’t be allowed to roam the internet without adult supervision.

        PS: You are still dodging the question: Why did the many other fabricators of the SST data products choose not to do this adjustment? I’ll add another question: Given the nature of the SST data, why isn’t the satellite data a more reliable gauge for recent years, including the period of the pause that is killing the cause? I hope I haven’t hurt your feelings.

      • Steven Mosher

        Tony, nobody believes it to a FRACTION either.

        You and others simply DO NOT GET what it means when we
        say the average is 1.5.

        That does not mean the quantity is known to that level of precision or accuracy.

        That’s NOT what a spatial average means.

        Get that through your thick skull.

      • Mosh

        Just because you say something in a forceful and belligerent manner doesn’t make it true.

        From Envisat:

        01:25 – SOT, Shubha Sathyendranath, Senior Scientist, Plymouth Marine Laboratory:
        In the context of climate change, the oceans are the greatest reservoir of the anthropogenic, of the excess heat that our planet is accumulating, and the satellites help us to see how – at least at the surface – the temperature is changing. With the melting of the ice and the increase in the temperature, sea level has been increasing, not everywhere, and not uniformly, but overall the trend is an increasing trend.

        01:47 VO: ESA’s Earth observation satellites have been monitoring oceans for years. On board the Envisat satellite, an infrared radiometer measured sea surface temperature to within a fraction of a degree.

        Plymouth Marine Laboratory, the UK Environment Agency, the British Parliament and many other agencies believe they know SSTs to fractions of a degree.

        Now, instead of lambasting me for making correct statements, how about dealing with the fact that we often proclaim a degree of accuracy that is unrealistic? Also, why don’t you answer the question as to when you believe we started to have an accurate picture of GLOBAL sea surface temperatures.

        tony

      • Steven Mosher

        Tony

        Just because you quote people who don’t know how to express the actual meaning of an average doesn’t make what you say, or what they say, true.

        In the first place, satellites do NOT measure bulk SST.

      • Steven Mosher

        Tony

        “During which decade do you believe that global SSTs became accurate to fractions of a degree, thereby making them a worthwhile metric that can inform important policy decisions? Thanks.”

        GLOBAL SST: This refers to global average of Sea Surface temperature,
        Otherwise known as BULK SST.

        Bulk SST is the temperature of water BELOW THE SURFACE.

        It is measured by ships and by buoys below the surface.

        Nobody claims THIS average is accurate to within fractions of a degree.
        When we talk about the global SST record, that is what we are referring to.

        To bolster your claim you cite details from Envisat.

        Before you even BEGIN to cite stuff you don’t understand, read the foundational HISTORY:

        http://atsrsensors.org/pdf/(A)ATSR%20Exploitation%20Plan%20Volume%204%20(Issue%201).pdf

      • Mosh

        So The renowned Plymouth Marine Laboratory don’t know how to express temperatures properly and I shouldn’t be quoting them?

        http://www.pml.ac.uk/Research

        Come on Mosh. Do show a little humility sometimes.

        Its 1 am. good night to you.

        tonyb

      • climatereason,

        You may have difficulty.

        As Steven Mosher said –

        “GLOBAL SST: This refers to global average of Sea Surface temperature,
        Otherwise known as BULK SST.

        Bulk SST is the temperature of water BELOW THE SURFACE”

        So as I have said before, the temperature which climatologists avoid at all costs is the actual surface temperature, for obvious reasons.

        The same applies to terrestrial surface temperatures of course. All in all, a terribly unscientific mish-mash!

      • David Springer

        “Come on Mosh. Do show a little humility sometimes.”

        Not a chance. Bluster is all he’s got.

      • For Mosher and Tonyb.

        We (normals – people who aren’t climate alarmists) understand perfectly well what ‘global average’ means – we know it’s an academic gibberish figure. We GET IT. We’re just not impressed by it. And we are amused/enraged to see it constantly fiddled with by NOAA etc etc.

        What we ALSO GET is how that global average – that academic w-nk analysis – is bandied about for public consumption by climate scientist alarmists like Schmidt Mann etc, by climate zealots like JCH and ATTP etc and by politicians like POTUS Obama and the last Labor govt in Australia.

        Tonyb is right. Mosher is wrong and you can tell this empirically cos Mosher’s losing his rag.

      • hidethedecline

        It is quite remarkable how reluctant highly intelligent people like Mosh and Alex are to answer perfectly reasonable questions.

        We proclaim SSTs as being important pointers to climate change and cite what look to be somewhat precise figures dating back over 150 years, claiming them to be global averages.

        The idea that we have reliable data stretching back some 150 years that enables us to perform an accurate calculation on a global basis I find extraordinary, especially as I have taken the trouble to look at the history of such measurements.

        Simple question: “During which decade did GLOBAL SSTs become reliable and accurate enough to be used as a true scientific measure that is useful enough to inform public policy?”

        How difficult can it be to answer that?

        tonyb

      • stevenreincarnated

        Scratch out what appears to be a calibration error in ARGO and you don’t really have much left over. Could be about the same as it was in ’92.

      • I hear you Tony. I am really starting to be convinced that climate alarmists like Mosher and Karl etc don’t get reality. They aren’t aware the weather, the actual accurate temps, have been recorded by wine makers, horse race carnivals, cricket tests, religious festivals, weather nerds (‘rainy today mother’) etc etc for 2 millennia. Mosher and Karl will just keep kriging and fiddling and adjusting and so on and so on until there will be this completely alt history climate science alarmist world average record.

      • Climate Reason:

        I have been reluctant to answer your question because of the reason I gave to you in your previous response. I also would think that if you have been discussing this extensively with Kennedy and others, then they would be much better at giving an answer to that question since it is their area of expertise. I am not sure if you have asked it of Kennedy here yet (I have not read all of the comments on this thread), though he is active here. What has been his answer if so?

    • McKitrick says the buoy adjustment error bar is 0.12 ± 1.7.
      The Michigan kid says the error bar is 0.02?
      Who is right?

      • Both of them. It would be nice to have a real expert to explain the differences with respect to different applications.

        As far as a number goes, though, 0.02 is the standard error based on the data used. What that data ends up representing can be in the 1.7 C range. For example, if you need surface skin temperature for a radiant heat transfer application, the reconstructed temperature could be 1.7 C cooler than the skin temperature. If you are looking for a bulk layer temperature that would indicate changes in ocean heat uptake, the reconstruction could be high.

        Since that collection of data is supposed to represent energy, the larger uncertainty, the standard deviation, is more important than the standard error if you are into thermodynamics.

        This is a fairly common problem and not very easy to communicate.

      • I believe that I am right, and I can probably explain why:

        Let’s say that you want to make observations of something; let’s move out of climate and into blood alcohol content or something. Maybe you want to know the average BAC of college students on a Friday night so you start picking up kids and giving them breathalyzer tests.

        You’ll end up with some sort of curve (maybe bell shaped, maybe bimodal, whatever) that starts to represent the true distribution of BAC amongst students. As you get more data points, the curve becomes smoother, simply better defined.

        Now you want to know the average BAC, easy enough to calculate. How accurate is your sample average? Well, consider this: with few observations it should be really inaccurate, right? So uncertainty is high. With ALL possible observations, it should be bang on the money, because the sample distribution is thus exactly the population distribution.

        However, clearly the sample distribution did not lose variance itself with more observations. It converged to the population distribution. This is just like how a bell curve does not somehow shrink and narrow when you add observations.

        This is because the sample distribution and the distribution of the sample *mean* are not the same thing. Standard error, which (for an unbiased estimate) is √(var(x) / (n-1)), describes the sample mean uncertainty, whereas you can calculate a standard deviation for the sample distribution which does not depend on your sample size n. But the former clearly does.

        You can check this in any introductory statistics textbook, or online. It’s a simple result of the Central Limit Theorem.
        http://www.a-levelmaths.com/Summary%20Handouts/Central%20Limit%20Theorem%20Estimation%20Summary.pdf
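        The distinction can also be checked with a few lines of simulation. A minimal Python sketch (the normal population, its mean of 0.5, and its SD of 0.3 are illustrative values standing in for the hypothetical BAC readings, not anything from the datasets under discussion):

```python
import random
import statistics

random.seed(42)

def sd_and_se(n):
    """Draw n observations from a fixed population; return the sample
    standard deviation (spread of the data) and the standard error of
    the sample mean (uncertainty of the mean)."""
    # Illustrative normal population: mean 0.5, SD 0.3.
    xs = [random.gauss(0.5, 0.3) for _ in range(n)]
    sd = statistics.stdev(xs)
    se = sd / n ** 0.5
    return sd, se

for n in (10, 100, 10_000):
    sd, se = sd_and_se(n)
    print(f"n={n:>6}  sample SD={sd:.3f}  SE of mean={se:.4f}")
```

        The sample SD settles near the population value (0.3) as n grows, while the SE of the mean keeps shrinking roughly as 1/√n: the distribution of the data and the distribution of the mean are different objects.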

      • Alexander, “I believe that I am right, and I can probably explain why:”

        As far as most basic applications go, you are most likely right. Thermo, though, has a few added issues, mainly latent energy. Temperature only provides a “sensible” energy estimate.

        Then you have E proportional to T^4. If you convert all your T data to S-B equivalent energy and use the same method, you are going to end up with a different “effective” temperature, which is the more realistic uncertainty for a thermodynamic application.

        Temperature data sets are what they are and have to be used, but there is more involved in the interpretation of the data reconstructions than standard error.

      • Captain Dallas: I’m not sure where latent or sensible heat should come in here, though. If we’re not moving between units, then the error estimates don’t change.

        If instead I wanted to turn a temperature error measurement into an error measurement on radiative forcing of a blackbody at my temperature, I’d certainly not use those same errors, that’s true, but there are ways to figure out stuff like that. I could sample my temperature from the mean/uncertainty distribution and just use a Monte Carlo method, or maybe I could use a direct equation in a propagation of errors method, as here for instance:

        http://en.wikipedia.org/wiki/Propagation_of_uncertainty#Simplification
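        The Monte Carlo route is easy to sketch. Assuming a blackbody and purely illustrative numbers (T = 288 K with a 0.5 K one-sigma error, not values taken from any dataset here), the sampled spread of σT⁴ can be compared against the linear propagation-of-errors formula, σ_F = |dF/dT|·σ_T = 4σT³·σ_T:

```python
import random

SIGMA = 5.670374419e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
T_MEAN, T_ERR = 288.0, 0.5    # illustrative temperature and 1-sigma error, K

def flux(t):
    """Blackbody emission at temperature t (K), in W m^-2."""
    return SIGMA * t ** 4

# Monte Carlo: sample T from its uncertainty distribution and push each
# sample through the nonlinear function.
random.seed(1)
samples = [flux(random.gauss(T_MEAN, T_ERR)) for _ in range(100_000)]
mean_f = sum(samples) / len(samples)
sd_f = (sum((f - mean_f) ** 2 for f in samples) / (len(samples) - 1)) ** 0.5

# Analytic propagation of errors: sigma_F = 4 * SIGMA * T^3 * sigma_T
analytic = 4 * SIGMA * T_MEAN ** 3 * T_ERR

print(f"flux({T_MEAN} K) = {flux(T_MEAN):.1f} W/m^2")
print(f"Monte Carlo sigma = {sd_f:.2f} W/m^2, analytic sigma = {analytic:.2f} W/m^2")
```

        For errors this small relative to T the two agree closely; the Monte Carlo route only starts to matter when the function’s curvature is significant over the error range.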

        Alexander, now you’ve got it. If you go through the conversions between units and propagation of error you will see that for the ocean surface temperature range you could have a maximum variation, with zero change in energy, of about 1.5 C. What is likely is a fraction of that, but definitely greater than 0.02 C. Standard deviation provides more information in that respect.

        There isn’t anything wrong with trying to fine tune data sets provided you keep the true goal in mind, which should be the objective with statistics. Now is the objective to create a thermodynamics relevant metric or eliminate an inconvenient slow down?

        Statistics can create a lot of Eureka moments

      • Captain Dallas—propagation of error isn’t related to standard deviation v. standard error though, it is how you translate uncertainty through a function of what you’re measuring.

        “for the ocean surface temperature range you could have a maximum variation with zero change in energy of about 1.5 C”

        It’s not clear what we’re talking about now, since you say “change in energy”. Your 1.5˚C refers to variance, I think I understand that, perhaps of the same measurement that Kennedy et al. include; but “change” makes me think you’re also talking about trends. Do you merely mean the difference between the ship and buoy measurements when you say “change”? If so, I agree: we can have large variances compared to the mean value. But the confidence in the mean is not equal to that variance, you have to take into account sample size, which decreases the uncertainty range.

        “What is likely is a fraction of that, but definitely greater than 0.02 C degrees.”

        So, what equation should we use? Again, we’re not converting any units, still just talking about temperatures and mean temperatures (and temperature differences), so propagation of error does not come into play here. Do you think the equation I gave, √(var(x) / (n-1)), is incorrect here? I guess it seems that’s what you’re saying, but these other matters don’t play in here; so, why so?

        “Now is the objective to create a thermodynamics relevant metric…”

        So it seems you object to using either heat content (so, Joules) or temperature (I’m not sure which). Either can be useful, but it depends on what you want to know. For the surface, which is for all intents and purposes supposed to be 2D, I don’t think it’s possible to calculate heat storage because we don’t have an actual mass to use heat capacity calculations on.

        We might just ask how much heat the first meter of ocean water is storing, and just assume that the temperature variation at the surface is the same through the first meter, in which case we could do that. However, since the equation is linear to calculate heat capacity change, ∆Q = mc∆T (where m is the mass of the 1-meter layer, c is the specific heat capacity of sea water, ∆T the change in temperature with time, and ∆Q the change in heat stored; and m and c remain constant), then the error propagation will be linear too. So you wouldn’t see it blow up like you might if you were using non-linear (i.e. exponential, higher power polynomial) equations.
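        That linearity is easy to make concrete. A short sketch using typical textbook seawater values (the density, specific heat, and the 0.10 ± 0.02 K warming are illustrative assumptions, not figures from any paper under discussion):

```python
# Heat stored per square meter in the top 1 m of ocean, with linear error
# propagation through dQ = m * c * dT.
RHO = 1025.0      # seawater density, kg/m^3 (typical textbook value)
C_P = 3985.0      # seawater specific heat, J/(kg K) (typical textbook value)
DEPTH = 1.0       # layer thickness, m

m = RHO * DEPTH   # mass of the layer per square meter, kg/m^2

def heat_change(dT):
    """Change in stored heat (J/m^2) for a temperature change dT (K)."""
    return m * C_P * dT

def heat_error(dT_err):
    """Linear equation -> the error scales by the same constant factor."""
    return m * C_P * dT_err

dT, dT_err = 0.10, 0.02   # illustrative warming and its uncertainty, K
print(f"dQ = {heat_change(dT):.0f} +/- {heat_error(dT_err):.0f} J/m^2")
```

        Doubling the temperature error exactly doubles the heat error; nothing blows up the way it can with a T⁴ relation.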

      • Alexander, “propagation of error isn’t related to standard deviation v. standard error though, it is how you translate uncertainty through a function of what you’re measuring.”

        In my opinion you need to consider both what you are measuring and what that measure is intended to represent. If you are looking for a metric to represent energy, you have the zeroth law to consider.

        That leads to the “unforced” variability or how much the temperature metric can vary with no change in the energy it is supposed to represent.

        So while propagation of error isn’t related to sd v se for a particular data set you need to consider how that data set can impact overall propagation of error. That is what leads to SD being preferred.

        All of this focus on “surface” data is due to the “surface” data not meeting expectations. Since the expectations are energy related, Wm-2 and Joules, T has to be thermodynamical relevant, meaning you have to consider that pesky zeroth law.

        For Karl et al., time will determine how useful it is; by then we should be into ERSSTv6beta. Or everyone could consider that for a planetary scale problem you need all the data you can lay your hands on, and the combination will give you a better estimate of the real uncertainty. Picking and choosing probably isn’t the way to go.

      • Captain Dallas:

        “That leads to the “unforced” variability or how much the temperature metric can vary with no change in the energy it is supposed to represent.”

        This does not make sense physically. Since temperature is a direct linear measure of the mean internal kinetic energy of a system, a decrease in temperature means the system has lost energy. Now energy can move around in and out of our field of observation, sure, but that is a completely separate issue.

        Either way, temperature is very relevant. Why do you not think so?

        “So while propagation of error isn’t related to sd v se for a particular data set you need to consider how that data set can impact overall propagation of error. That is what leads to SD being preferred.”

        I don’t think you understand what “propagation of error” means. It refers to the “tracing” of error values through a function of the variable at hand. It refers to mathematics, and depends only on the function. If the function is the heat capacity equation, that’s all it depends on.

        You keep talking about the zeroth law, but that is not relevant here since we don’t have systems in thermal equilibrium with each other.

        I think this is what I can say with regard to the point you’re attempting to make: if there is an unbalanced flow of energy from one system to another, that will always show up in the temperature measurements of each of those bodies. All of that uncertainty, which you seem to be saying is due to this heat transfer, is manifest in those measurements. There are some situations where you can have conductive or radiative transfer that’s balanced out by bulk exchange, in which case the average temperature doesn’t change—but because temperature and mean kinetic energy are bijective (and there’s no net mass movement), that means there was no net energy accumulation in the first place in that system.

        If there was more uncertainty, the data would be more widely distributed. That’s just a simple fact about data. There’s no hidden uncertainty that somehow is there but doesn’t actually cause variation in the data.

        All of what you’re saying sounds like a collection of words or terms you’re just throwing out (zeroth law, propagation of error), but the way you’re using them is wrong.

      • Alexander, “This does not make sense physically. Since temperature is a direct linear measure of the mean internal kinetic energy of a system, a decrease in temperature means the system has lost energy. Now energy can move around in and out of our field of observation, sure, but that is a completely separate issue.”

        An average temperature though is not a direct measure. The average temperature of a combined cycle gas turbine and a lawn mower can be the same, doesn’t mean anything. If you have a 5 C increase at the poles and a 1 C drop in the tropics average temperature can go up while average energy goes down. You need a uniform distribution of temperature and energy for a change in average temperature to be meaningful.

        I have already put the challenge out to compare an average global temperature anomaly with an average global energy anomaly, give it a go. As they say, all models are wrong but some are useful. Temperature is a model. Average temperature is another model. Now look for the potential propagation of error to see how useful they are.

        For a quick and dirty check, the oceans range from about -2 C/306 Wm-2 to about 32 C/492 Wm-2. So for a 15 C average temperature you could have an average S-B energy of 399 Wm-2, equivalent to about 16.5 C. That would be a 1.5 C difference.

        If you look at Stephens et al. energy budget there is a huge +/-17 Wm-2 uncertainty at the “surface” versus a small +/- 0.4 uncertainty TOA. The difference isn’t measurement error it is the nature of the beast.

        Nothing unphysical about it at all, it is the reason there is a zeroth law.
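        The quick and dirty check above is easy to reproduce. Under the same blackbody assumption and the quoted -2 C and 32 C endpoints (an extreme two-point illustration, not a real ocean average):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def to_flux(t_c):
    """Celsius temperature -> blackbody emission, W/m^2."""
    return SIGMA * (t_c + 273.15) ** 4

def to_temp_c(f):
    """Blackbody emission (W/m^2) -> effective temperature, Celsius."""
    return (f / SIGMA) ** 0.25 - 273.15

cold, warm = -2.0, 32.0                          # quoted ocean extremes, C
mean_t = (cold + warm) / 2                       # straight temperature average
mean_flux = (to_flux(cold) + to_flux(warm)) / 2  # average of the two fluxes
effective_t = to_temp_c(mean_flux)               # temperature of the mean flux

print(f"mean T = {mean_t:.1f} C, mean flux = {mean_flux:.0f} W/m^2, "
      f"effective T = {effective_t:.1f} C "
      f"(difference ~{effective_t - mean_t:.1f} C)")
```

        The averaged flux comes out near 399 W/m² and its effective temperature near 16.5 C, recovering the roughly 1.5 C gap between averaging temperatures and averaging T⁴ fluxes that the comment describes.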

      • Captain Dallas:

        “For a quick and dirty check, the oceans range from about -2 C/306Wm-2 to about 32C/492Wm-2 So for a 15C average temperature you could have an average S-B energy of 399Wm-2 equivalent to about 16.5 C degrees. That would be 1.5C”

        The Stefan-Boltzmann law does not give energy. You are absolutely wrong here. It gives radiative intensity, watts per square meter. That is not a measure of energy; it is energy per time per area.

        If you want to go from temperature to energy, you need heat capacities. That is a linear equation (which I gave to you earlier), and so the error translates linearly too. You don’t use blackbody radiative equations there.

      • Alexander, “If you want to go from temperature to energy, you need heat capacities. That is a linear equation (which I gave to you earlier), and so the error translates linearly too. You don’t use blackbody radiative equations there.”

        I agree, but I didn’t set up the problem. The problem compares surface apples to TOA oranges and we are stuck with it. Using S-B just provides a rough estimate of the sensible part of the problem. The roughness is 1.5 C change with no change in surface energy. Because surface T/E is being compared to TOA T/E it has to be considered.

        Personally I would prefer source versus sink differential, but the sinks, the poles, have the worst data. The 33 C difference (15 C/390 Wm-2 surface versus -18 C/240 Wm-2) was someone else’s brain fart.

      • Captain Dallas:

        “The roughness is 1.5 C change with no change in surface energy.”

        You’re really going to have to be specific on how you’re coming up with the numbers, then I can probably comment on what you’re talking about. It seems you want to compare surface temperature changes to the Earth’s energy imbalance, something you simply wouldn’t do transiently, but if you can clarify what equations and numbers you’re using, where you’re making subtractions and so on, I’d appreciate it.

        Alexander, 1.5 C is just an extreme. If you want a better estimate of what is likely you would need to determine what would be a reasonable range of meridional and zonal temperature gradients. If you don’t like S-B, which is simple, I am sure you can find other methods. But since there isn’t a simple out/down energy arrangement I doubt there is any great way of dealing with the “surface”. This is really what the coupled climate models should be doing, but as you know they have issues with regional absolute temperatures and things like the slowdown.

        This is why OHC is considered a better metric, but that isn’t all that easy to extend back in time. So we are stuck with a less than perfect metric and an assumption that “unforced” variability cannot be greater than about +/-0.1 C. If it was limited to +/-0.1 C and surface temperature was a perfect metric, then there would be no model/observation discrepancy and no need for ERSSTv4,5,6… Since almost every temperature reconstruction has an almost too good to be true standard error but there is still a fairly large discrepancy, some might consider why such remarkably accurate data just never seems to be good enough.

        A simple explanation is zeroth law.

      • I’m just catching up to this interesting discussion on std error vs std deviation. I hope one day we can resolve the matter since it seems to keep coming up in the context of climate studies.

        Nevertheless, given that the real debate is whether the 0.12 C adjustment is justified shouldn’t we be looking beyond the error/sigma debate?

        Another choice Karl, et al., made that might have impacted the results (though in which direction is unclear to me) was to ignore the fact that measured buoy-boat discrepancies depend on location (which ocean, which current, etc.). The application of a 0.12 C global adjustment for all buoy data appears to conflict with the expert opinion of previous investigators.

        In addition, the 0.12 C is a temperature anomaly derived from comparison to a 30 year climatology of SST, (though, again, it is unclear to me which period Karl, et al., actually used). Since they based their anomaly on the work of Kennedy (2011), have they resolved the additional uncertainty he identified in regard to the SST climatology itself?

        As Kennedy, et al., discussed:

        No estimate has yet been made of uncertainties in the estimate of the climatological average SST that has been used to convert actual SST measurements to anomalies. … This question is of particular significance if estimates of SST anomalies are combined with land surface air temperature anomalies to create an estimate of global average temperature.

      • “Nevertheless, given that the real debate is whether the 0.12 C adjustment is justified shouldn’t we be looking beyond the error/sigma debate?”

        Adjusting one up or the other down doesn’t really have any impact on anomaly. Karl et al. though have a knack of choosing adjustments that raise questions. Nothing really wrong with their methods, other than there are generally better ways that are less controversial.

        Standard deviation though is more interesting. Some portion of the standard deviation can be reducible and some can’t. When you find an interpolation method that reduces standard error and standard deviation, it is performing a smoothing that can be useful. Karl et al.’s method smoothed the 1998 peak but probably didn’t smooth the current peak, which would not be all that useful unless you are into making dramatic press releases.

        In five years or so the method should produce the same smoothing of the current peak, then we can see how useful it is. As it is, there is about 0.85 C of standard deviation in monthly and about 0.3C SD for five year smoothing so there is a fairly large range of possibilities.

      • AC. I’m not sure what you mean. The standard deviation DOES depend on sample size.

        https://www.mathsisfun.com/data/standard-deviation-formulas.html

      • jim2:

        “I’m not sure what you mean. The standard deviation DOES depend on sample size.”

        The equation uses sample size, but that does not mean that the expected value of the standard deviation changes with sample size. You’re summing over N observations, after all—the effects cancel out. When calculating the standard error, though, the division by the sample size does not cancel, so the expected value of the standard error of the mean does depend on sample size.

      • AC. Sample size directly affects the standard deviation and, in effect, sample size determines what you know with confidence. With a small sample size you still get a mean, but the odds of it being an accurate reflection of reality are small. Sample size is critical.

        http://www.conceptstew.co.uk/PAGES/nsamplesize.html

      • jim2:

        I feel we may be talking well past each other here. My most recent blog post helps to clarify this issue:

        http://michiganssa.blogspot.com/2015/06/sd-v-se-of-mean.html

        You are absolutely correct that sample size helps to narrow down our certainty of particular constructs like the mean (or any particular quantile). However, the standard deviation by itself is not that measure of certainty. The standard deviation of any distribution is the spread of the data, and if you sample (say) temperatures, the standard deviation of those measurements does not change, no matter how big your sample size is.

        This is paramount and few seem to understand this (pardon the all-caps but I can’t bold or italicize): the distribution of the data, and the distribution of the mean of the data, are NOT THE SAME THING. I draw a distinction between the standard deviation of the distribution of the data, and the standard deviation of the distribution of the mean. Please see my blog post for a detailed description.

      • AC. You seem to be talking past yourself!!

        The mean is a SINGLE NUMBER. It is not a DISTRIBUTION!

      • jim2:

        “The mean is a SINGLE NUMBER. It is not a DISTRIBUTION!”

        A sample mean is itself a single sample from a distribution that has its own characteristics. You don’t know what you’re talking about. Read my blog post, and read some statistics. Here, I got a first link from Google for you.

        http://onlinestatbook.com/2/sampling_distributions/samp_dist_mean.html

        “Sampling distribution of the mean”—yeah, it actually is a thing.
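        A short sketch of that sampling distribution, using purely synthetic numbers: take many samples of size n, record each sample’s mean, and the spread of those means comes out near σ/√n.

```python
import random
import statistics

random.seed(0)
sigma, n, trials = 1.0, 25, 2000

# Each trial draws a fresh sample of size n and records its mean.
means = [statistics.fmean(random.gauss(0.0, sigma) for _ in range(n))
         for _ in range(trials)]

# The sample means themselves form a distribution; its spread is
# approximately sigma / sqrt(n) = 0.2 here.
spread_of_means = statistics.stdev(means)
print(spread_of_means)
```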

      • You are right about the sampling distribution of the mean, AC. I had envisioned something else.

      • jim2:

        OK. Sorry if I was unclear in my previous posts; yes I had intended to talk about the sampling distribution of the mean.

      • Don Monfort

        The secular student can cook up any number of plausible excuses and justifications for making the adjustment that allegedly erases the pause, but he must avoid the question of why the other SST data product fabricators have not seen the need or the justification to do likewise.

        Some interesting points made here:

        http://donaitkin.com/oh-for-some-real-climate-science/

        “It seems to me that if you are going to tell the world that your work does not support the notion of a global warming hiatus you had better show not just that you have some fancy new reconstructed data, but that your data are just miles better than everyone else’s, if only because nobody else agrees with you, and they’ve been in the business for a long time. Not only that, despite the fact that the datasets are based on different techniques, there is a strong measure of agreement among them.”

        And they dropped “cool” satellite data out of their new pause busting product:

        “Alas, the Karl paper doesn’t even mention anyone else in the temperature-measuring field. I pressed on into Supplementary Materials attachment and found this gem: Previous versions of our SST analysis included satellite data, but it was dis-included in a later release because the satellite SSTs were not found to add appreciable value to a monthly analysis on a 2° grid, and they actually introduced a small but abrupt cool bias at the global scale starting in 1985. Other observing systems, including satellites, and model simulations could provide important insights that would enable the quantification of interpolation uncertainties in data-sparse regions, but haven’t been used in this study.”

      • AC – I don’t have time right now, but hopefully, next weekend I will have some time to read your blog.

    • Hey, that’s me! Thanks for the link here. This got a lot of comments too so I’ll have a look through.

      • Quick thanks to Alexander Coulter for commenting in response to my question above. I haven’t understood a word of it, mate, but this exchange with captdallas is fun.

      • Hi Hide The Decline,

        I’m actually starting a new article now that will try to explain (and illustrate) this issue better! Hopefully it’ll help clear this issue up more, I find that words sometimes just don’t cut it when it comes to data.

      • Alexander

        Very nice article. But I have one small but important quibble. You wrote:

        “After only 100 samples I can see that most people spend less than half of their income on rent, and they tend to cluster around 1/4 – 1/3”

        Nope. You would be more correct to report:

        “After only 100 samples I can see that most people respond that they spend less than half of their income on rent, and their responses tend to cluster around 1/4 – 1/3”

        There is a very key difference. You assume that your data is correct. When dealing with people that is never a wise assumption to make, even when it involves such a “neutral” issue. Could respondents hear the answers given by their peers and then “normalize” their answers, etc., etc.?

        It is also a key difference when looking at climate research. Especially when we know so little about how and why decisions were made to adjust data collected generations ago.

      • David Springer

        sciguy +1

      • That’s the crux, sciguy. The ship data is and always has been BS. If we want to know if there has been a pause that is killing the cause, we use the buoy data and the satellite data. All the other fabricators of SST have chosen not to make this BS adjustment. But knowing how the consensus climate science works, look for them to jump on the bandwagon in time for the partee in Paree.

      • Like the Nazis at the bulge, you think you’re winning because you moved forward.

        Your demise is now a few months away. 0.80 C is about to be the front line.

      • Don Monfort

        putz

      • Don Monfort

        yo momma

      • Hi Alexander, have now caught up with all the back and forth. Great stuff.

        McKitrick has updated his paper and commented on your post – the exchanges and updates are great. Bottom line, McKitrick’s finding is that the error bar for the Karl 2015 buoy temperature uplift of 0.12 is ±1.7.

        McKitrick’s right and always was.

        Can I make a suggestion for a further edit to your post. McKitrick commented at your site:

        “Regarding the SD vs SE question, there is no dispute that SE gives the variance of the estimated global mean, if the unweighted global mean is what you are interested in. What is really at issue is how good an approximation +0.12 is to each site-specific adjustment. Remember that SE is attached to the unweighted global mean. But the observations are heavily clustered in shipping lanes, overweighted in the NH Atlantic. To estimate the global mean you need to weight observations according to location and sample density. Suppose that you had enough meta data to do that. Then you wouldn’t need to use a global average as an approximation everywhere, you could use location-specific bias adjustments. After all, why throw out all that information? But the data are inadequate and K15 rely instead on a single global number. It artificially understates the uncertainty of what they are doing to refer to the unweighted global mean SE. Put another way, the difference between an SD of 0.85 and an SE of 0.02 is an indicator of the amount of information lost by not using location-specific adjustments.

        As to whether it matters: again, it’s one of 3 steps that together make all the difference. So yes, it matters. Is it correct? Not for me to say, but it is important to point out how much information is being lost in this step.”

        You responded that you agreed with McKitrick – weighting the average has to be done.

        I think you should edit out the reference to McKitrick as a “somewhat serious person” on your post. I mean, he’s a serious person, and you agree with him. Why lie?

      • The Kennedy reference shows that the mean varies from about 0.1 to 0.2 for different regions, so the 0.85 SD is mostly other random factors related to comparing a ship and buoy in the same vicinity. The regional means can be considered independent estimates of the mean, and help to show that the SE is quite small compared to the SD and is more appropriate as a measure of the uncertainty in the mean. McKitrick was not correct in using the SD instead of SE for the uncertainty in the mean.

      • Sciguy54:

        “There is a very key difference. You assume that your data is correct.”

        My post was meant to illustrate the difference between standard error of the mean and standard deviation. Unspoken assumptions about the accuracy of particular measurements, or about what surveying tactics I’d have used, were left unspoken because they do not matter for this issue. My post, for instance, gave several examples of things I would want to measure; perhaps I’d have simply chosen to illustrate with the ant-travel-distance case, where the ants surely aren’t making a conscious choice to normalize anything. Though you might complain that somewhere along the way, while collecting data for a million ants, I switch methodology. These exogenous factors were left out because they are exogenous.

        But, thank you for the compliment on my post, I hope it did explain why standard error of the mean is the correct measure of certainty on a sample mean.

        Hide the Decline:

        “Bottom line, McKitrick’s finding is that the error bar for the Karl 2015 buoy temperature uplift of 0.12 is ±1.7.

        McKitrick’s right and always was.”

        Your use of “bottom line” here is not an accurate reflection of what the comments I exchanged with McKitrick actually say, nor of a “finding” of McKitrick’s. He has not provided any new math to justify his result, and his commentary about the difference between 0.85 and 0.01 is not at all justified, since the standard deviations would be the same even if each region were sampled uniformly and with the same number of observations. The post I just wrote actually illustrates this fact. So no, on this issue McKitrick has not been correct, and the only correct statement he (almost) made was to say “Regarding the SD vs SE question, there is no dispute that SE gives the variance of the estimated global mean”; the “unweighted mean” part is irrelevant here. The standard error of the mean is always the value you want when you’re calculating the certainty of the mean. It’s no coincidence that the name basically says what it is. In a third post I explain, using a “crude” area-weighted estimate, how the standard error of an area-weighted mean would be calculated from the standard errors of the regional means; the final result is essentially the same as the result Kennedy et al. (2011) give, 2*s.e. = 0.024.

        “I mean, he’s a serious person, and you agree with him. Why lie?”

        That statement was somewhat tongue-in-cheek, but I will not remove it. Sorry. Although clearly I do not agree with your assessment that I agree with McKitrick on every issue. In particular, if he continues to write that the uncertainty bound on the mean is ~1.7˚C, then he is incorrect. (It is not clear he wishes to continue to take this stance, given his own comment, whether or not he still has that in his updated article.)

      • Hi Alexander, I think my use of ‘bottom line’ is fine. I mean, McKitrick has updated his article and his ±1.7 is still there. You haven’t succeeded in demonstrating that his calculation on Karl is invalid, just that you don’t agree with him.

        Re editing the snipe, good on you.

        One last thing, you wrote at Jo Nova’s:
        “And yes, with McKitrick’s politeness in all of his comments, following as well from the tone of his article at WUWT, I was unfair and impolite to him, and will keep my language more civil in the future. (Or, at least, equally toned: pardon if I don’t hold the same opinion of Mr. Monckton for instance)”

        Out of nowhere, you mention Monckton? Because? This is sadly familiar for us normals.

        “It’s just this war and that lying son of a b-tch Johnson” (look it up if you don’t get this reference) Please don’t be that guy.

      • Alexander, ” In particular if he will continue to write that the uncertainty bound on the mean is ~1.7˚C, then he is incorrect. (It is not clear he wishes to continue to take this stance, given his own comment, whether or not he still has that in his updated article.)”

        Then you can look forward to more discussions. The standard deviation going forward lets you know how useful the OLS model may be for predicting the future and determining whether or not a trend is significant. Standard error just lets you know how good a particular data point may be. Karl et al. made two claims: that their method produces better data points, and that the “pause” is busted. Whether or not the “pause” is busted depends on how the “pause” is defined, which includes potential cherry picking of end points. A few years should determine whether Karl et al. are correct on that point.

        This same straw discussion is part of all climate science data reconstructions. The UK’37 method has a lab accuracy of about +/-0.2 C and a field error of around +/-2.0 C depending on the location of the sample. Mg/Ca is better, but there are still difficult-to-determine confounding factors that limit its use. Just adding more samples to reduce standard error doesn’t make the problems go away; you still have to determine if your super precise reconstruction is fit for purpose. Since the ultimate purpose is to relate surface energy to TOA energy imbalance and changes in forcing, standard deviation is PDI.

        Cowtan and Way devised a method to “find” lost heat, and it was hidden in the Arctic winter over sea ice. By including a region with the lowest specific heat capacity and the highest variability, they got their moment in the sun, but their “improved” method is pretty much gathering dust because most aren’t sure exactly what the heck it might be good for other than press releases. Karl et al.’s new and improved product will likely die on the same vine.

      • Hide the Decline:

        First, I said I would not remove that comment of mine, not that I would. I have already removed several other denigrating comments, but that one shall remain. I did remove those others at Dr. McKitrick’s request, but I will not be taking requests from unaffiliated people on what is appropriate language on my blog.

        I have actually provided an extensive argument for why 1.7˚C is an incorrect bound on the mean value. My entire blog post that I gave to you to look at was an illustration of that issue. I also mentioned how the regional differences are of little consequence for the final figure because of the large net sample size.

        I will repeat only once more, since I cannot say it in simpler terms: Dr. McKitrick has not done any calculations. He has taken the 0.85˚C standard deviation figure from Table 5 of Kennedy et al. (2011), which I link to in my blog post responding to McKitrick, and multiplied it by 2. That is because it seems, at face value, to be a 2*sd bound on some figure.

        What that describes is not the certainty of the mean but the spread of the full set of observations. The certainty of the mean is given by the “standard error of the mean”, which is calculated as the standard deviation divided by the square root of the sample size.
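        As a back-of-envelope check: the sample size below is hypothetical, backed out of the 0.85 vs ~0.01 ratio quoted in this thread rather than taken from Kennedy et al. (2011).

```python
# Hypothetical worked example of SE = SD / sqrt(N); N is an assumption,
# not a number from Kennedy et al. (2011).
sd = 0.85           # standard deviation of ship-buoy differences (deg C)
n = 7200            # assumed number of paired observations
se = sd / n ** 0.5  # standard error of the mean
print(round(se, 3))
```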

        I have described this ad nauseam on my blog. McKitrick has not given any arguments against this. All he has said is that the difference between the 0.85 and the 0.01 is illustrative of the information lost by not using the regional differences; this is not true. It is a false statement, and I can explain why.

        The reason why can be illustrated in one of two ways. In the first way, we simply do a rough area-weighted correction, and then calculate the certainty on that. To do that, we use the variance equation, which takes a function (in this case, an area-weighted average) and calculates the standard error of the final variable from the known standard errors of the inputs. The function is

        G = w_1 * x_1 + w_2 * x_2 + … + w_k * x_k

        where G is the global average, w is a weight for a region based on its area fraction of the global oceans, and x is the mean value observed in that region.

        Through the variance equation, we take the square of each partial derivative and multiply that by the variance of each variable (which is, itself, the square of the standard deviation by definition). Then we add them all up and take the square root of that. So,

        S_G = √[ (∂G/∂x_1)^2 * (s_1)^2 + … + (∂G/∂x_k)^2 * (s_k)^2 ]

        Here, s_k is the k-th regional standard error of the mean. The final result is a two-times standard error of the global mean of 0.024, which is basically the same as the standard error of the unweighted mean.
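        That propagation step can be sketched directly. The weights, regional means, and regional standard errors below are illustrative placeholders, not the Kennedy et al. (2011) values; since ∂G/∂x_k = w_k, the formula reduces to S_G = √(Σ (w_k s_k)²).

```python
# Illustrative sketch of the variance-equation step; all numbers are
# made up, not the Kennedy et al. (2011) figures.
weights = [0.30, 0.25, 0.25, 0.20]   # w_k: area fractions (sum to 1)
means   = [0.10, 0.14, 0.12, 0.18]   # x_k: regional mean adjustments (deg C)
ses     = [0.02, 0.03, 0.02, 0.04]   # s_k: regional standard errors (deg C)

# G = sum(w_k * x_k); because dG/dx_k = w_k, the variance equation
# gives S_G = sqrt(sum((w_k * s_k)**2)).
G   = sum(w * x for w, x in zip(weights, means))
S_G = sum((w * s) ** 2 for w, s in zip(weights, ses)) ** 0.5

print(f"G = {G:.3f} +/- {2 * S_G:.3f} (2*SE)")
```

        Note that S_G is necessarily smaller than the largest regional standard error, because the weights dilute each region’s contribution.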

        I spent multiple comments and an entire blog post describing why this is the correct method, and why 1.7˚C is not. These are the exact same points that John Kennedy has made in this very thread, too. And again, this is something that can be found in any introductory statistics course. In my third blog post explaining the SD v. SE issue, I gave links to online resources that describe the derivation of the standard error of the mean. Go read them.

        “Out of nowhere, you mention Monkton? Because? This is sadly familiar for us normals.”

        I brought him up because he was the topic of another blog post of mine responding to a couple other posts at WUWT. I brought him up because I was very derisive toward him, and the question of my tone toward McKitrick was brought up by Joanne. My comment was that I would respect Dr. McKitrick, because he was respectful to me; and I would not honor similar requests for respect to Mr. Monckton, so people shouldn’t bother jumping on me for that.

      • Hide the Decline:

        Sorry, my comment got long and I lost track of the fact that I was going to give two ways to illustrate why the regional-differences question doesn’t matter. The first was to actually go through and do the calculation. I have already done that calculation, of course, but I guess it needed to be repeated here.

        My second way of explaining it is that the standard deviations of each region would not change even if you sampled them more densely. The 1.7˚C figure would remain even if the globe were uniformly sampled. This is illustrated, again, in the blog post I wrote and linked for you explaining the SD v. SE question.

        But it should be clear, of course, that the area-weighted method McKitrick (rightly) prefers will give the same answer as the unweighted mean in that case, since the unweighted mean works under an assumption of uniformity in observations.

        McKitrick is correct that if some regions are sampled more frequently than others, then the standard error of the mean is not going to be quite as low as the standard error weighted by sample size. But I *did* do that calculation: I explained it in the area-weighting blog post and in the calculation I gave earlier. It is not close to 1.7˚C, it is close to 0.024˚C, because it is still largely driven by the large sample sizes in each region.
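        A simulation of that point: the three regions, their offsets, and their sample counts below are invented, and the 0.85 deg C spread for individual ship-buoy differences is just the figure quoted in this thread. Even with very uneven sampling, the area-weighted mean’s standard error stays in the hundredths, nowhere near ±1.7.

```python
import random
import statistics

random.seed(1)

# Hypothetical setup: three regions with different true offsets and very
# uneven sample counts (crowded shipping lanes vs sparse open ocean).
regions = [  # (area weight, true mean offset, number of observations)
    (0.5, 0.10, 5000),
    (0.3, 0.15, 500),
    (0.2, 0.20, 50),
]
sigma = 0.85  # spread of individual ship-buoy differences (deg C)

G = 0.0       # area-weighted global mean
G_var = 0.0   # its variance, accumulated region by region
for w, mu, n in regions:
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    regional_se = statistics.stdev(xs) / n ** 0.5
    G += w * statistics.fmean(xs)
    G_var += (w * regional_se) ** 2

print(f"weighted mean = {G:.3f}, 2*SE = {2 * G_var ** 0.5:.3f}")
```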

        Either McKitrick gave calculations to back up the 1.7˚C figure, or he did not. Go through his comments again, go through his blog post, go through everything, and find them. I can save you the time and tell you he did no such calculations.

        Now either I have calculations backing up the 0.02˚C figure, or I do not. As it happens, I wrote two blog posts that defend it. If you have not read them, that’s not my problem.

      • Alexander

        I hope this ends up fairly close to a sensible location within the thread. I had noted that one always should be wary of the accuracy obtained from surveying. In response you wrote:

        “Unspoken assumptions about the accuracy of particular measurements, or of what surveying tactics I’d have used, were left unspoken because those do not matter for this issue.”

        I apologize because I failed to make the connection of peer pressure back to the case here with bucket measurements. The point I should have made is that the measurements were made and brought back to the bridge by a crew member, and likely one of low rank given the nature of the task. The result would be entered into the permanent records of the ship. Ultimately it is the commanding officer aboard who is responsible for the ship’s operation, and in many organizations one of the key criteria for judging the commanding officer is the orderly and efficient operation of the ship and its crew in every detail. Usually every member of the crew is painfully aware of this situation.

        Under those conditions, what will the “thermometer operator” do if he starts to enter the latest reading and it appears to be an outlier compared to previous entries? Given a little experience in how hierarchical organizations work I would not be surprised if the crew member would look at the previous entries and apply his own ad hoc “smoothing algorithm” to his latest data point before entering it into the record in order to avoid making any “waves”. Double ditto if he actually forgot the reading before having a chance to record it!

      • David Springer

        Garbage in, garbage out

        The data are unreliable, and the best of it has been corrected to be comparable with the worst of it.

        But it gets even worse. The very best data available, from the ARGO network of buoys, was left out entirely.

        FAIL FAIL FAIL

      • Sciguy54:

        You’re probably right, yes. I am unsure how one would correct for those (shall we say?) incongruities. I more meant that this particular blog post I wrote about the standard deviation/standard error of the mean isn’t related to that issue. So, when you bring it up, I think it starts to stray from my point. Many here seemed to disagree that the standard error of the mean is appropriate to use in this circumstance. I think that whether there are other sources of error is a worthwhile discussion, though it is not one I am versed enough to have.

  124. Pingback: Weekly Climate and Energy News Roundup #182 | Watts Up With That?

  125. Pingback: Chaos Patch (#65) | index. i

  126. Mike Flynn

    “Sea surface temperature (SST) is the water temperature close to the ocean’s surface. The exact meaning of surface varies according to the measurement method used, but it is between 1 millimetre (0.04 in) and 20 metres (70 ft) below the sea surface.” – Wikipedia.

    Of course, climatologists don’t ever use the temperature of the surface, because it confuses them.

    If they accept a vague temperature between 1mm and 20 metres, then it is easy to pick a temperature which suits the purpose at hand. Adjustments and homogenisation can create a trend decrease, increase, or no change at all!

    Don’t like satellite temps? Just say they only read the surface! Don’t like the 1mm temperature? Use the 20 metre temperature, if it fits better!

    Still not happy? Create your own data! Find a method that starts with data from a variety of questionable sources, then compose an algorithm that gives the desired answer. It’s pretty easy to alter the past, given a little time, and a creative mindset.

    Of course, it hasn’t really changed the past, but don’t tell anybody.

    • Steven Mosher

      “Of course, climatologists don’t ever use the temperature of the surface, because it confuses them.”

      ignorant

      http://icoads.noaa.gov/advances/emery.pdf

      Brunke, M. A., X. Zeng, V. Misra, and A. Beljaars, 2008: Integration of a prognostic skin sea surface temperature scheme into climate and weather models. Journal of Geophysical Research, 113, D21117, doi:10.1029/2008JD010607.

      Fairall, C. W., E. F. Bradley, J. S. Godfrey, G. A. Wick, and J. B. Edson, 1996: Cool-skin and warm-layer effects on sea surface temperature. Journal of Geophysical Research, 101, 1295-1308.

      Garratt, J. R., 1992: The Atmospheric Boundary Layer, Cambridge University Press, 316 pp.

      May, D. A., M. M. Parmeter, D. S. Olszewski, B. D. McKenzie, 1998: Operational processing of satellite sea surface temperature retrievals at the naval oceanographic office. Bulletin of the American Meteorological Society, 79, 397-407.

      Schluessel, P., H.-Y. Shin, W. J. Emery, and H. Grassl, 1987: Comparison of satellite-derived sea surface temperatures with in situ skin measurements. Journal of Geophysical Research, 92, 2859-2874.

      Wick, G. A., W. J. Emery, L. H. Kantha, and P. Schluessel, 1996: The behavior of the bulk-skin sea surface temperature difference under varying wind speed and heat flux. Journal of Physical Oceanography, 26, 1969-1988.

      Zeng, X., and A. Beljaars, 2005: A prognostic scheme of sea surface skin temperature for modeling and data assimilation. Geophysical Research Letters, 32, doi:10.1029/2005GL023030.

      http://journals.ametsoc.org/doi/abs/10.1175/1520-0442(2002)015%3C0353:TIVOSS%3E2.0.CO;2

      http://www.researchgate.net/profile/William_Emery/publication/234005940_On_the_bulk-skin_temperature_difference_and_its_impact_on_satellite_remote_sensing_of_sea_surface_temperature/links/0c960526fe6f0903ad000000.pdf

    • Steven Mosher

      “If they accept a vague temperature between 1mm and 20 metres, then it is easy to pick a temperature which suits the purpose at hand. Adjustments and homogenisation can create a trend decrease, increase, or no change at all!”

      The interest is the trend, not the temperature.

      You don’t like adjustments? Use raw data. The observed rate of warming is higher, and models then perform better.

      Use raw data? You just validated the models. Thanks.

      • So when adjustments are made and the “take home” message is “things are worse than we thought”, there is a missing disclaimer of “for the models”? Well, it certainly matches with IPCC – every AR shows less projected warming than its predecessor, yet somehow things are always worse for these guys too. CO2 emissions rising faster than used for projections, while temperatures rising less than projections = worse than we thought (for the models). Never really made much sense before, but now I see it.

      • David Springer

        You don’t like adjustments? Use raw data. The observed rate of warming is higher, and models then perform better.

        Use raw data? You just validated the models. Thanks.

        ——————————————————————–

        bs

      • David Springer

        lie harder

    • Mike Flynn:

      You’re quite right about the problems of historical SST data obtained by ships of opportunity. Ironically enough, the very papers that Mosher cites here manifestly underscore that tangle of problems. Apparently his scientific attention/comprehension extends no further than selling the rank conceit that histories of global temperatures are adequately known back to the 1750s, as peddled by BEST.

    • Steven Mosher,

      I read the first reference you linked to.

      It was titled “Estimating Sea Surface Temperature From Infrared Satellite and In Situ Temperature Data”. It contains speculation and suggestions that it may be possible to design a model which will somehow bring order to the current shambles.

      From the paper –

      “All of these ship and buoy SSTs are estimates of some type of bulk SST, which does not actually represent the temperature at the surface of the ocean.”

      That is what I said, and for saying it you characterise me as ignorant. I know you prefer models and estimates to facts, but I do not. As the first reference you gave in support of your argument actually supports what I said, I have not bothered with the rest.

      I commend your first reference to other readers. It demonstrates current SST problems in some detail, and proposes some means of proceeding along a more accurate path.

      Your statement about the trend is nonsense. The raw data are inconsistent, even within the same data class. Remember, the trend is only your friend until the end of the trend. If you find a trend, how do you calculate where it ends? You can’t, of course.

      • Hi Mike,

        The first paper that you read was written around 2001 (the newest reference was, I think, from 2000). A lot of work has gone on since then to understand the physical differences between in situ and satellite measurements, particularly around understanding the relationship between the skin SST that satellites measure and the SST at depth measured by drifters, ships etc. This is tied up with understanding the daily cycle of near-surface temperature change. There was a review paper on this diurnal variability a few years ago:
        http://link.springer.com/article/10.1007%2Fs10872-007-0063-0

        And work has continued since. There was a good blog post comparing the Karl et al. SST with one of the best available satellite data sets:
        http://sstcci2.blogspot.co.uk/2015/06/no-slowdown-in-global-temperature-rise.html

        You can read more about that satellite data set here:
        http://onlinelibrary.wiley.com/doi/10.1029/2012JC008400/full

        Best regards,
        John Kennedy

      • Hi John

        If you want to do something really useful this Sunday morning you can come round and help me to cut my hedge. I was just taking a breather when I saw your comment. I went to your web site and noted your comments about warmth in Sweden.

        As it happens the Historic temperatures of Sweden were the subject of one of my first articles.

        https://chiefio.wordpress.com/2009/10/09/how-long-is-a-long-temperature-history/

        Unfortunately over the years some links have been lost, but enough remain for you to get the gist.

        Whilst Stockholm’s temperature records commenced in 1756, those of nearby Uppsala started some 40 years earlier. This enables us to see the remarkably warm temperatures recorded in Sweden during the 1700s, up until around 1740. This era was of great interest to Hubert Lamb and many others, including Phil Jones, Manley and of course David Parker.

        I have seen ample evidence of it in your own library plus there is information on it that I saw in the archives and library of the Scott Polar Institute in Cambridge.

        I think several of the most interesting questions that remain unanswered are:

        1) The reasons for the Medieval Warm Period, ample evidence of which lies within 15 miles of your offices on Dartmoor.
        2) The reasons for the decline into the LIA.
        3) The reasons for the three-decade-long warming period in the early 1700s.
        4) The reasons for the long-term warming we can trace from 1700 to the present day, with ups and downs, making GISS and your own records staging posts and not the starting posts for long-term warming.

        I’ll put the kettle on shall I? Bring stout gloves.

        tonyb

  127. Pingback: Potpourri

  128. Pingback: Improving climate change communication: moving beyond scientific certainty | Climate Etc.

  129. Comparing HADCRUT4 to the HADCRUT3 and GISTEMP data sets, using a 5-year moving average, one can observe:
    (1) Very similar behavior of all data sets until 2003.
    (2) Significant divergence of the HADCRUT4 data with respect to the HADCRUT3 data starting from 2003, whereas the HADCRUT4 data remain fully consistent with the GISTEMP data.
    [Plot1]

    Using a 1-year moving average, one can further observe that the new HADCRUT4 corrections tend:
    (1) To attenuate the 1998 El Nino bump on one hand
    (2) But to amplify later El Nino bumps on the other hand
    [Plot2]
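
    The comparison described above can be sketched in a few lines of pandas. This is a minimal illustration using synthetic stand-in series (the actual HadCRUT3/HadCRUT4 downloads are not reproduced here, and the 0.03 °C post-2003 offset is an invented placeholder, not the real adjustment):

```python
import numpy as np
import pandas as pd

# Synthetic monthly anomaly series standing in for the real data sets.
rng = np.random.default_rng(0)
months = pd.date_range("1979-01", "2014-12", freq="MS")
trend = np.linspace(0.0, 0.6, len(months))
hadcrut3 = trend + rng.normal(0, 0.1, len(months))
# Hypothetical version with a small offset applied from 2003 onward.
hadcrut4 = hadcrut3 + np.where(months.year >= 2003, 0.03, 0.0)

df = pd.DataFrame({"hadcrut3": hadcrut3, "hadcrut4": hadcrut4}, index=months)

# 5-year (60-month) centred moving average, as in the comment above.
smoothed = df.rolling(window=60, center=True).mean()

# Mean absolute divergence between the two versions, before and after 2003.
diff = (smoothed["hadcrut4"] - smoothed["hadcrut3"]).dropna()
pre = diff[diff.index.year < 2003].abs().mean()
post = diff[diff.index.year >= 2003].abs().mean()
print(f"mean |difference| pre-2003: {pre:.3f} C, post-2003: {post:.3f} C")
```

    With real downloads substituted for the synthetic series, the pre/post means quantify the divergence the commenter describes; the centred window means any step change bleeds roughly 30 months into the "pre" period.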

    The divergence after 2003, given that the consistency of the 3 data sets before 2003 is very good, is incontrovertible evidence that the HADCRUT4 data have been significantly distorted and corrupted by unjustified adjustments whose clear aim was:
    (1) To delete the growing and inconvenient divergence between the HADCRUT3 and GISTEMP data sets
    (2) To “hide the decline” of temperature since 1997, as Phil Jones (head of CRU, East Anglia) wrote in the famous “Mike’s Nature trick … to hide the decline” Climategate 1.0 email.

    We also know from Climategate emails that playing with SST is the favorite way to corrupt the data.

    “Phil,

    Here are some speculations on correcting SSTs to partly explain the 1940s warming blip.

    If you look at the attached plot you will see that the land also shows the 1940s blip (as I’m sure you know).

    So, if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean — but we’d still have to explain the land blip.

    I’ve chosen 0.15 here deliberately. This still leaves an ocean blip, and I think one needs to have some form of ocean blip to explain the land blip (via either some common forcing, or ocean forcing land, or vice versa, or all of these). When you look at other blips, the land blips are 1.5 to 2 times (roughly) the ocean blips — higher sensitivity plus thermal inertia effects. My 0.15 adjustment leaves things consistent with this, so you can see where I am coming from.
    Removing ENSO does not affect this.
    It would be good to remove at least part of the 1940s blip,
    but we are still left with ‘why the blip’.”

    This is exactly what the new NOAA data set, with its new adjustments, does: playing with SST to distort and corrupt the data, and in the end to make the decline vanish…

    • So close, but so far away. The adjustments are made to improve the data sets. Sea level rise and ocean heat content are at complete odds with HadCrappy3.

  130. Pingback: Climate scientists criticize government paper that erases ‘pause’ in warming – Fox News | Everyday News Update

  131. JC: Most if not all comments here are discussing the adjustment. They are missing the bigger picture of what is going on.

    https://tallbloke.wordpress.com/2015/06/05/noaa-tries-to-make-global-warming-slowdown-vanish/comment-page-1/#comment-102278

    Please, if you get a chance, read P.V.’s earlier comments.

  134. Pingback: Climate scientists criticize government paper (Karl et al. 2015) that erases ‘pause’ in warming | Watts Up With That?

  136. Pingback: Natural disasters in the WORLD,Space,Science,History ,Travel- RadioMetafora.ro

  138. Pingback: DEBATE HEATS UP Scientists slam paper that erases ‘pause’ in warming | Country Connections

  140. The scientists who developed the 66 different explanations for the hiatus just did a collective face palm and said, “All we had to do was adjust the data! Why didn’t we think of that?”
    Are we supposed to be so stupid as to believe that all these other scientists were mistaken about the quality of the data that was being used to indicate the reality of the hiatus?

  143. As Dr. Curry pointed out:

    well its not just the pause that ‘disappeared’, its also the 1998 El Nino warm bump

    Collateral damage should be expected during any war.

  144. Pingback: Climate scientists criticize fraudulent government paper on climate | Uncommon Sense

  145. Pingback: Not-So-Random Thoughts (XV) | POLITICS & PROSPERITY

  146. Pingback: NOAA erases the " pause " - HCS Snowmobile Forums

  148. “The IPCC’s statement of two years ago – that the global surface temperature ‘has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years’ – is no longer valid,” said Dr Karl, the director of NOAA’s National Climatic Data Center.

    This all looks and smells like a political contrivance, as opposed to science. It looks and smells very similar to a well-known situation from many years ago, when “stories” woven by the Nixon White House relative to the Watergate break-ins began to unravel. Reporters started to bore in to the White House spokesman, Ronald Ziegler, and his response became quite famous.

    To quote the NYTimes:

    … on April 17, 1973, Nixon stunned reporters by saying that he had conducted an investigation that raised the prospect of involvement by White House officials.

    Mr. Ziegler told a puzzled press corps that this was now the “operative statement,” repeating the word operative six times. Finally, R. W. Apple Jr. of The New York Times asked, “Would it be fair for us to infer, since what the president said today is now considered the operative statement, to quote you, that the other statement is no longer operative, that it is now inoperative?”

    Eventually Mr. Ziegler replied: “The president refers to the fact that there is new material; therefore, this is the operative statement. The others are inoperative.”

    From The Nation; The Nondenial Denier
    By TODD S. PURDUM
    Published in the NYT, February 16, 2003

    The good news is that the IPCC had very high confidence in their science and Karl et al. have high confidence in theirs. So we should all sleep well knowing that if the story changes again tomorrow we can have high confidence in the next “operative” statement.

  149. Pingback: Open Letter to Tom Karl of NOAA/NCEI Regarding “Hiatus Busting” Paper | Bob Tisdale – Climate Observations

  150. Pingback: Pause on Pausing the Pause | The Wave Chronicle

  153. Pingback: Open Letter to Tom Karl of NOAA/NCEI Regarding “Hiatus Busting” Data | Watts Up With That?

  154. Karl’s non-paper demonstrates only one thing:
    climatology, at least as practiced by Karl &amp; Co., is not a serious science.
    For decades the “climatologists” have been shouting that they needed MORE DATA.
    So the taxpayers complied and gave them billions for buoys, satellites and balloons.
    Yet now a single guy comes along and simply adds and subtracts a number to all the measurements, in effect declaring that those billions have been wasted because everything we learned was totally unreliable.

    Ironically, the largest modifications were applied to the most modern data, which should have been the surest.
    What comes next? A poll asking people how they feel the temperature?