Uncertainty in SST measurements and data sets

by Judith Curry

Two new papers that discuss uncertainty in surface temperature measurements.

The issue of uncertainty in surface temperature measurements is getting some much-needed attention, particularly in the context of the HadCRUT datasets.  For context, some previous Climate Etc. posts on this topic:

The first paper, by John Kennedy of UK Met Office, provides a comprehensive and much needed uncertainty analysis of sea surface temperature measurements and analyses:

A review of uncertainty in in situ measurements and data sets of sea-surface temperature

John Kennedy

Abstract.  Archives of in situ sea-surface temperature (SST) measurements extend back more than 160 years. Quality of the measurements is variable and the area of the oceans they sample is limited, especially early in the record and during the two World Wars. Measurements of SST and the gridded data sets that are based on them are used in many applications so understanding and estimating the uncertainties are vital. The aim of this review is to give an overview of the various components that contribute to the overall uncertainty of SST measurements made in situ and of the data sets that are derived from them. In doing so, it also aims to identify current gaps in understanding. Uncertainties arise at the level of individual measurements with both systematic and random effects and, although these have been extensively studied, refinement of the error models continues. Recent improvements have been made in the understanding of the pervasive systematic errors that affect the assessment of long-term trends and variability. However, the adjustments applied to minimize these systematic errors are uncertain and these uncertainties are higher before the 1970s and particularly large in the period surrounding the Second World War owing to a lack of reliable metadata. The uncertainties associated with the choice of statistical methods used to create globally complete SST data sets have been explored using different analysis techniques but they do not incorporate the latest understanding of measurement errors and they want for a fair benchmark against which their skill can be objectively assessed. These problems can be addressed by the creation of new end-to-end SST analyses and by the recovery and digitization of data and metadata from ship log books and other contemporary literature.

Published in Reviews of Geophysics, link to abstract and full manuscript.

Excerpts:

In using SST observations and the analyses that are based on them, it is important to understand the uncertainties inherent in them and the assumptions and statistical methods that have gone into their creation. In this review I aim to give an overview of the various components that contribute to the overall uncertainty of SST measurements made in situ and of the data sets that are derived from them. In doing so, I also aim to identify current gaps in understanding.

Section 2 provides a classification of uncertainties. The classifications are not definitive, nor are they completely distinct. They do, however, reflect the way in which uncertainties have been approached in the literature and provide a useful framework for thinking about the uncertainties in SST data sets. The uncertainties have been tackled in ascending order of abstraction from the random errors associated with individual observations to the generic problem of unknown unknowns.

Throughout this review the distinction will be made between an error and an uncertainty.  The error in a measurement is the difference between some idealized “true value” and the measured value and is unknowable. The uncertainty of a measurement [is defined] as the “parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand”. This is the sense in which uncertainty is generally meant in the following discussion. This is not necessarily the same usage as is found in the cited papers. It is common to see the word error used as a synonym for uncertainty such as in the commonly used phrases standard error and analysis error.

Broadly speaking, errors in individual SST observations have been split into two groupings: random observational errors and systematic observational errors. Although this is a convenient way to deal with the uncertainties, errors in SST measurements will generally share a little of the characteristics of each.

Random observational errors occur for many reasons: misreading of the thermometer, rounding errors, the difficulty of reading the thermometer to a precision higher than the smallest marked gradation, incorrectly recorded values, errors in transcription from written to digital sources and sensor noise among others. Although they might confound a single measurement, the independence of the individual errors means they tend to cancel out when large numbers are averaged together. Therefore, the contribution of random independent errors to the uncertainty on the global average SST is much smaller than the contribution of random error to the uncertainty on a single observation even in the most sparsely observed years. Nonetheless, where observations are few, random observational errors can be an important component of the total uncertainty.
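The cancellation of independent errors described here follows the familiar 1/√n rule, which a quick numerical sketch can illustrate (the 0.5 K single-reading error and the other numbers are assumptions for illustration, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 0.5      # assumed random error of a single SST reading (K) -- illustrative
true_sst = 15.0  # hypothetical true water temperature (deg C)

errors = {}
for n in (1, 100, 10_000):
    # n independent readings, each contaminated by random error ~ N(0, sigma^2)
    obs = true_sst + rng.normal(0.0, sigma, size=n)
    errors[n] = abs(obs.mean() - true_sst)
    print(f"n={n:>6}: |error of mean| = {errors[n]:.3f} K "
          f"(theory ~ sigma/sqrt(n) = {sigma / np.sqrt(n):.3f} K)")
```

The error of the average falls roughly as sigma/sqrt(n), which is why random observational errors matter most where observations are few.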

Systematic observational errors are much more problematic because their effects become relatively more pronounced as greater numbers of observations are aggregated. Systematic errors might occur because a particular thermometer is mis-calibrated, or poorly sited. No amount of averaging of observations from a thermometer that is mis-calibrated such that it reads 1 K too high will reduce the error in the aggregate below this level save by chance. However, in many cases the systematic error will depend on the particular environment of the thermometer and will therefore be independent from ship to ship. In this case, averaging together observations from many different ships or buoys will tend to reduce the contribution of systematic observational errors to the uncertainty of the average.
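The distinction matters for aggregation, and both cases can be sketched together: per-ship biases that are independent between ships average down, while a single fleet-wide bias does not (all numbers here are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_ships, obs_per_ship = 200, 50
true_sst = 15.0   # hypothetical true value (deg C)

# Each ship has its own systematic bias, independent from ship to ship...
ship_bias = rng.normal(0.0, 0.3, size=n_ships)
# ...plus a bias shared by the whole fleet (e.g. a common measurement practice).
fleet_bias = 0.2  # K, assumed

obs = (true_sst + fleet_bias
       + ship_bias[:, None]
       + rng.normal(0.0, 0.5, size=(n_ships, obs_per_ship)))

# Averaging one ship's readings cannot remove that ship's own bias:
one_ship_err = obs[0].mean() - true_sst
# Averaging across many ships removes the independent biases, not the shared one:
fleet_err = obs.mean() - true_sst

print(f"single-ship mean error: {one_ship_err:+.2f} K")
print(f"fleet-wide mean error:  {fleet_err:+.2f} K (shared bias was +0.20 K)")
```

The fleet-wide mean error hovers near the shared 0.2 K bias no matter how many observations are averaged.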

In Kennedy et al., [2011b] two forms of this uncertainty were considered: grid-box sampling uncertainty and large-scale sampling uncertainty (which they referred to as coverage uncertainty). Grid-box sampling uncertainty refers to the uncertainty accruing from the estimation of an area-average SST anomaly within a grid box from a finite, and often small, number of observations. Large-scale sampling uncertainty refers to the uncertainty arising from estimating an area-average for a larger area that encompasses many grid boxes that do not contain observations. Although these two uncertainties are closely related, it is often easier to estimate the grid-box sampling uncertainty, where one is dealing with variability within a grid box, than the large-scale sampling uncertainty, where one must take into consideration the rich spectrum of variability at a global scale.
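As a minimal illustration of the grid-box case, the sampling uncertainty of a small within-box sample can be approximated by a standard error (the anomaly values are hypothetical; operational estimates treat the space-time sampling within the box more carefully):

```python
import numpy as np

# Five hypothetical SST anomalies (K) observed at scattered points in one grid box.
obs = np.array([0.42, 0.15, 0.58, 0.31, 0.12])

n = len(obs)
box_mean = obs.mean()
s = obs.std(ddof=1)            # estimate of within-box variability
sampling_unc = s / np.sqrt(n)  # naive grid-box sampling uncertainty

print(f"grid-box anomaly = {box_mean:.2f} +/- {sampling_unc:.2f} K")
```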

In the context of SST uncertainty, unknown unknowns are those things that have been overlooked. By their nature, unknown unknowns are unquantifiable; they represent the deeper uncertainties that beset all scientific endeavors. By deep, I do not mean to imply that they are necessarily large. In this review I hope to show that the scope for revolutions in our understanding is limited. Nevertheless, refinement through the continual evolution of our understanding can only come if we accept that our understanding is incomplete. Unknown unknowns will only come to light with continued, diligent and sometimes imaginative investigation of the data and metadata.

JC comment:  Uncertain T. Monster is VERY pleased by this comprehensive discussion of the uncertainties.  The greatest challenges (discussed at length in the paper) are how to assess structural uncertainties in the analysis methods and  how to combine all the uncertainties.  Any application of these data (including trend analysis) needs to consider these issues.

The second paper attempts to slay the uncertainty monster.

Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends

Kevin Cowtan and Robert Way

Abstract. Incomplete global coverage is a potential source of bias in global temperature reconstructions if the unsampled regions are not uniformly distributed over the planet’s surface. The widely used HadCRUT4 dataset covers on average about 84% of the globe over recent decades, with the unsampled regions being concentrated at the poles and over Africa. Three existing reconstructions with near-global coverage are examined, each suggesting that HadCRUT4 is subject to bias due to its treatment of unobserved regions. Two alternative approaches for reconstructing global temperatures are explored, one based on an optimal interpolation algorithm and the other a hybrid method incorporating additional information from the satellite temperature record. The methods are validated on the basis of their skill at reconstructing omitted sets of observations. Both methods provide superior results to excluding the unsampled regions, with the hybrid method showing particular skill around the regions where no observations are available. Temperature trends are compared for the hybrid global temperature reconstruction and the raw HadCRUT4 data. The widely quoted trend since 1997 in the hybrid global reconstruction is two and a half times greater than the corresponding trend in the coverage-biased HadCRUT4 data. Coverage bias causes a cool bias in recent temperatures relative to the late 1990s which increases from around 1998 to the present. Trends starting in 1997 or 1998 are particularly biased with respect to the global trend. The issue is exacerbated by the strong El Niño event of 1997-1998, which also tends to suppress trends starting during those years.

Published by the Royal Meteorological Society, link to abstract.

There is a web site with data and metadata [here], and also an explanatory YouTube video.

The Guardian has an extensive article, excerpts:

There are large gaps in HadCRUT4’s coverage, mainly in the Arctic, Antarctica, and Africa, where temperature monitoring stations are relatively scarce.

NASA’s GISTEMP surface temperature record tries to address the coverage gap by extrapolating temperatures in unmeasured regions based on the nearest measurements. However, the NASA data fails to include corrections for a change in the way sea surface temperatures are measured – a challenging problem that has so far only been addressed by the Met Office.

In their paper, Cowtan & Way apply a kriging approach to fill in the gaps between surface measurements, but they do so for both land and oceans. In a second approach, they also take advantage of the near-global coverage of satellite observations, combining the University of Alabama at Huntsville (UAH) satellite temperature measurements with the available surface data to fill in the gaps with a ‘hybrid’ temperature data set. They found that the kriging method works best to estimate temperatures over the oceans, while the hybrid method works best over land and most importantly sea ice, which accounts for much of the unobserved region.
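To make the kriging idea concrete, here is a minimal 1-D simple-kriging sketch with an assumed exponential covariance and a 1000 km correlation scale. The station positions, anomalies, and covariance parameters are all made up for illustration; Cowtan & Way's actual implementation is considerably more elaborate:

```python
import numpy as np

def cov(d, sill=1.0, length_km=1000.0):
    # Assumed exponential covariance between anomalies a distance d apart.
    return sill * np.exp(-np.abs(d) / length_km)

x_obs = np.array([0.0, 400.0, 2500.0])  # hypothetical station positions (km)
z_obs = np.array([0.8, 0.6, -0.2])      # their temperature anomalies (K)
x_new = 1200.0                          # unobserved location to fill in

K = cov(x_obs[:, None] - x_obs[None, :])  # covariances among observations
k = cov(x_obs - x_new)                    # covariances obs <-> target
w = np.linalg.solve(K, k)                 # simple-kriging weights

z_hat = w @ z_obs            # interpolated anomaly
krig_var = cov(0.0) - w @ k  # kriging variance (shrinks near observations)
print(f"estimate {z_hat:.2f} K, kriging std {np.sqrt(krig_var):.2f} K")
```

The weights fall off with distance, and the kriging variance grows in data-sparse regions, which is how an interpolated field carries its own uncertainty estimate.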

Cowtan & Way investigate the claim of a global surface warming ‘pause’ over the past 16 years by examining the trends from 1997 through 2012. While HadCRUT4 only estimates the surface warming trend at 0.046°C per decade during that time, and NASA puts it at 0.080°C per decade, the new kriging and hybrid data sets estimate the trend during this time at 0.11 and 0.12°C per decade, respectively.
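For reference, decadal trends like those quoted are ordinary least-squares slopes fitted to monthly anomaly series. A sketch with synthetic data (the 0.11 K/decade input echoes the quoted figure; the monthly noise level is an assumption):

```python
import numpy as np

# Synthetic monthly global anomalies for 1997-2012 (192 months), built from an
# assumed 0.11 K/decade trend plus noise -- illustrative, not real data.
rng = np.random.default_rng(1)
years = 1997 + np.arange(192) / 12.0
anoms = 0.011 * (years - years[0]) + rng.normal(0.0, 0.1, size=192)

slope_per_year = np.polyfit(years, anoms, 1)[0]
print(f"fitted trend: {10 * slope_per_year:.3f} K/decade")
```

With only 16 years of noisy monthly data the fitted slope carries substantial uncertainty, which is one reason short-period trends starting in 1997-1998 are so sensitive to coverage and methodology.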

These results indicate that the slowed warming of average global surface temperature is not as significant as previously believed. Surface warming has slowed somewhat, in large part due to more overall global warming being transferred to the oceans over the past decade. However, these sorts of temporary surface warming slowdowns (and speed-ups) occur on a regular basis due to short-term natural influences.

The results of this study also have bearing on some recent research. For example, correcting for the recent cool bias indicates that global surface temperatures are not as far from the average of climate model projections as we previously thought, and certainly fall within the range of individual climate model temperature simulations. Recent studies that concluded the global climate is a bit less sensitive to the increased greenhouse effect than previously believed may also have somewhat underestimated the actual climate sensitivity.

This is of course just one study, as Dr. Cowtan is quick to note.

“No difficult scientific problem is ever solved in a single paper. I don’t expect our paper to be the last word on this, but I hope we have advanced the discussion.”

To give a flavor of twitter discussion:

Dana Nuccitelli:  This new study kills the myth of the global warming pause

John Kennedy:  The irony is that the study being used to bash HadCRUT4 assumes that HadCRUT4 is correct where we have data.

The paper is getting plenty of media attention; I’m also getting queries from reporters.

JC assessment

Let’s take a look at the 3 methods they use to fill in missing data, primarily in Africa, Arctic, and Antarctic.

  1. Kriging
  2. UAH satellite analyses of surface air temperature
  3. NCAR NCEP reanalysis

They state that most of the difference in their reconstructed global average comes from the Arctic, so I focus on the Arctic (which is where I have special expertise in any event).

First, kriging.  Kriging across land/ocean/sea ice boundaries makes no physical sense.  While the paper cites Rigor et al. (2000), which shows ‘some’ correlation in winter between land and sea ice temperatures at up to 1000 km, I would expect no correlation in other seasons.

Second, UAH satellite analyses.  Not useful at high latitudes in the presence of temperature inversions and not useful over sea ice (which has a very complex spatially varying microwave emission signature).  Hopefully John Christy will chime in on this.

Third, reanalyses in the Arctic.  See Fig 1 from this paper, which gives you a sense of the magnitude of grid point errors for one point over an annual cycle.  There is some potential utility here, but reanalyses are not useful for trends owing to temporal inhomogeneities in the datasets that are assimilated.

So I don’t think Cowtan and Way’s analysis adds anything to our understanding of the global surface temperature field and the ‘pause.’

The bottom line remains Ed Hawkins’ figure, which compares climate model simulations with surface observations only for the regions where observations exist.  This is the appropriate way to compare climate models to surface observations, and the outstanding issue is that the climate models and observations disagree.

[Figure: Ed Hawkins’ comparison of climate model simulations with surface observations, masked to regions with observational coverage]

Is there anything useful from Cowtan and Way?  Well, they raise the issue that we should try to figure out some way to obtain the variations of surface temperature over the Arctic Ocean.  This is an active topic of research.

803 responses to “Uncertainty in SST measurements and data sets”

  1. Or perhaps Cowtan and Way found Trenberth’s missing heat in Santer’s workshop?

    • How can we celebrate scientists who claim to have found a way to break through all the noise that in the statistics of AGW is represented by huge error bars — to detect a human signal in the greenhouse warming effect — when humanity’s contribution is immeasurably minuscule at best? Sure, sure, charlatans are persuasive. The reason a human signal due to human activities is impossible to detect within the natural variation of the continually changing climate is because there is no link.

      • Always thought this would be a more prominent point – CO2 lags warming because natural warming creates more life, which leads to more CO2, etc.

        http://hockeyschtick.blogspot.com/2013/11/new-paper-finds-ice-core-co2-levels-lag.html

      • David Springer

        Pencil whipping the data results in more warming than we thought.

        What a shocker. Has applying corrections to bad or missing data in consensus climate science ever resulted in less warming than we thought? Innocent mistakes should go both ways. If they don’t, they’re probably not so innocent. Pencil whipping of data is a notorious way to massage it into giving a desired result instead of a result that is closer to the truth.

      • Wag…I disagree that humanity’s CO2 contribution is miniscule!!! Consider how many cubic miles of coal have been burned over the last century along with the billions of barrels of petroleum (for locomotion and heating) while forests have been cut down to create crop land. Current estimates of CO2 creation exceed CO2 removal by billions of tons per year.

      • Your examples are like a match compared to the energy of the Sun that global warming alarmists ignore.

      • Walter Carlson

        Wag…the estimate is that 9 petagrams of CO2 are added each year, but only 3 petagrams are removed by natural vegetation. So, how can you think the atmospheric CO2 levels are not increasing??

      • My second-hand CO2 is running about 40,000 to 53,000 ppm (parts per million)–i.e., 4% – 5.3% Carbon dioxide (wiki). By comparison, just 0.0387% of the air we’re breathing in is CO2–i.e., about 387 ppm or 0.000387ths by volume.

        If we were planning a mission outside the solar system we’d want lots of CO2 to grow healthy plants for our journey to the stars–e.g., growers keep CO2 levels at 1,000 to 2,000 ppm in Earthly greenhouses, which is about the level you’d find in a lecture hall full of students and pretty much what has been normal over most of Earth’s 550 million year history. Plants begin to die below 150 ppm. The Sahara wasn’t always a desert. Dr. Will Happer testified before the U.S. Senate that, “the planet is currently starved of CO2, and has been so starved for several million years.”

      • Gaia has a real problem, a whole plant kingdom with the vicious and unsustainable habit of virtually permanently sequestering CO2. How did she get so lucky as to develop an animal who could unsequester CO2?
        ========================

      • True, the “new approach to environmentalism,” according to Dr. Patrick Moore, “requires embracing humans as a positive element in evolution rather than viewing us as some kind of mistake.”

      • Walter Carlson

        Wag…as usual, you ignore the big picture and quote useless info. What I stated pertains to the global ecosphere, not a backyard greenhouse!!

      • If the ‘global ecosphere’ acts nothing like a greenhouse, why confuse things by using it as an example? The Left does just that because it’s just too good not to use it as an analogy — even if the Earth really doesn’t work that way — because it fits their narrative — that modernity is heating up the globe (AGW). The Left is pushing AGW for political purposes, even if it means pushing crazy ideas like a backyard BBQ and the SUV you used to pick up the charcoal is contributing to global warming, raising the seas, stirring up more and bigger hurricanes and burying warmth deep in the oceans where it cannot be measured but nonetheless will someday arise to the surface and consume us all like a fiery Phoenix.

      • How altogether fitting that your ad hom attack should be your final bonfire of the vanities. All genius climate alarmists should try this experiment–

        Start a bonfire in your backyard and stare into the flames for an hour.

        Ok, now…

        Go outside on a cloudless day and stare into the sun for 5 minutes.

        You’re blind now, right?

      • Walter Carlson

        Waggie…you have a serious problem with reality!!! You brought up the greenhouse, not me! And YOU seem to have a problem as you label anyone who disagrees with you as ‘the Left’. How long have you been sooo confused??

      • So… the atmospheric CO2 levels increasing at a net rate of 6 “petagrams” a year (as you say) is not worrisome as climate alarmists using the analogy of a greenhouse wish to portray? You have some other reason to be alarmed by the increase? I agree that the increase is so relatively small we must measure it in parts per million but that to you was just meaningless data. What is it that you fear about an increase in atmospheric CO2 if it isn’t heating up the globe through a ‘greenhouse’ effect?

      • Waggie…you remind me of Scarecrow of Oz fame. Anyway, the answer to your question: “What is it that you fear about an increase in atmospheric CO2 if it isn’t heating up the globe through a ‘greenhouse’ effect?”

        Is found in:
        scientificamerican.com/article.cfm?id=new-york-state-begins-planning

      • You are afraid of rising seas. Got it. I heard the election of Obama stopped that.

    • Surely you mean Santa’s workshop.

    • If not, Santer will beat the heat out of them!

  2. Back to the trust issue. How come the changes when they adjust data are always in the same direction? Increased warming today, and yesterday was actually cooler.

    Same link from my trust comment.
    http://stevengoddard.wordpress.com/data-tampering-at-ushcngiss/

    Scott

    • Perhaps the boats took more measurements when they were travelling Eastwards….

        Spot on old chap! Port Out Starboard Home, QED and all that. Going out East, one was all fired up on meeting the fellow travellers, sipping champagne, eh what. Some were learning the intricacies of quoits and some were becoming naturalists measuring the sea temperature. Coming home it was all bridge, deckchairs, malaria and where the hell is my gin and tonic?
        :)

    • they arent data adjustments. its inferring missing data.

      1. The methodology of inferring the surface from SAT data has already been shown to be useful by no less than McIntyre. See O’Donnell and McIntyre.

      2. We know that HadCRUT underestimates the warming at the pole from other surface datasets with more complete coverage.

      The only question is how badly biased HADCRUT is. Way takes a good first step toward answering that question

      • ‘they arent data adjustments. its inferring missing data’
        Quite. Infilling missing data in most scientific traditions is a no-no; in my own field it’s a felony.
        Some of us predicted that the pause would end when the data was reanalyzed.

      • In most sciences inferring missing behavior and data gets you a Nobel prize. Einstein and Brownian motion. Bohr and the photon. Etc.

        DocMartyn whines that its not fair!

      • “Einstein and Brownian motion. Bohr and the photon”?

        Didn’t do history of science, did you? Einstein published a theoretical analysis of the motion of particles suspended in liquids, which describes Brownian motion. However, Einstein was unaware that Brownian motion, as he described it, had already been observed.

        “He wrote in May 1905 to his friend and discussion partner
        Conrad Habicht, a famous letter in which Einstein listed four of the five pathbreaking papers on which he was working during his miracle year. The paper on Brownian motion was, after the paper on the light
        quantum and the dissertation on the determination of molecular dimensions, the third on Einstein’s list, before the relativity paper, which he had only outlined at that time:
        “The third proves that, on the assumption of the molecular theory of heat, bodies on the order of magnitude 1/1000 mm, suspended in liquids, must already perform an observable random motion that is produced by thermal motion; in fact, physiologist have observed motions of suspended small, inanimate, bodies, which motions they designate as “Brownian molecular motion.”

        http://www.physik.uni-augsburg.de/theo1/hanggi/History/Renn.pdf

        This is rather like the discovery of cosmic microwave background radiation by Robert Woodrow Wilson and Arno Allan Penzias, where the theoretical work was being done by Robert Dicke.

      • Think of all the unfinished symphonies that would still be unfinished if somebody didn’t infer the endings. Inference has its place.

      • steven, they are inferring missing data under the assumption that the difference between two measurements is predictable. I think they have a small issue with the Arctic due to inconsistent Stratospheric warming events which should create one mongo temperature inversion. That might be worth checking into.

      • If they are doing something like co-kriging–I suspect this is the case–then they would’ve developed a quantitative model relating the two variables including associated uncertainties. Both the model and those uncertainties could be incorporated into the final kriging calculations.
        (The difference in the two measurements is predictable…it just may be lousy (or good). I suspect/hope the limits of predictability are taken up in the paper. That predictability seems to be a key aspect.) This one will be interesting to follow….

      • Capt.

        Nothing is wrong with using present data to create an estimate for missing data, especially if you do cross-validation and if you get a bias when you don’t infill.
        And I didn’t see you or other skeptics object when McIntyre, O’Donnell, Id and Nic Lewis used a similar method to improve on Steig.

        And nobody bitches when tonyb uses diaries to infill missing temps in CET.

      • mwgrant, “(The difference in the two measurements is predictable…it just may be lousy (or good). ”

        The thing with polar SSW events is that the difference is inconsistent as in reversing phase. I think that can be overcome, but I don’t think it is easy.

      • Steven, “And I did see you or other skeptics object when mcintyre odonnell and Id and nic lewis used a simalar mthod to
        Improve on steig. ”

        I didn’t bitch, I just thought that since the Antarctic is out of phase more often than in phase it was an exercise in futility :) Especially when the change is in -40C temperature ranges where each anomaly should count for 50% of the “average” anomaly.

      • Nothing is wrong with using present data to create an estimate for missing data. Especially if you do cross validation and if you have a bias if you dont infill.

        It’s a legitimate approach; however, there is a significant distance in skill levels between Ruzmaikin and Feynman or Kravtsov et al. and Cowtan and Way.

        http://www.ocean-sci.net/7/755/2011/os-7-755-2011.html

        The Wasserstein distance being around 10 DOF

  3. It’s Way. Robert Way.

    • Mosher says Way does solid work. Presumably that means not like Mann or Lewwhatever. Perhaps we could get a more detailed discussion of the issues at that moderated blog you’ve participated in. (I don’t remember the URL).

      I’m not that impressed by the “pause” anyway, or by “global averages”, since the relative importance of temperatures at different times and places probably varies widely and we don’t know how. But if the observed Arctic temperature field can be filled in better, that would be a valuable accomplishment, useful in validating models, even if not relevant to “global warming”.

      • I know Robert does first-rate work because we’ve been comparing notes and methods and code for well over a year. At one point we spent about 3 months looking at Labrador data from Environment Canada and BEST. He really likes to get down in the weeds.

        he’s not your typical desk jockey and does real live field work placing sensors in remote locations.

        So, I trust his work. why? because I’ve watched him work, watched him discover his own mistakes and my mistakes and I appreciate his attention to detail.

        Of course, folks should double and triple check, but he’s pretty damn solid.

      • That’s all right, Steven, what I was hoping for seems to be happening here.

      • > Lewwhatever.

        That’s Lew. Simpliciter.

      • That’s Lew. Simpliciter.

        But that doesn’t express my disdain.

      • “…doesn’t express my disdain.”

        Actually, it’s hard to express disdain in a way that satisfies. It’s so viscerally perceived that it seems underserved by mere words. At least to me.

        Spitting would work.

      • > Spitting would.

        Go for it, Poker. Don’t forget to wipe your screen with a good lint cloth.

        If your screen needs more care, use ammonia or alcohol.

    • “Spit on your monitor”

      I generally sublimate my disdain through fasting and prayer. Also coloring books.

  4. On the unknown unknowns, I couldn’t suppress a chuckle when I read (in the submitted document):
    ” Donald Rumsfeld memorably divided the world of knowledge into three quarters:….”

    I wonder if he intended it to be that funny, and did it make it into the print version?

  5. “John Kennedy: The irony is that the study being used to bash HadCRUT4 assumes that HadCRUT4 is correct where we have data.”

    Pretty much says it all.

    • Kennedy helped. See acknowledgements.

      • Steven, being correct where you have data is good but anytime you have to create data you have potential problems. “global” surface temperature is not a particularly good metric because “global” temperature is not a particularly reliable indication of “global” energy. The more effort expended trying to tweak GMT in regions where a degree of temperature anomaly is equivalent to half a unit energy for an “average” temperature anomaly is not exactly where I would focus much effort.

        Then that’s just me

  6. Brandon Shollenberger

    I believe that should be Robert Way not Robert Wray.

  7. The Cowtan & Way study agrees with the Wyatt and Curry Stadium Wave hypothesis in that natural variability can explain much of the observed variance in global temperatures. I see this as a positive for Wyatt & Curry.

    While there may be uncertainty in the global SST, there is no doubt that saving energy makes cents (pun intended).
    Check out my entry in the Biggest Energy Saver contest where I compete with every electric customer of Texas’s largest grid operator, ONCOR electric.
    http://www.biggestenergysaver.com/vote/
    My entry is labeled Jack S.

    Note: My solar array was completely owner financed, owner designed and made with 100% USA materials, labor and includes no tax credits or subsidies.

    • No doubt that this is a plus for the Stadium Wave hypothesis. The discrepancies between the main temperature time series of gistemp and hadcrut are vanishing so that the underlying variability can be focussed on, and the uncertainty on that reduced.

      It’s the same as having two clocks with different times. Why stand for that?

    • Jack,
      Thanks for your reasoned posts on HBB. No longer on that site but will try to vote for you in the BES contest. Not on social media either, though looking for a work-around.

      Keep up the good fight.
      ahansen

  8. Dear Dr Curry
    Thank you for your comments. We indeed hope that one of the results of our paper will be to stimulate a vigorous discussion in this area.
    With respect to kriging across land ocean boundaries, we note that this is a problem in the paper. Can I draw your attention to our update memo in which we test separate reconstruction of the land and ocean data before blending, which is in our view a better approach. To do this properly would require access to the HadCRUT4 land ensemble which is not currently distributed, but with the CRUTEM4 data (which lacks some corrections) the results of blending pre- or post-reconstruction is almost indistinguishable, even under different ice-coverage assumptions. (There is no reason why this must be the case, it is a result of the distribution of the unobserved regions). Dynamically changing ice is more difficult, and you can’t do it with anomalies as you don’t know what kind of bias you introduce when changing a cell from land to ocean, so we’ll have to leave that problem to the BEST team.
    Most interesting is the issue of the UAH data over Antarctica. We’ve recently been looking at this with respect to both Vostok and the Bromwich 2012 Byrd reconstruction. Byrd is particularly interesting – it sits on a cell boundary and is remarkably well modelled by the cell to the north in the hybrid reconstruction. The cell to the south models the year-to-year variations, but not the long-term trend. We’ve made some preliminary analysis of what is going on based on differencing north-south transects in the UAH data. Some regions show no significant changes, whereas others show large changes in either direction around 2000. I hope to write this up as another update, and maybe Dr Christy will be able to shed more light on the issue, although I’m afraid everything takes a long time when you’re doing it in your spare time.
    So it may be that kriging is a better approach for Antarctica, especially with remediated data from some of the isolated stations – Byrd is critical here, and I want to do some detailed comparisons with BEST too. Against that, the holdout tests actually favour the hybrid approach for most of the existing station locations, including the SP.
    Having said all of that, the difference between the hybrid and kriging reconstructions of Antarctica is only really significant around 1998, so it doesn’t greatly affect our conclusions. And the Arctic is sufficiently small that the two reconstructions are very similar. Most of the Arctic coverage bias also arises in the NH winter, when the Rigor result is most relevant.
    If I may appeal to your own expertise, there would seem to be a parallel between our results and those of Cohen et al 2012 (doi:10.1088/1748-9326/7/1/014007). Do you think there is a plausible connection?

    • Dr. Cowtan
      Does this mean that the GISTEMP measurements should converge to HadCRUT with kriging adjustments?

      http://imageshack.us/scaled/landing/818/2yd.gif

      The chart above is an overlay of GISTEMP on top of your Fig S6, along with a simple model assuming variability.

      The differences to GISTEMP appear rather minor and are mainly in the last few years.

      Amazing work, congratulations.

    • Dr. Cowtan
      Thanks for coming here and engaging. Though I won’t pretend to understand the science, I respect Dr. Curry and am interested in the unfolding dialog.

    • Dear Kevin, thank you very much for stopping by to engage here. With regards to the UAH data in the polar regions, there are good reasons why RSS doesn’t show data for the polar regions. While I think UAH is hopeless over sea ice, I do see that there could be some sort of a useful signal over the Antarctic continent.

      Can you clarify what you see as the parallel between your results and Cohen et al. 2012?

      In any event, it is good to see some new perspectives on this topic.

      • Actually, the Cohen thing is a curiosity and an unhealthy obsession with patterns. Our bias being greatest in winter, and the boreal-winter cool patterns, caught my attention. But it’s way beyond my expertise, and our dataset is probably not the right one for this problem, and experts like you and Jennifer Francis are already doing good work on this, so I’ll sit on the sidelines and watch.

        Good critical discussion is invaluable to good science. One of the best things we did with this paper was ask for referees who we thought were best qualified to spot the holes in our work. The discussion here is also very helpful to me in shaping a plan of work going forward. I’m sorry I haven’t been able to engage further; it’s a busy teaching time.

        I’m pondering posting what I think are the next steps on dealing with the coverage issue. On the one hand that will tell people what they can expect from us, and also provide a list of interesting projects which we know we can’t take on. On the other hand I don’t want to influence other people’s approach to this problem too much. It’s a difficult call.

      • I really like your approach in engaging publicly on this. Debates and critical discussions are what move science forward. I would encourage you to do a post on next steps to encourage discussion and to generate new ideas; I would be happy to post it at Climate Etc also.

    • Just curious – why isn’t the HadCRUT4 land ensemble available? Is it still a work in progress?

    • Dr Cowtan
      Thanks for your explanations. Two comments and a query:
      1) I do not think the Bromwich reconstruction for Byrd station in Antarctica should be relied on. Almost all the large difference between their fast-warming reconstruction and previous reconstructions for the small grid cell containing Byrd (e.g. that by Steig et al., 2009, and that by O’Donnell, myself, McIntyre and Condon in 2011) arose from splicing the early manned Byrd station record and the later automatic Byrd weather station record into a single record with nil offset, despite the long gap between them and the differences in station location and type. No one else had thought fit to do so.

      2) For our Antarctic temperature reconstruction, we used AVHRR data from polar-orbiting satellites that measured the skin temperature of the Antarctic surface. We found that these displayed sensible spatial correlations, although their trends were unreliable. The MSU atmospheric temperature data that UAH mainly deal with are not really suitable as a proxy for near-surface air temperature over high-altitude snow-covered regions, nor (as I recall) as a proxy for sea surface temperature.

      3) Can you clarify exactly what satellite data you used, please? I have been unable to tell from the documents that you have made publicly available – perhaps I have missed it.

  9. Why would you spend the time to publish a paper using methods so easily refuted? Just to get a Nuccitelli-like response from the cheerleading squad? I am far from a conspiracy theorist, but papers like this look more like damage-control propaganda than science. The more I learn about academia the less reputable it seems. Judith, you are an exception.

    • A bit of a retraction. My first sentence is an actual question, not rhetorical. The rest of my post was a bit ham handed and unfair to Dr. Cowtan and Way. My apologies.

    • “Why would you spend the time to publish a paper using methods so easily refuted?”

      They haven’t been refuted.

      ” I am far from a conspiracy theorist”

      This is directly contradicted by the evidence of your own post.

      “The more I learn about academia the less reputable it seems. Judith you are an exception.”

      This is strictly a consequence of a) you don’t like what they say and b) you do like what she says … and that’s why you take her comments to be a refutation of the paper, despite your own inability to evaluate either.

  10. First, Kriging. Kriging across land/ocean/sea ice boundaries makes no physical sense. While the paper cites Rigor et al. (2000) that shows ‘some’ correlation in winter between land and sea ice temps at up to 1000 km, I would expect no correlation in other seasons.

    Response [1] Actually, in the paper we show through rigorous cross-validation tests (see Table 1; Table 2; Figure 3) that kriging is an effective approach for estimating temperatures, even across boundaries. However, the hybrid approach performs better than any other method at reconstructing high-latitude temperatures (see Figure 3 – cross-validation), even at distances of 1650 km. In the case of sea ice this hypothesis has been tested (see Figure 4), where it is shown that kriging from land regions outperforms kriging from ocean cells.

    Second, UAH satellite analyses. Not useful at high latitudes in the presence of temperature inversions and not useful over sea ice (which has a very complex spatially varying microwave emission signature). Hopefully John Christy will chime in on this.

    Response [2] As indicated in the response to the 1st comment – we have tested the methodology adopted in this study against both held-out observations and against grounded/floating buoys in the Arctic Ocean, often located on sea ice. The results of our study indicate that the performance of the hybrid method is reasonable over ice (Figure 4; Figure S5). We also provide an attempt at showing the impacts of changing sea ice conditions on the reconstruction. Although not available in the supplemental information, we have also tested the method in the Antarctic against the reconciled Byrd station, located in one of the most icebound, isolated places on the planet. The results of this test show very reasonable performance with the hybrid method.

    Third, re reanalyses in the Arctic. See Fig 1 from this paper, which gives you a sense of the magnitude of grid point errors for one point over an annual cycle. Some potential utility here, but reanalyses are not useful for trends owing to temporal inhomogeneities in the datasets that are assimilated.

    Response [3] Since the paper in question was published there have been significant advances in reanalysis methods. In particular, 4-D methods such as those employed by ERA-Interim have been shown to be much more reliable in the Arctic and Antarctic. There is a series of papers by James Screen at Exeter which delve into many of these issues and examine the performance of reanalysis products in both the Arctic and Antarctic. I would suggest that Dr. Curry take a bit of time to have a look at the results of some of these studies. That being said, the paper does not use reanalysis to infill temperatures, nor do we use it with the kriging; reanalysis is simply presented as an additional source of evidence, in addition to satellites, radiosondes and isolated weather stations, which shows that the Arctic is rapidly warming. Physical evidence is also available in the form of sea ice reduction and glacier changes, as well as melt records from high Arctic ice caps. There is a wealth of literature supporting the conclusion that the Arctic is warming rapidly, and this relationship (Arctic amplification) is clear in the paleorecords.

    • James, thanks for stopping by and engaging here. I agree that there is evidence of warming in the Arctic, however, I remain unconvinced that your methods are verified in any meaningful way for surface temperatures of open water and sea ice in the Arctic Ocean. I see no reference to papers by James Screen in your paper, I don’t know what papers you are referring to. I have recently done a comprehensive literature survey regarding in situ surface temperature and surface flux measurements in the Arctic Ocean (for a grant proposal). I have not seen any recent studies evaluating reanalyses using these data sets.

    • David Springer

      If Arctic temperatures in the modern era, when we have more and better instrumentation than ever before, have been shown to be poorly estimated, what does that say about estimates of Arctic temperatures before the modern era?

      I’ve long held that the data anthropogenic warmists need to find hundredths of a degree of warming in the global average simply doesn’t exist even now, and it just gets progressively farther from adequate with every year it steps back in time.

      In other words if the Herculean efforts to produce accurate GAT were mistaken up until this paper was published in 2013 how mistaken are estimates of what was happening one hundred years ago? How can we possibly compare now to then with any confidence? Answer is simple: we can’t. Yet ideologues continue to massage here, interpolate there, and then present the results as proof of their predetermined conclusions. What a load of BS. The higher and deeper it gets stacked the less scrutiny it takes to see it. That’s why consensus climate science is losing the war for the hearts and minds of everyone outside the field.

      As always, write that down.

    • Whether kriging works over boundaries (by which I assume is meant discontinuities) surely depends on where you sample.

      Provided you sample near both sides of the boundary I don’t see how a problem can arise.

      Conversely, if the boundary is far from any samples, I don’t see how knowledge of the respective covariances on each side of the boundary can tell you anything at all about where the boundary is other than that it is between certain samples. Covariances between samples on opposite sides of the boundary are presumably useless other than as a diagnostic that there’s a boundary.
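      For readers following the kriging debate above, here is a minimal one-dimensional sketch of simple kriging. It is a toy illustration only: the exponential covariance, 500 km length scale, and sample values are illustrative assumptions, not choices from the paper under discussion.

      ```python
      import numpy as np

      def simple_krige(x_obs, y_obs, x_new, length_scale=500.0, sigma2=1.0):
          """Simple (zero-mean) kriging with an exponential covariance.
          Distances are in km; all parameters are purely illustrative."""
          def cov(a, b):
              return sigma2 * np.exp(-np.abs(a[:, None] - b[None, :]) / length_scale)
          # Kriging predictions come from solving K a = y, then weighting
          # by the covariance between observed and prediction points.
          K = cov(x_obs, x_obs) + 1e-9 * np.eye(len(x_obs))  # jitter for stability
          k = cov(x_new, x_obs)
          return k @ np.linalg.solve(K, y_obs)

      # Anomalies sampled on both sides of a notional 800 km gap:
      x_obs = np.array([0.0, 400.0, 1200.0])
      y_obs = np.array([0.5, 0.7, 1.1])
      est = simple_krige(x_obs, y_obs, np.array([800.0]))
      ```

      The point the comment makes holds in the sketch: the estimate in the gap is controlled by the covariances with samples on both sides, so sampling near both sides of a boundary is what makes the interpolation informative there.
      
      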

  11. Overall, I would advise that commentators read the full paper and the supplemental materials before making assertions as to the applicability of certain methodologies. The cross-validation steps taken in this paper are very important and the paper shows rather clearly that the Hybrid method in particular appears to be fairly robust even at long distances from adjacent cells.

    • Can I ask a very simple question?
      Did you remove individual stations, at random, then calculate the temperature at that site, then compare the real with the calculated?
      I believe that such calculations are the only way to know how well your model captures reality and where and how it fails.

      • Doc, in Cowtan’s first post, he mentions something about holdouts, so they do seem to be checking the corrections on a holdout sample. People sometimes do what you’re suggesting to estimate the covariance matrix of an estimated parameter vector (eliminate each observation, use the remaining N-1 to estimate, get N parameter vector estimates, calculate matrix using them… in the circles I know they call this “the jack-knife estimator”)

      • There is a link to a youtube video which very briefly describes what they did.

      • DocMartyn, you can check the video. They removed large areas as a test to see how well their method infilled the missing data regions.
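      The “jack-knife estimator” mentioned a few comments up can be sketched in a few lines. This is a generic illustration of the leave-one-out technique applied to a sample mean, not the paper’s actual validation code; the data values are made up.

      ```python
      import numpy as np

      def jackknife_mean(y):
          """Leave-one-out jack-knife: drop each observation in turn,
          re-estimate the statistic, and use the spread of the n
          leave-one-out estimates to gauge its uncertainty."""
          y = np.asarray(y, dtype=float)
          n = len(y)
          loo = np.array([np.delete(y, i).mean() for i in range(n)])
          est = loo.mean()
          var = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)
          return est, np.sqrt(var)

      # Toy anomaly sample (values are illustrative):
      est, se = jackknife_mean([0.2, 0.4, 0.1, 0.3, 0.5])
      ```

      For the mean, the jack-knife standard error reproduces the classic s/√n formula exactly; its value is that the same recipe works for statistics with no closed-form error, which is why it shows up in validation exercises like the one discussed here.
      
      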

    • Steve Fitzpatrick

      Good suggestion. Unfortunately the article is pay-walled… and they won’t even tell you the price until you provide them with your credit card. Too bad.

      • So, let’s ignore it and make uninformed criticisms.

        Go team Skeptic!

      • stevefitzpatrick

        Michael,
        Did I make an uninformed criticism? Nope. But you sure did. Get a life pal.

      • Steve,

        Apologies – I wasn’t meaning you in particular, because you didn’t.

        But if people are so interested that they want to critique, why not cough up a few bucks and read the damn thing…..or just be quiet??

    • From the SI:

      Arctic temperature anomalies are changing approximately 8 times faster than temperatures over the planet as a whole. As a result, the comparatively small unobserved region in the Arctic can significantly bias the global temperature trend.

      Wow. Eight times. It sure is a good thing that we are able to remove that substantial cooling bias from the global temperature trend, for the period 1997-2012.

      What about the rest of the record? What do your results show for Arctic temperature bias for, say … 1925 – 1942? Or 1960-1977? How badly was the Arctic temperature bias screwing up the reported global temperature trend then? What does the record show, now that you’ve removed it?

      • The Arctic warming is a lot faster more recently, witness sea-ice trends.

      • Jim D

        The Arctic warming is a lot faster more recently,

        More recently than when? 1925-1942? 1960-1977? Can’t say that without data.

        It sure will be nice when Robert Way comes back and shows us what the comparable trends were, now that they have removed the Arctic temperature bias from the temperature records from those periods.

        And I wonder – given the see-sawing that seems to occur between the Arctic and Antarctic, when over the last century was there an Antarctic temperature bias that was not previously accounted for in the global temperature record?

        I can’t wait to learn what these exciting new results say about these interesting and important questions!

      • Well, it would appear that the Robert way is to vanish as quickly as one appears. Attention drawn off by more important matters, one supposes. Perhaps a new shipment of uniforms has arrived over at SS, and he is needed for a rather different kind of modeling activity.

        Pity.

  12. The paper is getting plenty of media attention, I’m also getting queries from reporters.

    Greet David Rose for us!

  13. Somewhat on topic, and it assumes John Kennedy will be scanning the comments. Could you please give us an update on the status of the HADISST2 dataset, John?

    Cheers

  14. I think it is awesome that the authors defend their paper on the blog where it was challenged. I am not competent to judge the issues myself, but I’m guessing that I will soon have a clear impression of where people end up. This is a thousand times better than each group publishing proofs in their own echo chambers for their own fans.

    • +1

    • Agree mike. Non-scientists are well served in these exchanges. After a while, you can get a sense of who’s on more solid ground, even without fully understanding the science. Sneering elitist warmists like Web, who would, if they had their way, require a Ph.D in physics to be produced at the voting booth, will never acknowledge that.

      • Well, an algebra test might not be such a bad idea.

      • Hey PG, someone got moved into the BoreHole at RealClimate for saying that Dr. Cowtan is a mere X-ray crystallographer.

        Well, I got my PhD in electron diffraction and did X-ray work as a post-doc, and if there is one thing these dudes are good at, it is reconstructing reality from reciprocal space and limited data.

        I suppose that is too elitist for you..

      “Well, an algebra test might not be such a bad idea”

        Barely passed algebra. In my Junior year my homeroom teacher tossed my report card on my desk and said, “Congratulations. Full house.” I opened it up to find 3 D’s and 2 F’s. Never held me back. Had a successful business career, and have achieved some literary success writing personal essays since then.

        My general point, obvious as it is, is that there are different forms of intelligence and more than one way to solve problems.

  15. JC, there is an author error for Robert Way (who is commenting here).

  16. When different temperature time series are discussed, it’s good to remember that the warming cannot be described by any single time series. The global average surface temperature is just one proxy for the warming, and not necessarily the best proxy by objective criteria.

    A good proxy
    – Is closely correlated with phenomena that affect us.
    – Can be determined accurately and unambiguously from measurements.
    – Has little random variability.

    From the list given above we can conclude that a good proxy is not strongly influenced by surface temperatures in regions where they vary exceptionally strongly without a correspondingly strong effect on anything else. That may be the case, to give just one example, for winter temperatures at high latitudes in areas where the influence of temperature inversions varies widely.

  17. Help me understand this. When paper 2 says that HadCrut4 “covers 84% of the globe,” does that mean that 84% is fairly represented by direct measurement while 16% has to be inferred by some method? And, the inference procedure that differs between HadCrut4 and paper 2 just makes inferences for that latter 16% of the globe?

    If the answer is yes, then the results stated in the Guardian seem hard to believe. 16% is about one-sixth of the global area. HadCrut4 and paper 2 get a difference of 0.115 – 0.046 = 0.069 dCent/decade in the average warming rate over the entire globe. To get that, HadCrut4 and paper 2 would be asserting a 6 × 0.069 = 0.414 dCent/decade difference in average warming rates over the unsampled one-sixth of the globe. That’s roughly an order of magnitude faster warming than the claimed global average rate in HadCrut4 (0.046).

    I suppose this is possible if polar warming is predicted (say by the models) to be 10 times the global average, but I don’t recall hearing such a big multiplier.
    Or am I misunderstanding the numbers?
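    The back-of-envelope weighting in the comment above can be made exact; the figures here are taken from the comment itself (0.84/0.16 coverage split, 0.046 and 0.115 °C/decade), and whether they match the paper is the commenter’s question, not something settled here.

    ```python
    # If the observed 84% of the globe trends at the HadCRUT4 rate and the
    # globally complete estimate is 0.115 C/decade, back out the implied
    # trend over the unobserved 16%.
    covered, missing = 0.84, 0.16
    t_obs, t_global = 0.046, 0.115
    t_missing = (t_global - covered * t_obs) / missing  # implied unobserved trend
    ratio = t_missing / t_obs                           # multiple of observed rate
    ```

    The exact weighting gives about 0.48 °C/decade rather than the comment’s 0.414, because the multiplier is 1/0.16 = 6.25 (not 6) applied to the difference, plus the base rate of 0.046. Either way, the conclusion stands: the unobserved sixth must be warming roughly ten times as fast as the observed rate.
    
    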

  18. What was the reasoning behind using UAH and not RSS data? Additionally, would not a comparison methodology substituting RSS for UAH data be informative?

  19. Thanks, Professor Curry, for this report.

    It encourages me to go ahead and boldly identify the elephant in the living room that has been danced around and ignored by mainstream scientists for the past sixty-eight years (2013 – 1945 = 68 years).

  20. “with the hybrid method showing particular skill around the regions where no observations are available.”
    Incredible [in the Princess Bride sort of way]
    Any method must show considerable skill when there are no other comparisons to the results.
    Could they try chicken entrails, for example, Steve?
    After all, a bad model is better than none.
    Or wait.
    There are a whole lot of Climate Models at the IPCC just waiting to be compared

    • angech,

      You misunderstand the Way of Warm. Chicken entrails perform best for predicting future data, in lieu of future observations. I have used them many times, and neither the Team, the IPCC, nor even WebHubTelescope has been able to disprove the results.

      What you need for creating data where none exists, either in the past or the present, is to use “Runestones of Power”. These show incredible skill, not just considerable skill, particularly where the results cannot be confirmed by observation.

      Even cheaper is to just guess. Nobody can prove you wrong.

      Live well and prosper,

      Mike Flynn.

    • angech, I don’t think that’s what was meant. I think “around the regions” means “at sampled, direct measurement points near the boundaries of the unsampled regions.”

  21. Cowtan and Way: Your method presents a short-term trend (1997-2012) that’s even greater than GISS: 0.11 deg C/decade and 0.12 deg C/decade for your infilling methods versus 0.08 deg C/decade for GISS.

    Part of the GISS warming bias results from their masking of sea surface temperatures in areas where sea ice can exist and replacing that sea surface temperature data with land surface air temperature data.
    http://bobtisdale.files.wordpress.com/2012/04/figure-14.png
    Full post is here:
    http://bobtisdale.wordpress.com/2012/04/13/the-impact-of-giss-replacing-sea-surface-temperature-data-with-land-surface-temperature-data/

    How does your method address this bias?

  22. Concerning the SST measurements over the last 100+ years, the only strange anomaly I have come across in my own analysis work using the CSALT model is a warming glitch starting in late 1943 and lasting into 1944 before declining.

    This spike is only weakly associated with a SOI peak and is suspicious as it corresponds to many missing temperature readings during the war years. It also emerges in the land-only data.

    A warming spike also occurs in 1939. The big SOI event occurs in 1941 which does show up in the data.

    It may just be coincidence but Kennedy does say that the uncertainties are “particularly large in the period surrounding the Second World War owing to a lack of reliable metadata”.

    • WebHubTelescope,

      “In the context of SST uncertainty, unknown unknowns are those things that have been overlooked. By their nature, unknown unknowns are unquantifiable; they represent the deeper uncertainties that beset all scientific endeavors. By deep, I do not mean to imply that they are necessarily large. In this review I hope to show that the scope for revolutions in our understanding is limited. Nevertheless, refinement through the continual evolution of our understanding can only come if we accept that our understanding is incomplete. Unknown unknowns will only come to light with continued, diligent and sometimes imaginative investigation of the data and metadata.” – Kennedy.

      You say you have come across a strange anomaly. Why would you find this strange?

      As Kennedy says, you may need a little bit of “imaginative investigation”.

      As I become more fluent in the language of the Book of Warm, this means use your model results to create the data which should have existed, according to your model, and then adjust actual observations to fit. Voila!
      No more stupid anomaly!

      No thanks necessary. I am glad to be able to help.

      Live well and prosper,

      Mike Flynn.

      • Willard, “But why would we need models if we can access reality by looking at the data?”

        You don’t need models for anything other than describing data. Your model may find errors in the observations, but you never assume the model first unless it is as solid as a rock aka a physical law. So when your model butts head with physical laws, ya need to proceed with caution.

        You seem to think that that attitude is some sort of failing on my part and accuse me of being a cherry picker. Which is extremely humorous.

      • > So when your model butts head with physical laws, ya need to proceed with caution. You seem to think that that attitude is some sort of failing on my part and accuse me of being a cherry picker.

        Thank you for asking, Cap’n. Perhaps I can clarify two points.

        The first is that your complaint may be seen as trivial: all models are wrong. All models will butt head with some physical laws. Numerical methods oblige.

        The second is that I did not wish to accuse you of being a cherry picker, but to show how easy it would be to dogwhistle it with a counterfactual like “If reality don’t suit you […]”. I have no idea if you really cherry pick or not, and quite frankly I don’t care.

        Taken together, the two points amount to suggesting that it might be more fruitful to argue about the models’ usefulness than to entertain mind-probing counterfactuals on the basis of a trivial property of models.

        Hope this is clearer.

      • Willard, you should be more observant of the two engaged in conversation. Webster’s warmth and charity is a little less obvious than most, which tends to set the tone of the discussion. I sowed the seed of his model being able to find blemishes in the data and am enjoying my harvest.

    • Web, SSTs were measured by examining ships’ logs. Before WWII ships would take the most economical routes. In the run-up to WWII routes changed in the Atlantic, and also, with the breakdown in relations between Japan and America, in the Pacific.
      The war saw ships using different routes, and slow, coal-burning ships were replaced by faster, oil-fuelled ships.
      After the war the trade routes used by shipping were completely altered with respect to the pre-war years. Japan and (West) Germany didn’t return to pre-war GDP until 1959/60.

      • And the ships were probably more worried about dodging torpedoes and evading aircraft than slowing down to take accurate temp. measurements.

      • Kennedy says that “During the war years 0.2K was added to reflect the additional uncertainty during that period”

        This is an uncertainty level and not an offset, but it is curious that the only time that the CSALT model residual error stays above 0.1K for any length of time, and actually reaches 0.2K is from the years 1938 to 1945.

        The WWII temperature anomaly numbers are suspect as the CSALT model also substantiates.

      • Webster, “This is an uncertainty level and not an offset, but it is curious that the only time that the CSALT model residual error stays above 0.1K for any length of time, and actually reaches 0.2K is from the years 1938 to 1945.”

        BEST “global” is supposed to look into that with kriging, which should answer a few questions. I doubt there will be much change, though, based on the land surface temperatures which BEST can use in their kriging.

      • ” captdallas 0.8 or less | November 14, 2013 at 11:18 am |

        Webster,

        Oceania”

        Ahh, I can always count on Cappy for a heaping dose of MISDIRECTION.

        Other side of the globe buddy.

        It appears that the major portion of the WWII anomaly is due to contributions from the Arctic zonal region. The CSALT model residual of the GISTEMP series has dual spikes that straddle a broad Arctic peak during the war years:
        http://img198.imageshack.us/img198/1193/hdro.gif
        On the right hand side is a finer resolution which highlights the two spikes occurring at 1939 and 1943 in the Arctic and how they line up with the CSALT model residual spikes.

        The CSALT model does not use data that is specific to the Arctic so that the temperature anomaly could possibly be of a mechanism other than one of the CSALT indices, or it could be a measurement error in the Arctic.

        The data is described here by KevinC
        http://www.skepticalscience.com/print.php?n=1378

      • Webster, “It appears that the major portion of the WWII anomaly is due to contributions from the Arctic zonal region.”

        Oh really? After breaking the SST into 10-degree latitude bands and weighting them by actual area, it kinda looks like 5S-5N, 5N-15N and 5S-15S had the largest impacts, while 55N-65N was actually rising in temperature, though it did likely start the ball rolling in the 1920s. Since the Oceania surface temperatures tend to agree with that, I would say you are trying to blow smoke up someone’s arse :)
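        The “weighting by actual area” mentioned above is straightforward: on a sphere, the area between two latitudes is proportional to the difference of the sines of those latitudes. A minimal sketch, with band edges chosen to match the bands named in the comment:

        ```python
        import numpy as np

        def band_weight(lat_lo, lat_hi):
            """Fraction of the sphere's surface between two latitudes
            (degrees): proportional to sin(lat_hi) - sin(lat_lo); the
            whole sphere spans sin(90) - sin(-90) = 2."""
            return (np.sin(np.radians(lat_hi)) - np.sin(np.radians(lat_lo))) / 2.0

        w_tropics = band_weight(-5, 5)    # 5S-5N: ~8.7% of the globe
        w_subpolar = band_weight(55, 65)  # 55N-65N: ~4.4% of the globe
        ```

        The equatorial 10-degree band carries exactly twice the weight of the 55N-65N band (sin 65° − sin 55° = 2 cos 60° sin 5° = sin 5°), which is why tropical bands dominate an area-weighted SST average even when high-latitude anomalies are larger.
        
        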

      • More misdirection, Cappy?
        How quaint.

        Arctic amplification amplifies the noise. That is essentially what I am looking at this point.

      • Webster, “Arctic amplification amplifies the noise. That is essentially what I am looking at this point.”

        Then you should notice that there is more amplification in the 30N-60N latitude band because of the “choke point”, the rapid reduction in the sea-surface-to-land area ratio. 45N has the highest variance and should be a good break point, which is why I used the 65N-45N, 45N-45S and 45S-65S areas to show the “waves”.

        The problem is finding a starting point; 1910-1920 is actually a nearly synchronized volcanic/solar push down on a weakly damped ~200-year recovery, by the looks of it. That’s why you get an amplified rebound in ~1940, with that fairly consistent ~30-year lag.

        Arctic amplification “should be” the biggest, but 30N-60N “is” the biggest amplification.

      • btw Webster, if kriging mapped variance based on actual energy instead of just temperature it would pick up more of the blemishes :)

      • Kevin Cowtan intersects the work of John Kennedy with this post:
        http://skepticalscience.com/hadsst3_a_detailed_look.html

        The CSALT model has the largest residual in the early 1940’s:
        http://imageshack.us/a/img534/3678/nj7d.gif
        This spike sticks out like a sore thumb on the GISS series as well as the other ones.

        If what Kennedy says is correct, ship crews didn’t fuss with the trailing buckets and he thinks the temperatures were high during the WWII years because the thermometers were near the engine room intakes.

        Knock that down by 0.2C as Kennedy says is the uncertainty and the model actually predicts that this is an instrumental measurement error. That’s what models are good for !

      • Webster, “Knock that down by 0.2C as Kennedy says is the uncertainty and the model actually predicts that this is an instrumental measurement error. That’s what models are good for !”

        You hit the nail on the head: if reality don’t suit you, model it away. Since standard kriging wasn’t quite good enough, hybrid it with stratospheric readings until you get it right. If you want AGW to start in 1900, declare that 1900 was “normal”. If you don’t like the recovery pop and drop in 1941, ignore the coastal tropical surface station data and massage that away.

        That is what models CAN be good for.

      • > If reality don’t suit you […]

        If we had a more direct access to reality, Cap’n, do you think we’d bother with models?

      • You do have more access to reality, you just choose to spend money on supercomputer climate models instead.

      • Willard, there was a CAN there. Models and reality are both imperfect. What Webster is thinking is that his model justifies ignoring other real evidence, reality, that the 1941 SST event is real. He is letting his confirmation bias get in the way. I am not advocating revising history or ignoring data.

        Cowtan and Way had a nice paper that has potential but stopped at the results they liked when they could have taken the next step and had a great paper. Now they seem to think that their model is good enough to rewrite some history. I think history can take care of itself.

        In ten years this will be part of history, probably an interesting chapter.

      • That it remains an interesting footnote in ten years would be a success, Cap’n.

        If you like a dataset, just call it reality, right?

      • Willard, “If you like a dataset, just call it reality, right?”

        I understand that your forte is not the hard sciences, so you don’t realize that the C&W paper is effectively changing the freezing point of salt water to ~+4C, which is a physical impossibility, but that is the changing of “reality” that is being proposed.

        Logic and esoteric debate are a wonderful pastime, but actually applying them tends to elude some folks.

      • I love it when you resort to ad homs, Cap’n.

        You just got caught conflating your favorite data with reality, you know. Acknowledging this only forces you to admit you also need to rely on something like a model, which means your remark reduces to “my model is better than yours”. But then you’d have to put forward a model, and argue why your model is better than the ones you criticize. For instance, you can take the Lewis gambit and pretend your model is (more) empirically based, as if such empiricism was free of dogmas.

        And since you want to play tough, this ain’t your turf at all. Epistemology 101, really.

      • Willard, “You just got caught conflating your favorite data with reality, you know. ”

        That is complete BS, I don’t have favorite data, I look at data. When you switch from an SST metric limited by the physical properties of water to a surface air temperature at the end, you will get an anomaly spike. Since the surface air temperature is multiple tens of degrees lower than the SST, you have apples and oranges. Now if they provided a reason to mix the metrics, their method would be useful, especially for determining the impact of sudden stratospheric warming events. As it is though, it is a misrepresentation of the SST data.

        You of course cannot realize this because it is well outside your field of expertise which I am still trying to determine.

      • ” captdallas 0.8 or less | November 15, 2013 at 11:11 pm |

        Willard, there was a CAN there. Models and reality are both imperfect. What Webster is thinking is that his model justifies ignoring other real evidence, reality, that the 1941 SST event is real. He is letting his confirmation bias get in the way. I am not advocating revising history or ignoring data. ”

        Kevin Cowtan has a SkS post from yesterday on SST Bias
        http://www.skepticalscience.com/the_other_bias.html

        Scroll to the bottom of the comments and you can see my addition of the WWII correction.

        There is much evidence that the SST temperatures from 1940-1945 were biased warm by about 0.1C due to less rigor in measurements. This is understandable as the merchant marine were more concerned about attracting nearby U-boats than dragging a bucket behind their ship to get good temperature readings. Because the default thermometers were near the engine intake, the temperatures were biased high until the war ended.

        http://img809.imageshack.us/img809/6500/zrj.gif
        View it and weep.
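For concreteness, the step correction being described can be sketched as follows. Everything here is illustrative: the anomaly values, the year range, and the flat 0.1C offset are assumptions for the sketch, not the actual HadSST3 adjustment, which varies in time and space.

```python
# Hypothetical sketch of the WWII warm-bias correction discussed above:
# subtract an assumed flat 0.1 C offset from the war years only.
# (Real adjustments are time- and space-varying; this is illustrative.)

WAR_YEARS = set(range(1940, 1946))  # 1940-1945 inclusive
BIAS_C = 0.10                       # assumed warm bias, degrees C

def correct_wwii_bias(series):
    """series: dict of year -> SST anomaly (C); returns a corrected copy."""
    return {yr: round(t - BIAS_C, 4) if yr in WAR_YEARS else t
            for yr, t in series.items()}

raw = {1939: 0.02, 1941: 0.15, 1943: 0.18, 1946: 0.05}
print(correct_wwii_bias(raw))
```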

      • Webster, ” Because the default thermometers were near the engine intake, the temperatures were biased high until the war ended.”

        So that is why the cooling anomaly occurred during the war? Then prior to the war the confidence degrades the further back in time you travel.

      • webster, if it is engine intakes, that would be obvious in each and every area of the ocean. Simple instrumentation error.

        https://lh4.googleusercontent.com/-_6Ae9_Qkoek/UofH_DwH5xI/AAAAAAAAKiY/ZKDY2xWO8gQ/w677-h431-no/bucket+bias.png

        Where is it?

      • OMG Webster! It looks like the buckets-intake virus is contagious!

        https://lh6.googleusercontent.com/-oXDS9d9Jlog/UofZNxxJkXI/AAAAAAAAKjA/fWfmWoKE9s8/w867-h453-no/best+and+GIS+get+bucket+bias.png

        It infected BEST Tmax, Tmin and GISS dTs! Oh the Humanity! Why would a full 0.2C of error just randomly spoil a perfectly good correlation?

      • captd, I thought BEST was land only. Since when did they use buckets for land?

      • JimD, “captd, I thought BEST was land only. Since when did they use buckets for land?”

        They didn’t, it is sarcasm. Webster believes that the SST anomaly during WWII is an obvious mistake that needs to be removed. Oddly his “mistake” shows up in the land data Tmax, Tmin just like a real “global” event might. Per Webster, Cowtan and Way’s “superior” kriging method “proves” that the switch from buckets to intakes is a glaring error. I say if the same thing is in all the other data, they just might be wrong.

      • > I don’t have favorite data, I look at data.

        My mistake: you look at data, Cap’n. Then you see or feel reality. But why would we need models if we can have access to reality by looking at the data?

        Thus we get back to the first question. That means you’ll have to do better than that if you wish to divert me from it.

        Thank you for the other ad hom, Cap’n.


      • captdallas 0.8 or less | November 16, 2013 at 3:50 pm |

        OMG Webster! It looks like the buckets-intake virus is contagious!

        https://lh6.googleusercontent.com/-oXDS9d9Jlog/UofZNxxJkXI/AAAAAAAAKjA/fWfmWoKE9s8/w867-h453-no/best+and+GIS+get+bucket+bias.png

        It infected BEST Tmax, Tmin and GISS dTs! Oh the Humanity! Why would a full 0.2C of error just randomly spoil a perfectly good correlation?

        Cappy, you are always so full of it. The peaks are real as the year 1941 had a full-blown El Nino event as evidenced by a strong SOI extremum.
        http://img202.imageshack.us/img202/9397/o51.gif

        The 0.1C correction comes about because it is understood that the sea temperatures were exaggerated on the warm side during WWII.

        Furthermore, as far as the sharp Northern Hemisphere peaks in 1938 and 1944, those are spikes that are not captured by the SOI or other CSALT components.
        You can see these clearly in the Atlantic and the land areas here:
        http://imageshack.us/a/img585/5273/y6w.gif

        I can add an AMO index as the Hurrell difference in pressures to capture that.

      • CaptnDallas

        “OMG Webster! It looks like the buckets-intake virus is contagious
        [snip link]
        It infected BEST Tmax, Tmin and GISS dTs! Oh the Humanity!…”

        Ooooh!!! I like that. [Really] But one nit. I have to keep working on you to stop the slander:

        “Per Webster, Cowtan and Ways, “superior” kriging method “proves” … etc. etc.”

        I’m watching you.

        mwgrant

      • Webster, “The 0.1C correction comes about because it is understood that the sea temperatures were exaggerated on the warm side during WWII.”

        And if they correct for that front peak they will add to the valley at the end. The data “might” have a wart. That wart is inside the margin of error. You live with some warts. Since the BEST Tmin, Tmax and GISS dTs all have a similar wart with what should be expected lags, that wart just might not be a wart. Given that each latitude band has its own seasonal oscillation, one really should expect those warts.

      • Verdammte strikeouts on ‘kriging’ didn’t take. Phooey on WordPress.

      • mwgrant, my apologies :)


      • “Per Webster, Cowtan and Ways, “superior” kriging method “proves” … etc. etc.”

        I’m watching you.

        mwgrant

        lusers.
        MNFTIU

      • @WHUT

        Sorry sweets, Captn’s comment was funny. That is about where you should have said ‘touché’ and proceeded. Lighten up.

    • Matthew R Marler

      WebHubTelescope, as a respecter of your model but not yet a believer, I would like to repeat my question from yesterday:

      What rates are we talking about? Say the ECS to a doubling of CO2 is 3C and the concentration of CO2 doubles: how long does it take the ocean surface to warm up by 2.8C? A year? two years?

      In your thinking globally, but not in your model, there is a long lag between the transient response and the equilibrium response at the surface. If for the sake of argument we take the ECS to be 3C, I think that the mean surface temperature rise of 2.8C would occur in under two years, compatible with your model not entailing much of a lag between CO2 change and near-surface near-“equilibrium” (i.e., bearing in mind that no “equilibrium” per se ever occurs).

      • Marler,
        You have to understand how the transient response works. On the CSALT model, there is a lag response that you can adjust. Make that longer and the ECS will increase, since the forcing is reduced initially.

        In practice, the response is diffusional, which is a fast transient followed by a fat tail.
        The paper by Caldeira and Myhrvold describes this, which I blogged here,
        http://ContextEarth.com/2013/11/13/simple-models-of-forced-warming

        Because of the fast transient, a good approximation is to just use the TCR and then assume a gradual uptake for the fat-tail as it approaches an ECS.

        I could use my own diffusional approximation but I don’t think the world is ready for it yet. It is better to keep it at this level of abstraction.

      • Matthew R Marler

        WebHubTelescope: I could use my own diffusional approximation but I don’t think the world is ready for it yet. It is better to keep it at this level of abstraction.

        Let me try again: What rates are we talking about? Say the ECS to a doubling of CO2 is 3C and the concentration of CO2 doubles: how long does it take the ocean surface to warm up by 2.8C? A year? two years?


      • Matthew R Marler | November 14, 2013 at 11:43 pm |

        Let me try again: What rates are we talking about? Say the ECS to a doubling of CO2 is 3C and the concentration of CO2 doubles: how long does it take the ocean surface to warm up by 2.8C? A year? two years?

        Transient diffusion does not follow first-order, i.e. damped exponential, dynamics. There is no such thing as a conventional time constant when dealing with diffusion.

        The fast transient occurs quickly but the rest of the warming occurs slowly. Tell me that you understand this at least, because it is a very elementary aspect of diffusion theory.
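A minimal numeric sketch of that contrast, assuming the textbook semi-infinite diffusive slab step response, 1 − exp(t/τ)·erfc(√(t/τ)). The time scale τ is an arbitrary unit here; none of these numbers come from CSALT.

```python
# Sketch (not the CSALT code) contrasting a first-order exponential
# response with a diffusive one. The diffusive form rises faster at
# first but then crawls toward equilibrium as a fat tail
# ~ 1 - 1/sqrt(pi * t/tau). tau is an assumed, arbitrary time unit.
import math

def first_order(x):   # x = t / tau
    return 1.0 - math.exp(-x)

def diffusive(x):     # x = t / tau (keep x modest to avoid overflow)
    return 1.0 - math.exp(x) * math.erfc(math.sqrt(x))

for x in (0.1, 1.0, 5.0, 25.0):
    print(f"t/tau={x:5.1f}  exp: {first_order(x):.3f}  diff: {diffusive(x):.3f}")
```

At 25 time constants the exponential system is essentially at equilibrium while the diffusive one is still roughly 11% short, which is the "fat tail" point being made above.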

      • Matthew R Marler

        WebHubTelescope: Transient diffusion does not follow first-order, i.e. damped exponential, dynamics. There is no such thing as a conventional time constant when dealing with diffusion.

        The fast transient occurs quickly but the rest of the warming occurs slowly. Tell me that you understand this at least, because it is a very elementary aspect of diffusion theory.

        I understand that the fast transient occurs quickly and that the attainment of the equilibrium, should it exist, takes long.

        Now back to my question: What rates are we talking about? Say the ECS to a doubling of CO2 is 3C and the concentration of CO2 doubles: how long does it take the ocean surface to warm up by 2.8C? A year? two years?

        In systems that are actually known, like chemical kinetics and pharmacokinetics, the answer is obtained fairly simply. Near equilibrium or near steady-state can occur in some compartments in less than 1% of the time it takes to achieve near equilibrium or near steady-state in all compartments.

      • You should read that paper by Caldeira and Myhrvold. They show various temporal profiles of the transient thermal response.

      • Matthew R Marler

        WebHubTelescope: You should read that paper by Caldeira and Myhrvold. They show various temporal profiles of the transient thermal response.

        Sure thing. If you ever decide what you think best, let us know. With luck, and I am pretty sure luck will be required, it will comport well with your model.

      • Matthew R Marler

        WebHubTelescope: You should read that paper by Caldeira and Myhrvold. They show various temporal profiles of the transient thermal response.

        This one? N P Myhrvold and K Caldeira 2012 Environ. Res. Lett. 7 014019 doi:10.1088/1748-9326/7/1/014019
        Greenhouse gases, climate change and the transition from coal to low-carbon electricity

        Here is a quote: We estimated the change in surface temperature, ΔT, by using a simple energy-balance model. The radiative forcing ΔF supplies additional energy into the system. Radiative losses to space are determined by a climate feedback parameter, λ. We used λ = 1.25 W m^-2 K^-1 [6–8], which yields an equilibrium warming of 3.18 K resulting from the radiative forcing that follows a doubling of atmospheric CO2 from 280 to 560 ppmv. The approach to equilibrium warming is delayed by the thermal inertia of the oceans. We represented the oceans as a 4 km thick, diffusive slab with a vertical thermal diffusivity k_v = 10^-4 m^2 s^-1 [8]. Other parameter choices are possible, but variations within reason would not change our qualitative results, and this approach is supported by recent tests with three-dimensional models of the global climate response to periodic forcing [9]. Our simple climate model treats direct thermal heating in the same way as radiative heating; heat either mixes downward into the ocean or radiates outward to space. To isolate the effects of a transition to LGE energy systems, we consider GHG emissions from only the power plant transition studied. Initial, steady-state atmospheric GHG concentrations are set to pCO2 = 400 ppmv, pCH4 = 1800 ppbv, and pN2O = 320 ppbv, at which ΔF = ΔT = 0. (Use of other background concentrations for GHGs would not alter our qualitative results (SOM text SE1.3 available at stacks.iop.org/ERL/7/014019/mmedia).)

        They do not answer my question either: If the equilibrium warming effect is 3.18K, then how long after the doubling occurs will the surface temperature meet or exceed a specified value, such as 2.88K of warming? Notice that they say “heat either mixes downward into the ocean or radiates outward to space”, but there must be some non-negligible amount of heat that is transferred into evaporation, and then convection from the surface to the upper troposphere.

        This was the 4th time in one thread that you wrote about a lot of other stuff without answering the question or admitting that you don’t know the answer. The question is obviously important as it relates to the accuracy of your model and other lnCO2 models, and the relevance of the past temperature change at the surface to the future temperature change at the surface — all that “warming in the pipeline” that may hardly affect the surface temperature at all.

      • Matthew: “you wrote about a lot of other stuff without answering the question or admitting that you don’t know the answer.”

        Doncha hate it when that happens?

      • Matthew R Marler,

        I assume you noticed that they tested their results with a 3 dimensional model.

        As usual, use a model to test a model. If it doesn’t agree, change one or other until they agree. Success!

        Live well and prosper,

        Mike Flynn.

      • Marler said:

        “This was the 4th time in one thread that you wrote about a lot of other stuff without answering the question or admitting that you don’t know the answer. “

        I know the answer.
        (1) On LAND, the equilibrium will be reached quite quickly, within years of the forcing, moderated very slowly by gradual ocean changes

        This is the relevant passage by Caldeira and Myhrvold


        We speculate that the 3-exp fits were better able to represent the rapid adjustment of land-surface (and land-plant) temperatures. Indeed, substantial temperature changes over land are observed to occur within days in climate model simulations of step-function changes in radiative forcing [20]. The median value of the shortest time constant in the 3-exp fits was 0.6 years, which is less than the annual resolution used in this analysis.

        (2) In the OCEAN, the equilibrium is reached asymptotically as a fat-tail. This means that at the surface, a fast transient to the TCR is reached quickly according to diffusion kinetics, followed by a gradual climb to the ECS. This could easily take hundreds of years, partly because that is the way Fickian fat tails work, and mainly because of how long it will take for the ocean to sink all the heat necessary for the temperature of the bulk to rise.

        This is another relevant passage by Caldeira and Myhrvold


        For many purposes, it may be a sufficient approximation to use a one-dimensional heat-diffusion ocean model having just one degree of freedom—in effect, to approximate warming as a simple heat-diffusion process.

        Why you think I am being evasive, I don’t know. I have worked out process diffusion equations my entire career. The SiO2 that is grown on the MOSFET devices that constitute your computer’s RAM and CPU is grown according to the Fickian diffusion kinetics that former Intel CEO Andy Grove wrote up in his PhD thesis in the early 1960s. You wouldn’t ask a semiconductor engineer how long it would take to grow a thickness of an oxide unless you were being very specific. To grow a micron-thick oxide doesn’t take too long, but to grow a millimeter-thick oxide will likely take millions of times longer using conventional techniques. It is actually insane to even think about that once you realize how diffusion and bulk effects work.

        The ocean is a huge heat sink and it will equilibrate very slowly to an external forcing.

        Read this again:
        http://contextearth.com/2013/11/13/simple-models-of-forced-warming/
        Also read the paper on my blog called “Diffusive Growth”.

      • Matthew R Marler

        WebHubTelescope: In the OCEAN, the equilibrium is reached asymptotically as a fat-tail.

        Of course! The equilibrium is always an asymptotic result. That is why I have asked how long it takes to get 90% of the way to the equilibrium — that is a finite time. Even with simple exponential decay the final state is an asymptotic result, but after 5 half-lives the system is 97% of the way there.

        For many purposes, it may be a sufficient approximation to use a one-dimensional heat-diffusion ocean model having just one degree of freedom—in effect, to approximate warming as a simple heat-diffusion process.

        No denying that. But what may be a sufficient approximation for many purposes has to be shown to be a sufficient approximation for any particular purpose before its results are relied upon. That’s why there is always so much testing by as many means possible of whether the approximations that have been used are good enough for present purposes.

        Why you think I am being evasive, I don’t know. I have worked out process diffusion equations my entire career.

        I respect your many years working on related problems. I have worked many years on non-linear differential equation modeling of non-stationary multivariate biological time series, including repeated-oral-dosing and continuous intravenous infusion of drugs. The calculation I have requested here I have done many times. In this case, the answer can not be computed: if the calculated equilibrium change is 3C, how long does it take for the surface to change 90% of the way toward the equilibrium value? How long it takes the deep ocean to move 90% of the way toward the new “equilibrium” value is a separate question.

        That’s assuming that “equilibrium” is a relevant concept in this case: the current “equilibrium” temp of the earth is 288K, and the hypothetical new equilibrium value after a doubling of CO2 is 291K, and the deep ocean will never equal either of those.

        So back to my question and to your model. If the doubling of CO2 actually causes a 3C increase in the earth surface mean temperature, how long does it take the earth surface mean temperature to increase 2.7C (that’s 90% of the way toward equilibrium, but any value could be used.)? It’s under a 1% increase of downwelling LWIR, way under a 1% increase in total radiant energy at the surface. In order for your model to be accurate, it has to be a short period of time, such as 1 year, and in that case any “warming in the pipeline” will have little effect at the surface.

        That assumes that the “equilibrium” is even relevant. It looks to me like an interesting calculation that has become a great distraction from what is needed.
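One way to make the "time to 90%" question concrete is to compute it for two assumed response shapes: first-order exponential relaxation versus the textbook diffusive slab response 1 − exp(t/τ)·erfc(√(t/τ)). The sketch below uses an arbitrary time unit τ; it illustrates that the answer depends entirely on which response model you assume, and is not a claim about the real climate.

```python
# Numerical sketch of "time to 90% of equilibrium" under two assumed
# response shapes (tau is an arbitrary time unit, not a fitted value).
import math

def diffusive(x):  # x = t / tau; semi-infinite diffusive slab step response
    return 1.0 - math.exp(x) * math.erfc(math.sqrt(x))

def diffusive_time_to(frac, lo=1e-9, hi=200.0):
    """Bisect for x where the (monotonic) diffusive response reaches frac."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if diffusive(mid) < frac:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t90_exp = math.log(10.0)           # first-order system: ~2.3 tau to 90%
t90_diff = diffusive_time_to(0.9)  # diffusive system: roughly 30 tau
print(f"exponential: {t90_exp:.1f} tau, diffusive: {t90_diff:.1f} tau")
```

With the same nominal time constant, the diffusive system takes roughly an order of magnitude longer to reach 90%, which is why the two sides of this exchange keep talking past each other.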

      • Matthew R Marler

        NW: Doncha hate it when that happens?

        I think in the AGW debate the blatant non-answering of important questions, like the gross exaggeration of the importance of every storm and fire, is a losing strategy. It’s one of the reasons that the purveyors of claims of catastrophic CO2-induced global warming are not prevailing in the public policy debates. afaict

      • Matthew Marler, Web answered that the land responds almost immediately to forcing changes (within a year according to his evidence). We see a near 4 C per doubling TCR when the last 30 years are taken over land, if we can attribute the warming to the CO2 change. This is mainly in the internal and northern continental areas. Everyday evidence of the diurnal cycle shows this difference between land and water. The only mitigating factor for the land response is that air comes in from the ocean areas.

      • Right JimD, Marler is simply trying to apply rhetorical devices to win the argument.

        Practically speaking the ECS of 3C will never be reached, and even 90% will not be reached for the global average any time soon. Yet what matters is what the land temperature is doing, and how much accumulated heat the ocean is absorbing.

        The latter especially is verification that the GHG is doing its physically expected thing, based on physical theorizing by physical scientists who have an advanced education in the physical sciences.

        Sure, we are fortunate to have a heat sink that big on earth, but like a heat sink that is placed too far away from your computer’s CPU, it’s not going to do a lot of good sitting that far away from the midwest and the middle of inner Siberia !

      • IR is absorbed by water only at the surface, and even then the water instantaneously vaporizes. Very little of the energy from IR makes it into the bulk volume of the ocean. SWR is the primary source of heat for the ocean.

      • Yes, if we imagine a global 100% land cover, the TCR would have tracked the ECS simply because there is no significant storage in the land surface for these time scales. The colder ground layers below have almost no surface influence, due to poor conduction, while upwelling and mixing parts of the obviously dynamical ocean circulation do.

      • ” Matthew R Marler | November 16, 2013 at 12:48 pm |

        I think in the AGW debate the blatant non-answering of important questions, like the gross exaggeration of the importance of every storm and fire, is a losing strategy. “

        I think the problem is one of misplaced projection attribution.
        If you can find one instance in my hundreds of blog posts that I have written in the last ten years that has ever made a big deal out of anecdotal information, I would like to hear about it.

        That is actually why I don’t care for the stuff that Mr. ClimateReason does in his “research”. All he does is put together subjective, qualitative anecdotal information and treat it as if it were actual science.

        The only thing that I know about the atmosphere and its propensity for more violent storms is that the specific humidity has increased by 4% since 1970
        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch3s3-es.html

        This agrees with what one can expect from an Arrhenius thermal rate activation based on increasing average SST values. If the highest wind speed in a storm is proportional to water content (increased updraft buoyancy), then a 4% increase in humidity could change a 190 MPH hurricane into a 190 × 1.04 = 198 MPH hurricane.

        It certainly won’t make a storm weaker, eh?
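The arithmetic in that claim, with the linear wind-versus-humidity scaling flagged as the commenter's assumption rather than settled physics:

```python
# The 4% humidity scaling above, taken at face value (the linear
# wind-vs-humidity relationship is an assumption, not settled physics).
base_wind_mph = 190.0
humidity_increase = 0.04
print(round(base_wind_mph * (1.0 + humidity_increase)))  # 198
```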

      • Web

        I do nothing of the sort, and if you would just look beyond your well-known prejudices and read what is written you might find the material useful. Historical climatology has a very long pedigree and is a useful adjunct to, but does not supplant, other scientific information.
        Tonyb


      • jim2 | November 16, 2013 at 1:22 pm |

        IR is absorbed by water only at the surface, and even then the water instantaneously vaporizes. Very little of the energy from IR makes it into the bulk volume of the ocean. SWR is the primary source of heat for the ocean.

        Debunked and discredited assertions originally made by Fred Singer

        A Closer Look at Sea Surface Temperature Trends: How Effective is Greenhouse (GH) Warming of SST?
        http://www.climatescience.gov/workshop2005/posters/P-GC2.9_Singer.S.pdf

      • So TonyB, was Marler’s accusation “like the gross exaggeration of the importance of every storm and fire, is a losing strategy.” directed at you?

        Some medieval diarist exaggerating the fury of a particular storm and it goes in the ClimateReason database. Spare me.

      • “Debunked and discredited assertions originally made by Fred Singer”

        WHT pronounces this debunked. No proof. Right. I’m a chemist. I can assure you IR won’t penetrate very far through water.

      • The results are given of an experimental investigation of the evaporation of large water drops in the field of λ = 10.6 μ laser radiation of 40–120 W/cm2 power density. The results obtained are compared with theoretical estimates. It is shown that the efficiency of the process of evaporation of a drop in the field of a laser radiation of λ = 10.6 μ is in the range 40–72%. Small drops were found to fly apart under the action of focused laser radiation and this could considerably alter the cross section of the drop being evaporated.

        http://iopscience.iop.org/0049-1748/3/5/A03

      • Web

        Over the years severe storms have been examined by numerous researchers and their likely severity or provenance quantified by such as the Met Office.

        One-off, non-cross-referenced storms are always of interest as well, but take second place to ones that can be verified and may have had a scientific study made of them.

        In ‘The Long Slow Thaw’ I quoted at length some 30 science papers and referenced another hundred, but you seem to conveniently ignore all that.

        Tonyb

      • It doesn’t “immediately vaporize the water”. What kind of idiotic assertion is that?

        The water at the surface is constantly being agitated and this creates a diffusional effect which will randomly walk the infrared-heated surface volume downward at a diffusivity of around 1 cm^2/second. Jim Hansen understood all the effects of vertical eddy currents and effective thermal diffusivity in 1981.

        Of course some of this heat will get re-released as the latent heat of vaporization and transferred upward into the atmosphere, but you cannot say that it all gets vaporized. That is just ridiculous.

        see:
        http://www.realclimate.org/index.php/archives/2006/09/why-greenhouse-gases-heat-the-ocean/

        Acknowledgements:
        The need for such an analysis grew out of a series of discussions with S. Fred Singer.

        There you go jim2
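A rough scale check of the 1 cm^2/s eddy-diffusivity figure cited above. This is illustrative only; real mixed-layer transport is far more complicated than a single effective diffusivity.

```python
# Random-walk penetration depth L ~ sqrt(D * t) for an effective
# eddy diffusivity D = 1 cm^2/s, as cited above. Illustrative only.
import math

D = 1.0e-4  # m^2/s  (= 1 cm^2/s)
for label, seconds in [("1 hour", 3.6e3), ("1 day", 8.64e4), ("1 year", 3.156e7)]:
    depth_m = math.sqrt(D * seconds)
    print(f"{label:>6}: ~{depth_m:.1f} m")
```

On these numbers, heat absorbed at the skin can plausibly random-walk meters down in a day and tens of meters over a year, which is the point being made against the "it all just evaporates" claim.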

      • Tony B, I don’t do subjective and qualitative anecdotal reasoning because guys like Matthew R. Marler will come after me and accuse me of “gross exaggeration of the importance of every storm and fire … a losing strategy.”

        Of course Matthew R. Marler won’t go after you TonyB because he is a member of your team. And so it goes.

      • ” jim2 | November 16, 2013 at 2:10 pm |

        The results are given of an experimental investigation of the evaporation of large water drops in the field of λ = 10.6 μ laser radiation of 40–120 W/cm2 power density. The results obtained are compared with theoretical estimates. It is shown that the efficiency of the process of evaporation of a drop in the field of a laser radiation of λ = 10.6 μ is in the range 40–72%. Small drops were found to fly apart under the action of focused laser radiation and this could considerably alter the cross section of the drop being evaporated. ”

        So the water drops are suspended in the experiment and so can only release energy they collect by radiation and latent heat of vaporization (or by other surface tension releasing mechanisms).

        Surface water is connected to the deep you realize. The energy has to go somewhere you know.

        jim2, who exactly do you imagine is your audience? Dunces?

      • Matthew R Marler

        Jim : We see a near 4 C per doubling TCR when the last 30 years are taken over land, if we can attribute the warming to the CO2 change.

        I have never denied that the dry land surface warms faster than the ocean surface. My question is, if 3C is the projected increase in the equilibrium temperature of the earth surface, how long does it take for the spatio-temporal mean surface temp to increase 2.8C?

        WebHubTelescope: Practically speaking the ECS of 3C will never be reached and even 90% will not be reached for the global average any time soon.

        No rhetorical tricks, but repeating a question and an implication of your model. According to your model, the 2015 mean surface temp will be proportional to the ln of the 2015 CO2 concentration; and so also for the 2075 mean surface temp and 2075 CO2 concentration. Your model has no lag, though you believe there is a lag. “Practically speaking”, either the surface response gets to near the equilibrium value fairly quickly (e.g. Doc Martyn’s half year), or your model is wrong — not just your model, but every model in which current T is proportional to current lnCO2; for which the partial derivative of T wrt CO2 is 0 when CO2 is constant.

        Aside from the fact that you want to avoid answering the question I posed, despite the fact that global surface mean T is important enough for you to model it, you do not want to face the fact that two of your assertions (taking your model as an “assertion”) can’t both be accurate.

      • Matthew R Marler

        WebHubTelescope: guys like Matthew R. Marler will come after me and accuse me of “gross exaggeration of the importance of every storm and fire … a losing strategy.”

        No! To you I criticize only what you write, and I quote it exactly.

        tonyb and I are not on the same team; we appear to be coordinated because we are responding similarly, though independently, to the “invisible hand” of the information marketplace.

        Naomi Oreskes, in the last couple of days, exaggerated the importance of Typhoon Haiyan in an editorial in the LA Times. I was criticizing Naomi Oreskes, not WebHubTelescope.

      • Matthew R Marler

        WebHubTelescope: The water at the surface is constantly being agitated and this creates a diffusional effect which will randomly walk the infrared-heated surface volume downward at a diffusivity of around 1 cm^2/second.

        Evaporation occurs continuously at the ocean surface, so that diffusional effect does not account for all of the radiant energy incident upon the surface. So the question is: given the ongoing evaporation and the diurnally varying incident radiation as they are now, what happens if there is a 3.7 W/m^2 increase in the incident radiation? That’s a tiny fraction of the night-time radiation, and a tiny fraction of the day/night difference in radiation, and an even tinier fraction of the daytime radiation; but it must be admitted that the “tiny fraction” varies considerably from the Equator to the poles.

      • “According to your model, the 2015 mean surface temp will be proportional to the ln of the 2015 CO2 concentration; and so also for the 2075 mean surface temp and 2075 CO2 concentration.”

        You can try it with CSALT and put in a first-order CO2 lag right there in the interface.
        http://entroplet.com/context_salt_model/navigate

        It will just make the TCR value larger because the CO2 is being deferred from making an effect until a later time. That is the problem with a single exponential lag (i.e. first-order), and what Caldeira and Myhrvold are discussing. You need at least 2 or 3 exponentials of differing time constants to be able to piece-wise model the temporal behaviors.
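        As a sketch of the point about needing more than one exponential: below is a generic two-time-constant step response. This is not CSALT itself; the 60/40 split and the 4-year and 200-year time constants are invented for illustration.

```python
import numpy as np

def step_response(t, amps, taus):
    # Step response of a sum-of-exponentials model:
    #   T(t) = sum_i a_i * (1 - exp(-t / tau_i)),
    # where sum(amps) is the equilibrium response to a unit forcing step.
    return sum(a * (1.0 - np.exp(-t / tau)) for a, tau in zip(amps, taus))

t = np.arange(0, 301.0)  # years
# Hypothetical split: 60% of the response with a ~4 yr (mixed-layer)
# time constant, 40% with a ~200 yr (deep-ocean) time constant.
T = step_response(t, amps=[0.6, 0.4], taus=[4.0, 200.0])

print(round(T[10], 2))   # fast mode nearly done, slow mode barely started
print(round(T[300], 2))  # after three centuries, still short of equilibrium (1.0)
```

        A single exponential fitted to the early decades of such a curve misattributes the slow tail, which is the lag problem being discussed.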

        I have not added the diffusional response to the CSALT interface yet because I don’t want to do that until the time is right. Showing the fast transient TCR is enough for me right now.

        The issue is that Team Skeptic has these yahoos such as Roy Spencer and Nic Lewis that are intentionally trying to drop the TCR to very low values — and I want to stay conservative so that we can at least debunk their junk.

        And so we also have you, Mathew Marler, that keeps trying to catch me on some semantic trap that is completely invalid. It’s annoying but keeps me at least engaged.

      • Matthew Marler, with continued forcing I think the land would overshoot the 3 C before the global average reaches 2.8 C. Currently we are lagging maybe 0.5 C behind the equilibrium, and that gap is not closing due to the rapid emission rate. For the last 40 years the ocean heating rate has been 0.125 C per decade, which seems to be some kind of limit because it is distinctly falling behind the land. At this rate it takes 80 years per degree. Eventually this might put a brake on the land warming, but so far they are diverging fast with land warming 0.25 C per decade (40 years per degree) in the same period. Consistently, the global average trend has been 0.16 C per decade (60 years per degree). (Numbers from HADCRUT4, CRUTEM4, HADSST3).
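        The “years per degree” figures above are just the reciprocals of the trends; a quick check of the quoted numbers:

```python
# A linear trend of d degrees per decade implies 10/d years per degree.
def years_per_degree(deg_per_decade):
    return 10.0 / deg_per_decade

ocean = years_per_degree(0.125)   # 80 years per degree
land = years_per_degree(0.25)     # 40 years per degree
globe = years_per_degree(0.16)    # 62.5, quoted above as roughly 60
print(ocean, land, globe)
```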

      • Matthew R Marler

        WebHubTelescope, quoting me: “I think in the AGW debate the blatant non-answering of important questions, like the gross exaggeration of the importance of every storm and fire, is a losing strategy.”

        In that quote, the “non-answering of important” questions might be a criticism of you, though I meant it as a statement of your responses on this thread. “Gross exaggeration of every storm” was a different “strategy”, unrelated to anything that you wrote.

      • Matthew R Marler

        WebHubTelescope: And so we also have you, Mathew Marler, that keeps trying to catch me on some semantic trap that is completely invalid. It’s annoying but keeps me at least engaged.

        I don’t perceive a semantic trap. I repeat a simple question: given that 3C or whatever is the “equilibrium” change, how long will it take for the globally averaged surface mean T (which is in your model) to increase 2.7C? And I repeat a related simple question: if there is “warming in the pipeline”, how much warming of the surface mean T will there be? And I repeat a fairly simple assertion or question: if ECS is a lot different from TCS, I do not see how any of those models that have CO2 only through lnCO2 can be accurate.

        And lastly, I repeat the result of a simple derivative calculation: if T at time t is proportional to lnCO2 at time t (with high enough accuracy to be useful for planning for the future), then dT/dt = 0 whenever dCO2/dt = 0.
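        In symbols (my notation; lambda is the sensitivity per unit lnCO2 and C_0 a reference concentration), the claim is:

```latex
T(t) = \lambda \ln\frac{C(t)}{C_0}
\quad\Longrightarrow\quad
\frac{dT}{dt} = \frac{\lambda}{C(t)}\,\frac{dC}{dt},
```

        which vanishes whenever dC/dt = 0; a model of this form has no warming in the pipeline once CO2 stops rising.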

      • Matthew R Marler

        Jim D: with continued forcing I think the land would overshoot the 3 C before the global average reaches 2.8 C. Currently we are lagging maybe 0.5 C behind the equilibrium, and that gap is not closing due to the rapid emission rate. For the last 40 years the ocean heating rate has been 0.125 C per decade, which seems to be some kind of limit because it is distinctly falling behind the land. At this rate it takes 80 years per degree. Eventually this might put a brake on the land warming, but so far they are diverging fast with land warming 0.25 C per decade (40 years per degree) in the same period. Consistently, the global average trend has been 0.16 C per decade (60 years per degree). (Numbers from HADCRUT4, CRUTEM4, HADSST3).

        My expectation does not match that, at least not today, it being closer to what Doc Martyn wrote. I can see how you might be right. I hope I can live long enough to find out.

      • ” Jim D | November 16, 2013 at 4:08 pm |

        Matthew Marler, with continued forcing I think the land would overshoot the 3 C before the global average reaches 2.8 C. “

        I am sure the land will get higher than the fast transient indicates, since the water vapor coming from the SST heating is contributing to the land increase. As the SST continues to creep up, the land will further warm.

        These are part of the medium-slow feedbacks that Hansen talks about. Albedo changes are the very-slow feedbacks and those further contribute to the uncertainty on the high side.

      • Matthew R Marler

        WebHubTelescope: I am sure the land will get higher than the fast transient indicates, since the water vapor coming from the SST heating is contributing to the land increase. As the SST continues to creep up, the land will further warm.

        These are part of the medium-slow feedbacks that Hansen talks about. Albedo changes are the very-slow feedbacks and those further contribute to the uncertainty on the high side.

        OK. Faster is faster than slower; and sooner is sooner than later. Water vapor effects contribute to uncertainty in albedo changes.

      • That’s why all the uncertainties are on the high-side of the PDF.

        The low-side of the PDF is being attacked by the annoying ankle-biters and that’s really what these simple robust models help to solidify. Right now the low-side barrier is a TCR of 2C and if one sees anything much lower than this, the analysis is suspect.

      • Matthew R Marler

        WebHubTelescope: The low-side of the PDF is being attacked by the annoying ankle-biters

        Til next time, be of good cheer.

  23. To krige or not to krige,
    That is the question.
    Weather ’tis better to derive
    A best linear estimation of
    assumptions of covariance
    based on Gaussian theorem
    or take no action against a sea
    of troubles, puzzles the will.
    And makes us rather bear
    The uncertainties we have
    than fly to others that
    we know not of.
    Must give us pause …

    With apologies ter the Bard.

  24. So, not having access: did Cowtan and Way in essence perform co-kriging with the hybrid scheme? It also sounds like, unlike BEST, they used the error estimates from the kriging and not an external scheme. Just curious: does anybody here ‘in the know’, i.e., with a kriging background and paper access, know? Nice to see some cross-validation, though again their use of the term may differ from my ‘conventional’ expectations.

    …interesting development…
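    For anyone unfamiliar with the term being argued over: ordinary kriging itself is compact enough to sketch. This is not Cowtan and Way’s code (the paper is paywalled); it is a toy 1-D version with an invented exponential variogram and made-up anomaly values, showing where the kriging error estimate comes from.

```python
import numpy as np

def ordinary_krige(x_obs, z_obs, x0, gamma):
    """Toy 1-D ordinary kriging: solve for weights that minimize the
    estimation variance subject to summing to 1 (Lagrange-multiplier row)."""
    n = len(x_obs)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(np.abs(x_obs[:, None] - x_obs[None, :]))
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.abs(x_obs - x0))
    w = np.linalg.solve(A, b)
    estimate = w[:n] @ z_obs
    krige_var = w @ b  # kriging variance; the Lagrange term is included
    return estimate, krige_var

# Invented exponential semivariogram: sill 1, range ~3 distance units.
gamma = lambda h: 1.0 - np.exp(-h / 3.0)

x = np.array([0.0, 1.0, 4.0, 5.0])   # observation locations
z = np.array([1.2, 1.4, 0.9, 0.8])   # observed anomalies (made up)
est, kv = ordinary_krige(x, z, x0=2.0, gamma=gamma)
print(est, kv)
```

    Co-kriging extends the same linear system with cross-variograms between two variables (e.g. a surface series and a satellite series), which is presumably what the question about the ‘hybrid’ scheme amounts to.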

  25. High accuracy in sea surface temperature (SST) measurement is necessary because the world’s oceans are such an enormous reservoir of heat: very small changes in SST hide very large changes in stored heat. So it is important to know the sources of measurement errors and correct them. Because thermometers are not uniformly distributed over the oceans and land, particularly over the Arctic and Antarctic, sampling errors occur and have to be corrected.

    However, none of the errors so far discovered significantly alters Ed Hawkins’ comparison of global average surface temperature with the IPCC-sponsored models. The models continue to exaggerate temperature after 1997. Since the IPCC models can’t replicate present temperatures, what confidence can we have in their future predictions? It appears that basically the IPCC has failed to come to terms with the on/off nature of climate change.

  26. The climate consensus marches backward from absolute knowledge to ever increasing ignorance.

    Most scientific disciplines start from virtual scratch, and slowly build a base of knowledge that dispels the initial ignorance.

    In climate science, we had a couple-year incubation period beginning with Hansen’s 1988 congressional theater of the absurd. In no time, “we” knew what the temperature was to within tenths of a degree. We knew that currently, then tens, hundreds and thousands of years into the past. Almost simultaneously we knew future temperature trends to within tenths of a degree per decade. And sea level rise to within a few millimeters a year.

    Not to mention the ability to predict droughts, and famines, and pestilence and earthquakes, and lions and tigers and bears oh my.

    But as climate science continues its march backwards into reality, we finally “learn” that with hundreds of millions of dollars worth of sophisticated climate models, and an additional 25 years of research, we now know that we know less than we thought we knew. Albeit with much more certainty. (Excuse me while I laugh my a** off for a minute.) (And don’t get me started on the recent warmist fad that Hansen’s 1981 model was more accurate than his 1988 model – which only proves he was becoming more wrong about climate as he went along.)

    Is it nice to see some peer reviewed papers admitting that kriged, assumed, extrapolated, estimated means of anomalies with poor global coverage do not give us “global average temperature” to within tenths of a degree? Sure.

    But who seriously needed formal papers to tell them that. This global average temperature myth is like Obama’s “You can keep your plan if you like it. Period!”

    Everybody knew he was lying through his teeth.

    Is there any commenter here, warmist or skeptic, who actually believes we know the global average temperature of the entire Earth climate system with anything approaching the precision claimed? (And feel free to substitute “total global heat content” for GAT.)

    • I have no problem using a convenient fiction, like money or average temperature, if it is useful.
      This damned paywall is a bit of a blow with respect to analyzing the paper.

      • If I walk into a dealership with enough cash, I can walk out with a Ferrari. If I try to govern a nation with CAGW’s inflated claims of precision, I will bankrupt it.

    • It amazes me that folks still continue to ignore the words of people in the instrumentation and metrology fields. Engineers and technicians from those fields frequently post that the claimed accuracy of climate temperature readings is much too high. These are the folks who have studied, calibrated, and worked with the instruments used by climate scientists. Dismissing their comments as ignorance of statistical methods is plain silly. Folks, taking the average of larger amounts of inaccurate data simply gives you a more precise average of inaccurate data. The original accuracy specifications of the instruments and the test method must be accounted for, not ignored.
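      That distinction between random and systematic error is easy to demonstrate numerically. A minimal sketch (all numbers invented): averaging many readings shrinks the independent noise, but a shared calibration bias survives untouched.

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 15.0   # "true" temperature (arbitrary)
bias = 0.3          # shared calibration bias: does NOT average away
noise_sd = 0.5      # independent random error: DOES average away

for n in (10, 1000, 100000):
    readings = true_value + bias + rng.normal(0.0, noise_sd, size=n)
    mean = readings.mean()
    # The mean converges to true_value + bias, never to true_value.
    print(n, round(mean, 3))
```

      The standard error of the mean falls off as 1/sqrt(n), which is the statistical claim; the engineers’ point above is that the bias term stays put no matter how large n gets.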

      The point is that no instrumentation engineer would ever sign off on a claim that instrument calibration errors and drift amounts may be considered random. No instrumentation technician would sign off on the accuracy of an instrument until you brought it to him for a calibration check. Why would they be so picky about something that climate scientists seem to think is no problem? That is because those engineers and technicians see the problems inherent in obtaining and maintaining high-accuracy measurement capabilities.

      In fields like instrumentation in nuclear power plants, test instruments and gauges are run through a calibration lab both before and after a field calibration or test procedure. The instrument is checked before going out for the test to make sure it starts out within stated tolerance and then again after the test to verify the instrument remained within tolerance during the test. (It might be of interest that for safety analysis in nuclear power plants, using expensive high accuracy and high reliability sensors, overall instrument accuracy is assumed to be no better than +/- 10% of full range.)

      SST and sea water temperatures at depth accurate to 0.001 degree Centigrade over months of operation? Not gonna find any technician to sign off on that!

    • GaryM
      re; “You can keep….well, not much”
      A comment posted at Yahoo of all places.

      rams1956
      “While suturing a cut on the hand of a 75 year old rancher, the doctor struck up a conversation with the old man. Eventually the topic got around to Obama and his role as our president. The old rancher said, ‘Well, ya know, Obama is a “Post Turtle”.

      Not being familiar with the term, the doctor asked him, what a ‘post turtle’ was. The old rancher said, ‘When you’re driving down a country road and you come across a fence post with a turtle balanced on top, that’s a ‘post turtle’.

      The old rancher saw the puzzled look on the doctor’s face so he continued to explain. “You know he didn’t get up there by himself, he doesn’t belong up there, he doesn’t know what to do while he’s up there, he’s elevated beyond his ability to function, and you just wonder what kind of dumb #$%$ put him up there to begin with.” “

      Instantly thought of many gravy train post grads.

  27. http://sunshinehours.files.wordpress.com/2013/06/dailynormals_2013-05.gif

    “Using the stations in Canada with Environment Canada calculated anomalies, here is the month of May visualized using the mean temperature for each station for each day.

    You might have to click on the image or refresh the page to restart it.

    The black circle in the top left corner represents a 5 Celsius anomaly from the 1971-2000 average.

    Blue are below normal. Red above.”

    How would satellite data actually predict the data I have shown considering how much it varies from day to day in magnitude and sign?

    Can weather be predicted by satellite at every one of those locations?

    No way.

    Why would anyone think a satellite is a replacement for a ground station?

  28. @Captdallas

    I was at the site before posting my comment. No answer there and I really would not expect it to be there. But thanks for the link anyway. I agree they did a nice job with side dishes at the site but the publisher has the meat.

    With co-kriging being around for decades, the question would be easily answerable in the form of a simple declarative sentence by someone familiar with both geostatistics at a working level and the paper and any supplemental text. If someone can say, ‘Yes, they co-kriged the SST and UAH’, then I would know what they did in the ‘hybrid’ approach. If the answer is ‘No’, then ‘hybrid’ remains uncertain (at the level of my interest) behind the paywall: no heartburn, just uncertainty. Then maybe I would poke around in the code, but probably not. Lately, I’ve been looking at some variograms using the USA NCDC data and that suffices to keep me busy.

    Given what I can learn about the work from a distance it looks interesting. I’ll just wait and see what unfolds.

    Regards, mwgrant

  29. Step 1: Take a dataset riddled with errors and biases, leading to error bands larger than the effect that you want to “find”.

    Step 2: Declare that the dataset is wonderful as is, and perfectly good for informing thousand trillion dollar decisions.

    Step 3: Selectively eliminate some of the errors in the dataset that drive the results toward your pre-selected “finding”. It’s worse than we thought!

    Step 4: Declare the newly improved dataset is wonderful as is, and perfectly good for informing thousand trillion dollar decisions, while the old dataset is now hopelessly biased and useless for decision making, especially if the recent data in that version is trending in inconvenient directions.

    Rinse, repeat.

  30. So, once the data is adjusted properly to match the theory, we are safely back on course for catastrophe, unless Something Is Done Immediately.

    Fortunately, it appears that the entire US government has been spurred into action and within a couple of years after the recently announced onslaught of regulations and taxes, we will learn the following:

    a. The results of the first, rudimentary efforts at controlling CO2 are in and they show that the Climate Scientists were right all along: global warming has been slowed, as predicted, and the Big Carbon shills posing as ‘skeptical scientists’ have been discredited, once and for all.

    b. Although the reduction in CO2 achieved by our tentative first efforts has SLOWED global warming, it has also confirmed that anthropogenic CO2 continues to pose an existential threat that MUST be confronted. Therefore, we will immediately begin implementing the additional taxes and regulations required to achieve the 90+ percent reduction in anthropogenic CO2 that the Climate Experts have been recommending for years, but which have been blocked by extremists financed by the Carbon Industry. We have tolerated the stonewalling of these denialists long enough; we can no longer afford to wait. And won’t.

    Bob Ludwick

  31. It is certain that: if it gets warmer than normal, for any reason, oxygen & nitrogen expand INSTANTLY and release the extra heat in a jiffy. GLOBAL warming is a concocted myth, by dishonest people who don’t know how to do anything positive for society: http://globalwarmingdenier.wordpress.com/

    • stefenthedenier,

      But, but . . . according to the Book of Warm, oxygen and nitrogen are non radiative gases. If they don’t radiate, they are obviously at absolute zero. Due to the magical radiative powers of CO2, we only “think” that oxygen and nitrogen are actually radiating, and only “think” that the atmosphere is gaseous.

      Warmists are an odd lot, Stefan. Maybe aliens stole all their brain waves. I can’t think of any rational explanation for their beliefs. It’s different for religions, but this crew claim to be scientists. Scientivists, more like it.

      Luckily, as the money runs out, this mob should be the first to be “let go”.

      We can only hope!

      Live well and prosper,

      Mike Flynn.

      • Mike, when the soil warms extra -> vertical winds increase and equalize in a jiffy. You cannot feel vertical winds on the ground, because that is the starting point, but up high they can keep a man on a glider for hours in the air. Those people with hang-gliders can tell you that: where the ground is hotter, those winds are stronger – that’s why they don’t like rice paddies and swamps. O&N regulate the overall temperature of the whole planet to be always the same – if one place gets warmer than normal, other places get colder than normal. cheers! http://globalwarmingdenier.wordpress.com/climate/

  32. R. Gates aka Skeptical Warmist

    Don’t know if adding new data points will remove all the “pause”, for certainly the sleepy sun and cool phase of the PDO, along with a moderate increase in natural aerosols, have brought some negative forcing to tropospheric temperatures. But whatever pause this brought, it’s well over now for Australia, with 2013 set to be that country’s hottest year on record:

    http://www.bom.gov.au/climate/change/index.shtml#tabs=Climate-change-tracker&tracker=trend-maps

    This of course is driving the Aussie climate “skeptic” nutters crazy.

    • The sea ice extent in the Arctic (much more important to R. Gates) has increased dramatically over the same period, causing extreme agitation to Neven and Gatesy and all warmists, and driving American and European climate “warmist” nutters crazy and loopy.
      By the way, is this the same R. Gates who has recently taken to saying that tropospheric temperatures are not reliable, as the heat is stored in the oceans, and that we should disregard the pause for this reason?
      No, it must be a cherry-picking imposter. LOL

      • It’s winter.

        You should have been skeptical of the pause because it made no sense. Unless you had confirmation bias. Then it made a lot of sense.

        Heat is stored in the ocean.

        Starting at a huge El Niño and ending at a huge La Niña: it never occurred to you that that might possibly be a cherry-pickin’ thing to do?

      • R. Gates aka Skeptical Warmist

        You must be one of those Aussies confused about how your record warm year fits in with the meme that the “globe is cooling my friends”. Here’s a hint: it doesn’t. One of those doesn’t fit and will cause you increasing cognitive dissonance.

    • R. Gates

      it’s well over now for Australia, with 2013 set be that country’s hottest year on record

      Tell us about it after it has occurred, Gates, not while you are skeptically speculating that it might occur.

      Max

      • R. Gates aka Skeptical Warmist

        Only 6 weeks left in the year, Max, and the Australian summer has been starting out quite a bit warmer than average.

      • Jch, it’s winter, duh.
        Read what I actually wrote.
        Re heat stored in the oceans: if the oceans were hotter, i.e. storing more heat, then the atmosphere would be warmer as well.
        In other words, if the oceans had been storing more heat, i.e. hotter, for 17 years, the atmosphere would have been hotter for the last 17 years (no pause).
        Heat is shared, you know; basic science 101.
        Glad you admit there is a pause; perhaps it will give you pause for thought.
        By the way, if one is to have a real pause, it will not matter where the El Niños and La Niñas come, as there will always be several of both in most 17-year pauses, and your argument can always be made (wrongly), as there will nearly always be an El Niño somewhere at the start of any real pause.

      • R. Gates, this year is on course to be the 5th coldest of the last 10 years worldwide. This will of course change 2012 and 2011 into being the 10th and 11th coldest years this century, from 9th and 10th.
        Thank God Australia was warmer for you this year, or your global warming would have really gone down the chute. So the last 3 years have been the 5th, 10th and 11th coldest out of 13 years this century.
        Seems more like free fall on these cherry-picked examples alone.

      • R. Gates aka Skeptical Warmist

        Angech,

        Your examples are even better than Tisdale’s psychotropic cherries. 2013 is on track to be the warmest non-El Niño year on record, and the last La Niña year was the warmest La Niña year. Given the huge influence of ENSO on these temperatures, this shows remarkable underlying warming, completely shattering the “globe is cooling” meme, but “skeptics” don’t see it that way through their psychotropic-cherry-induced haze.

    • Warm Aussie temps may “be driving Aussie climate ‘skeptic’ nutters crazy”, BUT

      The fact is that since the new millennium started (January 2001), global temperature (HadCRUT4) has been cooling.

      And 2013 is set to become the 8th warmest (or 6th coolest) year of the millennium.

      1 2010 0.547C
      2 2005 0.539C
      3 2003 0.503C
      4 2006 0.495C
      5 2009 0.494C
      6 2002 0.492C
      7 2007 0.483C
      8 2013 0.474C
      9 2012 0.448C
      10 2004 0.445C
      11 2001 0.437C
      12 2011 0.406C
      13 2008 0.388C
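      The ranking above follows directly from sorting the quoted anomaly values:

```python
# HadCRUT4 annual anomalies as quoted above (deg C).
anoms = {
    2001: 0.437, 2002: 0.492, 2003: 0.503, 2004: 0.445, 2005: 0.539,
    2006: 0.495, 2007: 0.483, 2008: 0.388, 2009: 0.494, 2010: 0.547,
    2011: 0.406, 2012: 0.448, 2013: 0.474,
}
ranked = sorted(anoms, key=anoms.get, reverse=True)  # warmest first
print(ranked.index(2013) + 1)  # 2013's rank among the 13 years: 8th warmest
```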

      And that must “be driving the climate ‘warmist’ nutters crazy.”

      Right?

      Max

      • Max said:

        “And that must” be driving the climate “warmist” nutters crazy”.

        Right?”

        Max, Doesn’t drive me crazy. The CSALT model is the equalizer:
        http://imageshack.us/a/img818/3699/2yd.gif

        I had been blogging that the last couple of years of data weren’t matching the CSALT model, yet this paper comes along and it makes sense.

        “- ! – The current year is not close to the degree of warming that SOI neutral conditions would imply

        Timely that September’s GISS data was recently made available. The last few months’ data points are shown in the right-most panel below. For the model to continue to hold, any gap between model and data cannot persist. It is indeed possible that this gap is closing, with a temperature anomaly of 0.74C for September. This is something to keep an eye on.”

        http://img534.imageshack.us/img534/3678/nj7d.gif

      • Webby

        Your CSALT model doesn’t provide evidence of anything.

        The HadCRUT4 record (with all its known ex post facto adjustments, warts and blemishes) does provide empirical evidence that the global average temperature (whazzat?) is not rising, but cooling slightly.

        And that “must be driving the climate ‘warmist’ nutters crazy” (as Gates puts it).

        Max

      • R. Gates aka Skeptical Warmist

        Max said:

        “Warm Aussie temps may “be driving Aussie climate ‘skeptic’ nutters crazy….”


        Yes it is.

    • Sure, and when I was there a few weeks ago the farmers were complaining that the grape harvest was being wiped out by the unseasonal frost. I think the heat is of that special gridded, adjusted and interpolated sort that is only noticeable by climate scientists, while us simple peasants only notice that we have to scrape ice off the windscreen.

    • k scott denison

      Gates, if 2013 is the hottest year on record in Australia, please answer the following:

      1. How long is “the record”?
      2. What, exactly, is “the record” – how many temperature sensors in the same locations for how long?
      3. What number do you get when you divide the length of “the record” by the estimated age of Australia?
      4. What percentage of the total existence of Australia does “the record” cover?
      5. In light of 1-4, how much sense does the comment “… with 2013 set to be that country’s hottest year on record” make?

      And finally, for extra credit, one more:

      6. I assert that 2013 was actually ranked as the 22,013th hottest year ever in Australia. Can you prove me wrong?

      • 6. I assert that 2013 was actually ranked as the 22,013th hottest year ever in Australia. Can you prove me wrong?

        I reckon you are wrong.

        In the past 500 million years, the planet has been without ice at either pole for about 75% of the time. Even allowing for the fact that the Australian land mass cruises between the south pole and way north of the equator over that time, let’s just assume that for the 75% of the 500 million years when there was no ice at either pole, Australia was warmer than now. On that basis, Australia has been warmer than now for 75% x 500 million = 375 million years.

        Therefore, I suggest 2013 was actually ranked approximately 375 millionth hottest year!

        :)

      • k scott denison

        ;-)

    • k scott denison

      Seems to me that saying that 2013 is the hottest year on record for Australia is like someone who has never driven anywhere but New Jersey saying that exit 3 on the New Jersey Turnpike is the worst road design on record…

      Funny that it’s only Warmists that I see talking about “hottest ever”… Wonder why?

      • When they talk about the hottest year “on record”, they are speaking of the instrumental record. Speculation of temps before the thermometer period goes beyond the scope of what the BOM has stated, and is just noise-making.

        “2. What, exactly is “the record” – how many temperature sensors in the same locations for how long.”

        The Australian national temperature record starts in 1910. Though there are earlier temperature measurements, there are fewer weather stations reporting before then. For more information, make smart work of your browser search function. Information can be found at the Australian Bureau of Meteorology website.

        I’ve recently communicated with the BOM about the alleged recent record-breaking twelvemonth. It took a fortnight for them to respond, and the answers were helpful, giving links for me to check them out. Any truly interested party can do likewise. Those with more vested interests are still at liberty to be argumentative rather than learn anything.

        Something I learned from the exchange is that there is no official (single-number) uncertainty measure for the Australian national temperature records, owing to the difficulty of resolving structural uncertainties, like spatialization. But they offered an annual temperature uncertainty of 0.1C, on par with global (which does have a formal uncertainty measure), on the reasoning that the Australian weather station ensemble is denser than global.

        That brings up further queries for me. People with agendas instead come up with objections. A subtle, but critical difference in thinking that separates scientific analysis from propaganda.

  33. What good is good data? I just watched MSNBC ‘news’ report that polar bears are threatened because of disappearing arctic ice. NBC has become the Huffington Post of broadcast media. Facts simply don’t matter.

    • Since ice is actually increasing in the Arctic, the polar bears have turned their attention to logging on to the Obamacare website. Unfortunately, they are getting stressed over not being able to log on or acquire healthcare and are losing their fur…

      ..so we are back to the “polar bears are in danger” meme.

      PS I have inside sources on this info.
      – Teddi Bear

      • Naw, Teddi, the reason they’re losing their fur is that it’s getting so warm there because of your SUV (and my pickup truck).

        Max

        PS Obama said so.

      • Given that parts of Alaska are apparently descending into an ice age it could be that the bears are afraid Canadians will be hunting them for their skins, again.

  34. I find it interesting that both Alaska and the SSTs in the Bering Strait show cooling, yet the constructed temperatures in the adjacent Arctic show strong warming. Perhaps I’ll see if I can find a free copy of the paper to find out why.

  35. stevefitzpatrick

    It is an interesting approach, and appears to have been carefully done. A couple of initial comments:
    1) The reanalysis data seems very far out of line, even compared to the ‘hybrid’ reconstruction. I was particularly struck by regions with reasonably good instrument coverage where the reanalysis data was not close. It may have been prudent to point out in the conclusions that the results suggest the reanalysis data may suffer a substantial positive bias, and so should be used with caution. I am reasonably sure that the reanalysis ‘temperature data’ has been widely used in other studies…. and maybe that is not a good thing.

    2) The authors correctly note the potential contribution of multi-decadal cyclical behavior to recent arctic warming, along with possible contributions from soot (black carbon) on snow and ice, changes in albedo due to recently exposed land (from ice melt), and of course, polar amplification of GHG driven warming. However, the relative importance of these remains unclear. It would seem to me prudent to extend the kriging reconstruction back through much more of the Hadley temperature record, in order to better evaluate the potential biases from sparsely covered high latitude regions in earlier times. Focusing on only the post-satellite period does not provide a sufficiently broad perspective to evaluate multi-decadal cyclical contributions.

    • steve,

      reanalysis data is sketchy. Sometime after AGU hopefully Zeke and Robert and I will post our poster comparing hi-res surface measurements to reanalysis (MERRA and NARR) and to RSS and UAH

      Part of the issue may be the data sources they use.

      More later

  36. What I notice in the response to this paper by the denizen skeptics is something akin to confirmation bias in reverse. Results are presented that disaffirm your favorite concept (the pause perhaps), and immediately the reaction is to disbelieve the methods and criticize the authors, rather than first trying to understand what they did. Don’t criticize confirmation bias if this is what you do. I, on the other hand, find this result gratifying because it does confirm some things I thought should be happening, particularly the magnitude of Arctic warming being missed. It is not confirmation bias at all if it is good science.

    • Jim,

      Highly relevant to what was missing on the last thread – what should ‘skeptics’ do to earn the trust of scientists.

      This thread has been a great example of precisely what not to do.

      Certainly we’ll have a couple of scientists who now have less reason to trust the input of ‘skeptics’ after their interaction at Climate Etc.

      • (1) Have the authors done anything to stimulate distrust? Have they violated any of the “skeptics” rules of engagement? Have they followed the suggestions for how to gain trust?

        (2) Have the “skeptics” indicated trust?

        Answering these questions should help to illuminate why this problem is more complex than outlined in the previous post.

      • Michael,

        I’m not sure whether you can even come up with any reason for anybody at all to earn the trust of a scientist.

        Why would a scientist care about a person’s “input”?

        Facts are facts. Whether people agree or not, makes no difference at all.

        If I am wrong, I am sure you will speedily correct me.

        Live well and prosper,

        Mike Flynn.

      • Joshua,

        1. In your typical Warmist way, you are trying to play the victim. So the poor authors are not being “trusted” by the readers. Boo hoo. Cry like a baby, have a tantrum – who cares?

        2. Who cares?

        You are definitely confusing me, Joshua. Confusing me with somebody who cares what you think.

        Live well and prosper,

        Mike Flynn.

      • Mike,

        There have been calls here for scientists to ‘engage’ more with skeptics.

        I’m just suggesting that if they could trust that such an activity wasn’t a complete waste of their time, they might be more inclined to do so.

      • Michael wrote:
        There have been calls here for scientists to ‘engage’ more with skeptics.

        Scientists already are skeptics — it’s drilled into them from day 1.

        So what is it you’re really trying to say?

      • David,

        Sorry that should be ‘skeptics’.

      • I think this is an important point. But I hope the authors will ignore the “noise” present in any blog’s comments. Plenty of us were extremely pleased to see the authors show up here and comment, and it certainly made us trust them more. (Mosher’s letter of reference didn’t do any harm, either.)
        I think it would be unreasonable of scientists to expect nothing but politeness when they do engage; it’s not the nature of the medium. I hope that they will notice those of us who appreciate their being here.

    • There are some immediate reactions to papers that elucidate the internal workings of the mind. It is a kind of skeptical normativism that resists change or attacks concepts not fitting it. The “pause” is now an established part of that norm, and this study for sure did not fit with the program.

      • The “pause” is now an established part of that norm…

        +1 … oh, fiddly! +5 !

        Has anyone bothered to tell the ‘pause’? “Excuse me, when does the next swan leave?”

    • Jim D

      “I, on the other hand find this result gratifying because it does confirm some things I thought should be happening, particularly the magnitude of Arctic warming being missed. ”

      Confirms? The work (Cowtan and Way) is a set of analyses (essentially interpolation), not observations. Certainly it potentially informs future observations and invites further comparison with observation, and that is what it should do. The road is long. I just hope it represents an improvement in the evolving methodology…from BEST to better? ;o). Time will tell.

      • I do not deny having confirmation bias which is why I use that word. Confirms just means it fits with other independent lines of evidence that I already tend to trust, like what’s happening to the sea ice recently.

      • “not observations.”

        arctic buoys. never used before. guess what?

      • “arctic buoys. never used before. guess what?”

        Touche. It is there in Way’s comment. Thanks, I am a poor scanner. I assume, that was limited data used to develop and/or test the UAH surface relationship applied more generally to UAH observations where no buoy was present…icing on the cake so to say. Is that correct? To me the present nugget in the paper is the use of coregionalized RV’s [or something similar].

    • I can’t say this is groundbreaking. We have known for some time that Arctic ice has diminished for several years. So, the circumstantial evidence is in line with the paper, generally speaking. I don’t know how solid a number theirs is, and the range, but given the paucity of data, I wouldn’t bet the farm on it. And it also raises the question of the cause of the Arctic warming. Still, it is a contribution to climate science.

      • Jim2

        For me life is simpler. Cowtan and Way looked for an approach to more coverage and the effects of the missing areas of data on estimates. Their approach seems reasonable and consistent with geostatistical practice [I say ‘seems’ because of the paywall constraint–not a complaint, just a caveat on my perception.] From the point of view of the ‘missing data’ regions it makes observation(s)–actually a prediction. This may be subject to testing by future observations. A method for imputing [someone needs to introduce this term!] the missing data is laid out.

        Frankly these seem to be all good things to me, and substantive for the times. Good for them. I hope they handle the publicity side well. Too much flash compromises the goods.

      • “From the point of view of the ‘missing data’ regions it makes observation(s)–actually a prediction. This may be subject to testing by future observations. A method for imputing [someone needs to introduce this term!] the missing data is laid out.”

        Precisely.

        One of the things that folks should know is that there are data rescue efforts going on. So, we use methods to predict (estimate) what the temperature would be at a location where we have no real measurement.
        After data rescue efforts we then have a list of station data from the past that has recently been digitized. We can now compare this to our prior estimate.

        For example, Robert Way recently sent me a comparison of what our method predicted for a station in one area of the world and the actual record which was recently recovered. And yes it matched.

      • “And yes it matched.”

        Noted. Matching probably is not/should not be a surprise, but it is always pleasing when estimating, and very nice to say.

    • k scott denison

      Jim D | November 13, 2013 at 10:25 pm

      Forest for the trees Jim. The data is grossly under sampled geographically so the authors are trying to find creative ways to fill it in.

      The major issue, however, is the minuscule length of the record of reliable temperature measurements that is available.

      Even if one stipulates that the temperature trend for the past 50, 40, 30, etc. years is the highest on record, who cares? That’s akin to saying the last 50, 40, 30 yards we’ve driven of Interstate 90 in the US are the worst on record! Well, we’ve only driven on what, 300 yards and the road is 3,101 miles long.

      Perhaps our perspective isn’t as broad as we think.

      • We might see that land temperatures have risen 0.9 C in the last three decades, which is unprecedented in the record. Some might not care. That’s fine. We don’t all have to be interested in understanding why this is happening.

      • k scott denison

        Yes Jim D, and the condition of the road in the last 30 yards we drove was unprecedented in the record! OMG, we must tear it all up and start again.

        Doesn’t Chicken Little mean anything to you?

  37. When the “pause” goes paws up has always been just a matter of time. Before it even existed will do nicely.

    • Anomalously, we’re told, snow this early in Chicago, today.

      • Actually that is only 2 days earlier than average.
        As a former Downers Trojan, I know Chicago is rather cold.

      • k scott denison

        As a northern neighbor of Chicago I actually breathe much heavier than I need to… trying to warm the damn place up!

      • Always fun to see the bellmen outside the Chicago hotels — dressed like “Bear Claw” in Jeremiah Johnson — while out-of-towners stand in line in short sleeve shirts… freezing!

    • Take your “pause goes paws up” argument to the Sun, it’s not cooperating.

      • It’s already virtually a done deal. The napping sun is a spineless cooler. All of the coolers are spineless. When it comes to snot knocking, team cool is light in the loafers. And they don’t have finesse either.

      • JCH,

        Are you reading from the Book of Warm?

        It defines “virtual” as “actual”, from memory. As in “virtually a done deal”.
        This means “it hasn’t happened yet, but we’ll pretend it has. We’ve got away with it for years, and nobody’s woken up so far!”

        Then you are supposed to throw in some meaningless phrases. “Snot knocking” is a good one, because nobody knows what it means.

        “Light in the loafers” is also good, because you can always pretend it is just another meaningless phrase, which of course it is.

        I’m pretty light in the loafers, which hopefully allows me to float like a butterfly and all the rest. Finesse? I’m sure I can leave that to the followers of the Way of the Warm. Here’s one definition : –
        Finesse : – To handle with a deceptive or evasive strategy.

        Live well and prosper,

        Mike Flynn.

  38. SST measurements are one of the best data sets we have for assessing climate in the long term: a longer span and a better spatial distribution over time than any other measurements available because SST was routinely collected by merchant ships. Add to that the integrating effect of the ocean’s mixed layer and the absence of urban heat island effects. Why then is there this concern with “improving” these data? My analysis of the HadSST2 data set showed that it exhibited much greater variance than the output from the HadCM3 model almost everywhere on the globe. This greater variance can mostly be attributed to ocean current boundary variations. It is not due to “measurement error” but rather reflects the essentially stochastic nature of climate, something that is not captured by climate models which are deterministic. This endless reprocessing of SST observations sounds like an attempt to rig the data to fit the models. My paper on this topic can be found at
    http://www.blackjay.net/papers/climate-modeling-hypothesis-testing/index.html with some introductory remarks at
    http://www.blackjay.net/papers/index.html

    • True, true: a human signal does not exist at all without manipulating the data and pointing to statistical models that real-world observations invalidate altogether. The only correlation observed between increased CO2 and global warming is the other way around: the historical record shows that increases in atmospheric CO2 follow periods of global warming. The lag time is measured in centuries – 1000±500 years (Wahlen et al. 1999).

  39. If cigarettes in Chicago are to be $14.01 per pack, what would the majority on the Left wish to charge per gallon of gas if they could escape the burden?

    • I wouldn’t propose a tax I wouldn’t be willing to pay. So there is no answer to your question. I could go as high as 3 bucks a gallon, if the proceeds were distributed back to the population as a dividend. With no returns for those making over a certain income.

      That number should be less than what this liberal hard working progressive earns producing a product that hopefully will lead to an improvement in the prognosis for those suffering from a common malady. Amyvid and Vizamyl hopefully will lead to treatment some of the denizens on this blog sorely need.

      If the gubmint raised my taxes and balanced the budget, I would be happy.

      I am no longer willing to pay cigarette taxes, but on other combustible vegetative matter, I would go much higher.

      • Apparently, smoking is a risk factor for dementia and yet may prevent Alzheimer’s… it just depends on who you ask–e.g., does dietary cholesterol really cause heart disease?

  40. From the quoted paper (WRT random errors);

    “Although they might confound a single measurement, the independence of the individual errors means they tend to cancel out when large numbers are averaged together. Therefore, the contribution of random independent errors to the uncertainty on the global average SST is much smaller than the contribution of random error to the uncertainty on a single observation even in the most sparsely observed years”

    I am afraid that the climate science community still has a very poor understanding of “errors” and “uncertainties” WRT historical temperature data records. And the community is misapplying the “law of large numbers” in an almost obscene way.

    Let me explain with a few engineering examples;

    1) I wish to make a piece of metal that is 100 inches long out of one hundred individual 1 inch long pieces (I would never do that, but this is an example after all). So I tell 100 vendors to make me a piece of metal that is 1 inch (plus or minus 1/8 inch) long. Yes, when I assemble those into a finished assembly the “law of large numbers” will likely result in an assembly that is 100 (+/- .01”) inches long. I am counting on the statistical distribution among the vendors’ private errors to cancel out. Each vendor is independent and has differing degrees of “competence”. If I was really “frugal” I could ask 99 really cheap vendors to make me 1” (+/-1/2”) metal blocks and then I could hire one expensive competent vendor to measure all those blocks and make me one (1) final block that makes the final assembled length equal to 100 (+/- 0.001”) inches. In the engineering trades this is known as “shimming”; we often make an assembly with more cost-effective “mass market” parts (volume is the main driver of ultimate costs) and then adjust the final assembly to meet the required specification with an assortment of relatively cheap parts. In some cases we make “shims” that are explicitly “oversized” so we can remove some material by grinding once we know the final “stack-up” of all the other dimensions in the assembly. The trick is to make sure you have a “shim” with a positive value; skilled as engineers are, we still have not figured out how to make a part that is MINUS 10 thousandths of an inch thick (although it is quite easy to do in a spreadsheet).

    2) I wish to know the absolute voltage (hint/hint temperature) present at one location in my factory. I have an old factory powered by DC electricity (I was buddies with Thomas Edison, like George Eastman was) so we can ignore details like “True RMS Voltage” for a moment. So I purchase 100 voltmeters from several reputable firms. The spec sheet for each meter says it has an absolute accuracy of +/- 1%. So I now proceed to make 100 voltage measurements by connecting/disconnecting each of my meters one at a time to the “measurand” (i.e. the wires whose voltage I wish to determine). So after a few hours I have 100 measurements of the absolute voltage at that point in my factory. Now the question becomes: how well do I know the absolute voltage at that point? Folks that misunderstand the “law of large numbers” would likely say: well, that’s 1% averaged over 100 individual meters, so you know the voltage to 0.01%. WRONG. Without specific prior knowledge of the statistical distribution of the errors in each meter (which you do not have; remember, each meter is only good to +/- 1%, no more) you still (after 100 measurements) only know the absolute voltage to +/- 1%. If you want to know it better you just need one +/- 0.01% meter.
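
    Kevin’s voltmeter scenario is easy to simulate. The sketch below (my own, with an assumed shared bias of 0.8 V standing in for a common calibration error) illustrates the distinction at stake: averaging shrinks independent random noise by 1/sqrt(n), but a systematic error common to all the meters is untouched by averaging:

```python
import numpy as np

rng = np.random.default_rng(42)
true_v = 100.0          # volts
n_meters = 100
n_trials = 2000

# Case 1: purely random, independent errors (sd = 1 V, i.e. 1% of reading)
random_means = []
for _ in range(n_trials):
    readings = true_v + rng.normal(0.0, 1.0, n_meters)
    random_means.append(readings.mean())

# Case 2: a shared systematic bias (e.g. a common calibration error),
# plus the same independent noise
bias = 0.8              # volts, common to every meter
biased_means = []
for _ in range(n_trials):
    readings = true_v + bias + rng.normal(0.0, 1.0, n_meters)
    biased_means.append(readings.mean())

# Noise on the mean shrinks roughly as 1/sqrt(n_meters):
print(np.std(random_means))            # close to 0.1 V
# The shared bias survives averaging in full:
print(np.mean(biased_means) - true_v)  # close to 0.8 V
```

    Which of the two cases describes a given set of instruments is an empirical question; this is the random-versus-systematic distinction the quoted paper is drawing.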

    Just one more example of the “law of large numbers”: the machines that make the spheres used in ball bearings were invented by the Germans before WWII (look up “centerless grinding”, and the air raids against Schweinfurt). In fact most ball bearings are still specified in metric sizes because of this heritage. The process is relatively inexpensive and can make great quantities of spheres. However, it can be quite difficult to adjust the machine to spit out millions of spheres that are accurate (knowledge wise) to 0.0001”. So the simple manufacturing plan is to make lots of spheres knowing that there will be a large (+/- 0.01”) spread (statistical distribution) of final sizes. Then you sort through the spheres (easy to automate) and pick out the 0.1% that have the highest accuracy. The “law of large numbers” simply says if you make a million spheres one thousand of them will be “very/very” accurate. And then of course you can sell them at a premium, even though they cost you exactly the same to manufacture as the “crappy” (+/- 0.01″) spheres.

    The “law of large numbers” does not say that multiple measurements from independent instruments will yield a better accuracy; sorry, but that is an incorrect interpretation by the climate science community.

    Cheers, Kevin.

    • Kevin,

      Just another example of the breathtaking unfounded assumptions by the pseudo scientific climatological measurebators.

      Live well and prosper,

      Mike Flynn.

      • Mike

        I can access John Kennedy’s draft paper in full but not the Cowtan and Way paper. Has anyone got a link to the full paper as otherwise there seems to be a lot of discussion here about an abstract and articles carried by such sources as the Guardian?

        Judith has linked above to my article carried here a couple of years ago on the uncertainty of SST’s which drew a spirited response from John Kennedy which continued into private emails.

        I am delighted he has addressed many of my concerns and even used my Rumsfeld quote. John is definitely a scientist whose integrity I would trust but that does not mean I fully agree with what he has written.

        The fact remains that historically very few SST’s were taken and of those very few have any degree of confirmed accuracy. The discussion about canvas or wooden buckets is irrelevant compared to the depth the sample was obtained at, how frequently, and how it was measured.

        SST’s – apart from perhaps a very few on very well travelled routes – should be taken with a very large pinch of salt until the 1970’s. Justifying the accuracy of the record back to 1850 means far more assumptions have to be made than is warranted by the nature of the data, and much of the data has to be interpolated from very sparse records in adjacent grids.

        tonyb.

    • KevinK, but that is the joy of anomaly. You are not concerned with the finished product, just the deviation of the parts. So with combined measurements you can get a GMST anomaly repeatable to 0.05C even though the actual product is unknown to +/- 1K and the effective imbalance energy maybe +/- 17Wm-2.

      Even with that reduction though you still have 0.05C uncertainty in the latest data and maybe 0.25C in the early data. If kriging can reduce the whole chain of data uncertainty that would be a big help.

      You still have the Wm-2 problem which is the bugger. If the GMT is 288K/390Wm-2 the average anomaly should represent 5.44Wm-2/K. For the oceans the range of error is +/- 0.9 Wm-2/K even if you have perfection with the anomaly. With the atmosphere the error is closer to +3.6 to -1.9 Wm-2/K with a perfect temperature anomaly. This is where the law of large numbers gets interesting. You can have nearly 0.5C of “warming” with no change in energy just by redistributing the energy. So if you krige one data set you still need to krige every calculation based on that data set if you want your finished product to match the accuracy of the data.
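
      The ~5.4 Wm-2/K figure here follows from the Stefan-Boltzmann derivative dF/dT = 4·sigma·T^3; a quick check, treating the 288 K surface as an ideal blackbody (an idealization, not necessarily the commenter’s exact calculation):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(t_kelvin):
    """F = sigma * T^4 for an ideal blackbody."""
    return SIGMA * t_kelvin ** 4

def flux_sensitivity(t_kelvin):
    """dF/dT = 4 * sigma * T^3, the flux change per kelvin."""
    return 4.0 * SIGMA * t_kelvin ** 3

print(blackbody_flux(288.0))      # ~390 W/m^2, matching the 288K/390Wm-2 pairing
print(flux_sensitivity(288.0))    # ~5.42 W/m^2 per K
```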

    • Matthew R Marler

      KevinK: The “law of large numbers” does not say say that multiple measurements from independent instruments will yield a better accuracy, sorry but that is an incorrect interpretation by the climate science community.

      You are quoting the Law of Large Numbers, but the authors are quoting the Central Limit Theorem. Your critique is irrelevant to the problem at hand.

      • Kevin’s premise was that thermometers need be neither independent nor identically distributed. Since those are the requirements for the classical Central Limit Theorem, Kevin can justifiably infer from his premise the stronger result that the Central Limit Theorem does not apply, even if he only made a weaker claim.

        I would instead question Kevin’s premise. If the temperature records starting in 1850 were so systematically biased by inferior 19th century thermometers as to be meaningless, just imagine how much less meaningful the Central England Temperature record must be for the measurements begun in 1659, whose thermometers would be even more systematically biased.

      • Vaughan Pratt

        As the most scrutinised temperature record in the world CET has been cross related to a number of local instrumental records to compile it and then substantiated by means of other records and observations. It bears no relation to most data sets which were assembled and then received limited cross referencing. The exceptions include the 7 long temperature records in Europe examined by Phil Jones amongst others and funded by the EU. If you read the resultant book you will see exactly why the historic records are no more than a guide and need cross referencing.

        See Manley for the compilation of the monthly CET record to 1659 and Parker for the Daily record to 1772.

        I had the great pleasure of meeting David Parker when I went to the Met office a couple of weeks ago to discuss CET and my own reconstruction of it, currently from 1659 to 1538. I will be there again next week to assemble more information to try to find the transition dates between the LIA and MWP.

        tonyb

      • Doesn’t the Lindeberg-Feller CLT allow for independent but not identically distributed r.v.s?…The r.v.s have to satisfy the Lindeberg condition, but I thought that was virtually guaranteed if the r.v.s are bounded. The rate of convergence can be slower than root n, but you still get convergence to a Normal.
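
      A quick numerical illustration of NW’s point (my own sketch, with made-up distributions): sums of independent but non-identically distributed bounded variables, here uniforms with differing half-widths, do standardize to something close to N(0, 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
half_widths = rng.uniform(0.5, 2.0, n)      # a different spread for each r.v.
variances = half_widths ** 2 / 3.0          # Var(U(-a, a)) = a^2 / 3
s_n = np.sqrt(variances.sum())              # sd of the sum

n_samples = 20000
sums = np.zeros(n_samples)
for a in half_widths:
    sums += rng.uniform(-a, a, n_samples)   # independent, not identical
z = sums / s_n                              # standardized sums

within_1sd = np.mean(np.abs(z) < 1.0)
print(round(within_1sd, 3))   # close to 0.683, as for a standard normal
```

      None of which, of course, settles the separate empirical question of whether real thermometers are independent.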

      • Tony,
        plus one, (shucks you gave me the franchise fer plus ones,)
        fer yr observ-ay-shun on cross referencin’, the anti- doat ter
        con-firm-ay–shun bias. )

        Context’s the thing whereby
        we may unearth the problem
        situa-shun of the king, (and troops.)
        Situ-ay-shun analysis is able ter
        transcend the myopia
        of point of view and the
        opacity of time and space.

        Ref / me serf Sixth Edi-shun ‘History’s Chequered History.’
        Beth-the-serf.

      • @NW: Doesn’t the Lindeberg-Feller CLT allow for independent but not identically distributed r.v.s?

        Yes (and that’s (a) why I wrote “classical CLT” and (b) “and” instead of “or” between i. and i.d.), but good luck getting Kevin to agree that thermometers are independent. This seems unlikely for thermometers made by the same manufacturer, and also for thermometers based on the same principle.

        My counterargument would be that thermometer manufacturers have had several centuries to learn how to control whatever biases result from either the principle or the manufacturing process. 1850 is not all that long ago in the history of precise measurement, see e.g. [[marine chronometer]] for the lengths people were going to in 1750.

      • @climatereason: It bears no relation to most data sets which were assembled and then received limited cross referencing

        Quite so, Tony, and I wasn’t claiming otherwise. I was addressing Kevin’s concern about the possibility of biases in the measuring equipment, which would apply even more to two centuries earlier. Had he complained instead about the lack of cross-referencing of sea temperatures I wouldn’t have objected.

      • (For [[marine chronometer]] read marine chronometer. I forgot what forum I was in. :) )

    • Hi Kevin,

      Your critique of random errors is fair – not all errors are random and independent – but my paper discusses other kinds of errors like those you mention: systematic errors and pervasive systematic errors. Most of the work on uncertainty in observational data sets is involved with trying to understand the systematic errors, particularly those varying slowly with time.

      I’m intrigued by your examples though. In the block example where each block is an inch long with an eighth of an inch uncertainty, I would have thought that the uncertainty in the total length of one hundred inches would be around one and a quarter inches i.e. about ten times the uncertainty on a single block. How did you arrive at the +/-0.01″ answer?

      Cheers, John
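
      John’s one-and-a-quarter-inch figure is the root-sum-square (quadrature) rule for independent errors; a quick simulation, assuming the +/- 1/8” is a one-sigma, zero-mean, independent error on each block, which is precisely the assumption in dispute:

```python
import numpy as np

rng = np.random.default_rng(7)
n_blocks, sd = 100, 0.125       # treat +/- 1/8" as a one-sigma uncertainty
n_assemblies = 50000

# Each assembly: 100 nominally 1-inch blocks with independent errors
lengths = 1.0 + rng.normal(0.0, sd, (n_assemblies, n_blocks))
totals = lengths.sum(axis=1)

print(np.std(totals))            # close to 1.25: errors add in quadrature
print(np.sqrt(n_blocks) * sd)    # 1.25 = sqrt(100) * 0.125
```

      Under independence the total uncertainty grows as sqrt(n), not as 1/sqrt(n), so the +/- .01” claim in the original example does not follow without a deliberate selection or trimming step.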

      • Hi Dr. Kennedy
        Paper ‘Reassessing biases and other uncertainties…’ is an excellent work, but then I am someone with a rather unconventional approach to these matters.
        One could forever argue about 1/10 of a degree C in the N. A. SST, but it appears to me that the general decadal trend in the NOAA’s AMO database is a good representation of the reality.
        Why?
        Well, the atmospheric pressure at Stykkisholmur/Reykjavik has been (accurately) measured from the early decades of the 1800’s; it matches closely the AMO trends (see LINK ), and furthermore it could be taken as a decadal precursor to the AMO.

  41. R. Gates aka Skeptical Warmist

    After looking into details, data, and methodology of the Cowtan & Way paper a bit more, I am persuaded by their approach. It does seem consistent and the results reasonable. Because of the lack of data precisely in the region of the planet where tropospheric warming has been the greatest, i.e. the Arctic, some of the “pause” was exaggerated – that is, there was not as large a pause. But I think it is accurate to say that the rate of growth in tropospheric temperatures certainly moderated back to a mean that does reflect the influence of the many factors being researched elsewhere: namely, a reduction in the rate of flow of energy from ocean to atmosphere (the negative PDO effect), a reduction in overall solar output, and a moderate increase in natural aerosols from an overall slight uptick in volcanic activity. What this gets at is that some influence from a positive PDO (and AMO) during the 1975-2000 period should be recognized, but even filtering this out, we see the underlying warming of the troposphere in the range of 0.14C per decade globally from GH increases even during the so-called pause, when factoring in the even greater warming of the Arctic. But of course, those of us who think the 0.5 x 10^22 Joules per year that have been added to the ocean down to 2000m consistently for the past 40+ years, without a “pause” at all, vastly outweigh the rather fickle, and far smaller, energy changes in the troposphere anyway, find the lack of as huge a pause in the troposphere not all that shocking. These few tenths of a degree in the troposphere one way or another are a tiny fraction of the energy the ocean has been stockpiling away.
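
    For scale, the 0.5 x 10^22 J/yr figure quoted above converts to a modest globally averaged flux; a back-of-envelope check using standard values for the Earth’s surface area and the length of a year:

```python
JOULES_PER_YEAR = 0.5e22      # ocean heat uptake figure quoted in the comment
EARTH_SURFACE_M2 = 5.1e14     # total surface area of the Earth
SECONDS_PER_YEAR = 3.156e7

watts = JOULES_PER_YEAR / SECONDS_PER_YEAR
flux = watts / EARTH_SURFACE_M2
print(round(flux, 2))   # ~0.31 W/m^2 averaged over the whole globe
```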

    • Gates-” But of course, those of us who think the 0.5 x 10^22 Joules per year that have been added to the ocean down to 2000m consistently for the past 40+ years, without a “pause” at all, vastly outweighs the rather fickle, and far smaller energy changes in the troposphere any way, and so the lack of as huge a pause in the troposphere is not all that shocking. ”
      With oscillations for everything more numerous than Miley’s Twerks, what evidence is there that for the last 40 years we are not coming out of an oscillation of lower energy down to 2000m and that we are just returning to previously high levels through an as yet undiscovered oscillation? What data exists to falsify the hypothesis that high levels of energy existed hundreds of years ago that equate to those levels now?

      • R. Gates aka Skeptical Warmist

        Dennis,

        You can have any alternative hypotheses you want about ocean warming, but will it fit the facts as well as the rapid increase in GH Gases does? The oceans have been both warming rapidly and absorbing CO2, exactly as would be expected with the ongoing eruption of the human carbon volcano:

        http://phys.org/news/2013-11-scientists-hot-sour-breathless-oceans.html

        Massive amounts of carbon are being moved from the lithosphere to the atmosphere and hydrosphere by human activity. The results of this movement are becoming increasingly obvious.

      • Gates- Are you not answering my question because you don’t want to or you don’t know? The question was whether this warming was part of natural variability and part of an unknown oscillation of unknown periodicity. I have heard all the other CO2 stuff before. If you don’t know the answer, that is alright. It was not a trick question.

      • Dennis

        Gates is not answering your question because he does not know the answer.

        He simply ASS-U-MEs that since CO2 is rising and global surface temperature is NOT rising, there must be some “missing heat” somewhere.

        And (once the team “corrected” the initial ARGO data, which showed cooling of the upper ocean) ARGO now shows very slight warming (1.4×10^22 joules or 0.05C over the past decade).

        So, being a bit less “skeptical” than he claims, he gloms onto this data to explain the “missing heat”.

        The fact that the HadSST3 sea surface temperature cooled slightly over this period, doesn’t bother him at all.

        And, of course, the postulated ocean warming is projected back to the past 50-odd years (despite the fact that there are no meaningful data to support this) and it’s all because of GH warming.

        The whole story is becoming more bizarre day-by-day.

        Max

      • Max- Thanks. I suspected as much but didn’t want to embarrass him. They all have their own pet theories until you probe a little and find out their real understanding is as thin as chiffon. I think it is a legitimate question and goes to the issue of whether the increase in energy is unprecedented.

      • Max is crazy. A scientist finds out why. They went looking to find the missing heat. It would have been monumental scientific incompetence not to go looking for it.

      • This attempt to find the missing heat reminds me of separation anxiety.

      • You’re an accountant. If you have a journal that indicates deposits should be 10,000 dewdews, and the bank says only 7,000 dewdews were deposited, do you just adjust to the bank, or do you find out how many, if any, dewdews actually should have been deposited?

      • JCH

        Your accounting example sucks.

        Here’s another one:

        – You have a hypothesis.

        – It isn’t working out that way in real life.

        – So, instead of revising your hypothesis, you try to find some way to keep it alive.

        The above example obviously also sucks.

        The “truth” is somewhere in between.

        Max

      • It’s best when the bank statements and the journals capture every transaction. I’d prove each deposit in the register against the bank statement. Nine times out of ten the mistake is going to be in the register, and in that case I’d adjust the journal given sufficient confidence in the bank statement. The reconciliation of the register to the bank is just another case of doing things twice, like adding a column of numbers twice instead of once. Accountants have it easy compared to climate scientists. We can look at everything in the system. Even with large systems, say the one in use by General Mills, every transaction is verifiable and associated with the individual who authorized it. The number of assumptions used in accounting is limited compared to climate science. It seems that accounting in climate science is like the Wild West, with many ways of doing it, as in the Cowtan and Way paper, and with many transactions missing and various approaches to inferring them. As these approaches evolve they hopefully move towards a standard. But lacking agreed-upon standards we get varying answers on total heat content (Balance Sheet) and gains and losses (Income Statement).

        We have a poster here who emphasizes TOA data from the satellites. With accurate data, that’s an Income Statement, but there are limits. In the long run, though, everything passes through the TOA. Everything going on below the TOA is like intercompany transactions. If the IRS watched one thing going on with the climate, they’d be watching the TOA. Once TOA gains and losses are measured accurately enough, we’ll know reasonably well about changes in total heat content.

        Stephens et al. (2013) write about an imbalance at the TOA of 0.6 watts/meter^2 with an estimated uncertainty of +/- 0.4 watts/meter^2. Taking their incoming SW of about 340 watts minus the roughly 100 watts reflected straight back out gives the smaller number of about 240 watts absorbed, which puts any imbalance in context: 0.6 / 240 is about a 0.25% net gain above neutral. Flat atmospheric temperatures, if real, would suggest that gain is going into the oceans, or perhaps into melting ice as well. Anyway, the point is that with solid TOA data, some inferences can be made that some will be comfortable with.
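The percentage quoted above is easy to check; a quick sketch using the rounded numbers from the comment:

```python
# Sanity check on the TOA numbers quoted from Stephens et al. (2013).
incoming_sw = 340.0                 # W/m^2, approximate incoming solar
reflected = 100.0                   # W/m^2, reflected straight back out
absorbed = incoming_sw - reflected  # ~240 W/m^2 actually absorbed
imbalance = 0.6                     # W/m^2, stated uncertainty +/- 0.4

fraction = imbalance / absorbed
print(f"net gain above neutral: {fraction:.2%}")  # -> 0.25%
```

Note the stated uncertainty (+/- 0.4 W/m^2) is two-thirds the size of the imbalance itself, so the 0.25% figure is a central estimate, not a tight bound.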

      • @manacker:

        I have just seen that you made the following assertion regarding the data on which the ocean warming over the past 50 years is diagnosed:

        And, of course, the postulated ocean warming is projected back to the past 50-odd years (despite the fact that there are no meaningful data to support this) and it’s all because of GH warming.

        You are totally making this up. What scientific publication by whom uses backward projection of the recent data to derive the ocean warming of the past 50 years, which is being used as evidence for the ocean warming (e.g. in the IPCC report)? Name it, please.

  42. Another strange day in climate science with just one more of those “corrections” which ALWAYS only go upwards.

    It may be asked why nobody showed comparable interest in the GISS dataset, where land data are extrapolated over the Arctic Ocean, thus increasing trends, because land is known to warm faster than the seas.

    Or why there is still no breakthrough on the urban heat / micro-siting adjustment issue, even though several authors have published findings that this effect may have inflated land temperature trends by a factor of 2.

    Now we have the strange situation of two data sets (HadCRUT4 and UAH) each showing almost no warming separately, but an increased trend when combined.

    Worse, surface-based trends are now even higher than satellite trends, even though basic physics says they should be lower, much lower, by a whopping factor of about 1.5.

    The best conclusion may just be to stick with the satellite data (perhaps RSS may now be better than UAH after the instrument failure) to evade all the issues of data and sampling quality on the surface.

    It does not change anything in the overall picture though.

    Climate models still fail almost everywhere, and Rosenthal et al. (2013) have just demonstrated in their landmark study that ocean heat content is near the very low end of the last 10,000 years, and that it will take about 400 years just to recapture the temperatures of the Medieval Warm Period, and only if warming continues.

    • Well, the authors are addressing a real problem: characterizing a central metric for climate change. While much is behind the paywall, they appear to implement an established methodology, including imputation of missing values and cross-validation, and they have results. The methodology can be reviewed and critiqued, the results can and will inform the observation process, and future testing moving beyond the present cross-validation can in theory be done. No climate models or their results are directly involved here.

      From this perspective, then, what warrants “The best conclusion may just be to stick with the satellite data”? For now, where is the beef? Who cares about comparable interest in GISS? Irrelevant here. The paper is ultimately, at its core, a methodology paper exploring bias in the estimation process under the condition of missing data. It seems a pretty responsible thing to do.

      Nothing changes? Sure it does. The work is incremental unless we get a black swan. But you can’t anticipate those.

    • The best conclusion may just be to stick with the satellite data

      You have also NOAA / Reynolds SST, with a glorious 16 year pause – not seen in the models. It may need some correction, though, if we think about it. ;)

    • Yeah, yeah, Rosenthal. Yeah, yeah, Marcott. We are at the cold end of the Holocene and may have already slipped into the deep ice chasm of glaciation but for the carbon steel dioxide crampons bitterly clinging to the wall of the abyss.
      ===============

      • It just takes hundreds of years to warm and cool oceans by a notable amount and this agrees very well with the tiny increases in ocean heat content we measure today.

        Rosenthal makes perfect sense if you agree that the MWP, previous warm periods, and the Little Ice Age each lasted for hundreds of years. Then it is just a logical consequence.

        Beyond that, the even higher values in the early/mid holocene make sense as well, because the flow of heat to the Arctic was still blocked at that time leading to this accumulation of heat.

        In my view, Rosenthal should be the poster chart of climate science, filling the gap after the Hockey Stick has been abandoned.

        http://climateaudit.files.wordpress.com/2013/11/rosenthal-2013-figure-2c-annotated.png?w=760&h=520

  43. “Both methods provide superior results than excluding the unsampled regions, with the hybrid method showing particular skill around the regions where no observations are available.”

    How precisely does one know what method better estimates temperatures where we have no measurements?

    • Cross-validation?

      • Cross-validation with what? Testing against actual measurements in a different area with completely different climate does not seem to me to provide any basis for confidence.

    • GaryM, of course one does not know for sure, but this is the fate of people who actually count white swans rather than simply asserting that they are all white. David Hume talked about all this a long time ago. As mwgrant says, they use a holdout sample to tune the predictive model, and then they go for what’s unknown. No, it’s not perfect, but that’s life in the real inductive world.

      Also, the people who are hailing this paper as “disappearing the pause” don’t understand the paper. In the paper, there is NO trend estimate over the last 16-17 years that exceeds twice the standard error of the estimate.

      • GaryM, to elaborate a bit… What they do is this. They estimate a model to predict observed temperatures in “the Border cells”…cells where there are direct measurements that are around the edges of the areas (poles and parts of Africa) where there are no direct observations. They statistically tune the model to achieve a pretty low average prediction error (small bias) in those border cells, without using those border cells in the estimation…in fact, without using those border cells or any cells within 1700 km of those border cells. This seems pretty reasonable.
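The holdout idea described above can be sketched in a few lines. Everything below is hypothetical 1-D data, not the paper's actual grid, buffer distance, or prediction model: a model is fitted only to points well away from a held-out "border" strip, and its mean prediction error (bias) is then measured on that strip.

```python
import numpy as np

# Toy holdout validation: fit away from the border, score on the border.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
truth = 0.01 * x**3 - 0.1 * x**2 + x          # stand-in "true" field
obs = truth + rng.normal(0.0, 0.01, x.size)   # noisy observations

border = x > 8.0              # held-out cells next to the "unobserved" region
train = x <= 8.0 - 1.7        # exclude a buffer, mimicking the 1700 km rule

coeffs = np.polyfit(x[train], obs[train], deg=3)   # fit on distant cells only
pred = np.polyval(coeffs, x[border])               # extrapolate to the border
bias = float(np.mean(pred - obs[border]))          # small bias lends confidence
print(f"holdout bias: {bias:+.4f}")
```

A small bias here does not prove the extrapolation is right everywhere, which is exactly GaryM's objection; it only shows the model is not systematically off in the one place it can be checked.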

      • this is the fate of people who actually count white swans, rather than simply asserting that they are all white.

        If they’re holding a watch they may just be estimating when the next one will leave.

    • Agreed, this has been my main question.

      • Between my house, two local reporting stations (one a couple miles east and the other a couple miles west of my house), and the larger airport 30 miles west, none generally report the same temp.

        And just where the line that separates Arctic and tropical air masses sits, north or south, is going to change the average temperature. If you’re only estimating where that line is, what’s the error margin on something like that?

    • @GaryM

      “Cross-validation with what?

      Best shot, as it’s been a while… addressing cross-validation, with more general comments on handling error in geostatistics.

      0.) Itself :O)

      As NW noted, the approach taken in geostatistics and the cross-validation approach indicated by Cowtan and Way are in the spirit of the ‘holdout’ or jack-knife.

      1.) The authors say enough above to prime the pump on cross-validation. As Mosher pointed out to me earlier (I had missed it), some arctic buoy data were employed – maybe they are limited in quantity at this time, but they were used and the use is apparently documented [Way comment]. One author (Way) has indicated in a comment response: ‘Hey, this cross-validation is important to our work. Go crawl over it, through it and under it’ (my wording). It is a pretty clear message/request/invitation.

      The approach taken seems to be a logical line of development from the perspective of applying geostatistical techniques. This is important because error analysis weighs heavily in those techniques and offers prospects not found in other commonly employed interpolation/estimation schemes [below]. Need more data? Of course. But after one gets that data one still has to process it, and processing is really what the paper is about. I view side-bar implications from interim processing such as reported here as potentially useful and a natural part of an evolving process in which the shelf-life of ideas/implications is always lurking in the background. Take note and move on. Not everything is answered at once.

      2.) Kriging is more than an interpolation scheme. Local and global error estimation is at its core. That is one big reason why it has proven useful for spatial analysis in the earth sciences. One has to think beyond just estimating the value of a variable at a point; estimation with error analysis integrated part and parcel is plainly useful.

      In many applications of kriging, e.g. hydrology, mining, a typical practice has been to use the statistical (variogram/kriging) model to estimate the value at each datum location using the data less that point’s value. It is important to note that unlike most interpolation techniques used for spatial estimation kriging also estimates the error at a/each location where applied.* Thus in the cross validation phase one can build a picture (maps, diagnostics) of the estimated performance of the model over and in the context of the area under study.

      Of course, since kriging produces predictions of local estimation errors, one can produce error maps for the kriged entity. This can be quite useful in designing future ‘sampling’ programs (use of virtual data in unsampled areas is also a neat costing trick) and in constructing confidence bands and/or surfaces for mapped temperature/anomaly contour lines.
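To make the point concrete, here is a minimal 1-D simple-kriging sketch with an assumed Gaussian covariance and a known zero mean. This is illustrative only, not the variogram model any of the data sets actually use; the point is that kriging returns an error variance alongside every estimate.

```python
import numpy as np

def cov(h, sigma2=1.0, length=2.0):
    # Assumed Gaussian covariance model (illustrative choice only)
    return sigma2 * np.exp(-(h / length) ** 2)

def simple_krige(x, z, x0, sigma2=1.0, length=2.0):
    """Simple kriging with known zero mean: (estimate, kriging variance) at x0."""
    C = cov(np.abs(x[:, None] - x[None, :]), sigma2, length)  # data covariances
    c0 = cov(np.abs(x - x0), sigma2, length)                  # data-to-target
    w = np.linalg.solve(C, c0)                                # kriging weights
    return float(w @ z), float(sigma2 - w @ c0)               # value + error

x = np.array([0.0, 1.0, 3.0, 4.0])
z = np.array([0.5, 0.7, 0.2, 0.1])
est, var = simple_krige(x, z, 1.0)        # at a datum: exact value, ~zero variance
far, far_var = simple_krige(x, z, 100.0)  # far away: the mean (0), full variance
```

Leaving each datum out in turn and re-estimating it with this function is exactly the leave-one-out cross-validation practice described above.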

      3.) There is a lot of speculation without the paper in hand, but those are the breaks. In commenting I recognize the limitation, but prefer to view the glass as half full. The authors’ comments here are certainly helpful.

      4.) BTW, a general caveat, an extra precaution when looking at any contoured results: all spatial estimation techniques have tendencies toward characteristic anomalies that show up in the final product. Practiced eyes are needed.

    • NW, mwgrant,

      I think I understand what they did, at least as well as a (statistical) layman can. My question was more rhetorical, actually. I was pointing out that they are verifying one set of assumptions by testing them against another. Their assumption is that they can cross-validate their model using data from an area with a totally different climate.

      I think it’s nonsense.

      You can make educated guesses all day long. And you may have more confidence in one kind of guess than another. But to claim such precision based on a method depending on so many unverifiable assumptions is ludicrous, particularly in the context of a push for the public policy of decarbonization.

      • The billions being spent on “climate science” should be being spent on getting actual data. Instead, we are funding an army of “scientists” who use statistics in various forms to “analyze” other people’s data.

        If you actually think the entire population is at risk from global warming, that is a stupid way to do business. But hey, at least you get your funding and don’t have to even leave your office while telling the rest of the world what the average temperature is in remote regions of the antarctic (and deep sea) – TO WITHIN TENTHS OF A DEGREE.

        I ask again. Does anybody really believe this stuff? (“You can keep your massaged temperature reports if you like them. Period!”)

      • GaryM, Why don’t you ask Cowtan and Way how much dedicated climate science funding they got? And they don’t even live in your country so whose tax money are you complaining about?

      • WHUT,

        My last comment was not just about them specifically. It was about your whole industry – a black hole of taxpayers’ money throughout the developed world.

        “Why don’t you ask Cowtan and Way how much dedicated climate science funding they got?”

        As to the SS contributors funding, this is what they say about their work on the SS site:

        “Earlier in the year, Skeptical Science ran an appeal to fund the publication of the Cook et al Consensus Project paper. The required funds were raised in less than a day, a powerful example of citizen-science in action. Our new paper ‘Coverage bias in the HadCRUT4 temperature record’ is somewhat different from the consensus paper: it is not a Skeptical Science project, and the primary audience are the users and providers of global surface temperature data.

        As a spare time project neither Robert nor myself have academic funds which can be legitimately contributed to making this paper open access. In the light of your generosity last time, we would like to ask you to help crowd-fund making our paper open-access and freely available to the general public.”

        Unless I completely misread the synopsis of their paper above, their work is a reanalysis of government-funded data, which might well have been done on their own time. I do wonder who paid for the computers they used to do this yeoman’s work in favor of government control of the world energy economy.

        Not to mention, finding out they are members of the SS troop of Klimate Keystone Kops, makes me wonder how anyone is surprised that they “found” that warming was greater (where no one has measured it) than was thought before.

      • Now that I read it, their comment that “neither Robert nor myself have academic funds which can be legitimately contributed to making this paper open access”, begs the question of what “legitimate academic funds” (again, dedicated or otherwise) were used in doing this bit of Skeptical Science propaganda.

      • The more I think about the funding issue WHUT raised, the curiouser I become.

        “Our new paper ‘Coverage bias in the HadCRUT4 temperature record’ is somewhat different from the consensus paper: it is not a Skeptical Science project, and the primary audience are the users and providers of global surface temperature data.”

        It’s not a Skeptical Science funded project, and its intended audience are the “users and providers” of the temperature reports they have reanalyzed.

        So now I am curious how this was funded. Was this back door funded in some way by NASA and UEA because they are unhappy with their own results?

        Who paid for the computer time and other expenses of this piece of “citizen science”?

      • GaryM, “So now I am curious how this was funded. Was this back door funded in some way by NASA and UEA because they are unhappy with their own results?”

        Can’t people just do stuff for the challenge?

      • captdallas,

        Absolutely. And if they funded their work, whatever it might have cost, more power to them.

        In fact, even if they were funded by NASA and UEA directly, for the express purpose of finding what they “found”, that would not make their analysis wrong (I think there are plenty of other reasons to make that argument).

        But given the politicization of the debate, I think it s fair on all sides to at least know about funding.

        In litigation, the experts of both sides are paid. That does not disqualify them. But the jury is entitled to know who paid, and how much, to take that into account as a factor in evaluating the testimony.

      • GaryM,
        Talking about funding, there is another recent and equally interesting peer-reviewed climate science paper written by Caldeira and Myrhvold [1]. I blogged about it the other day:
        http://contextearth.com/2013/11/13/simple-models-of-forced-warming/

        Ask Myrhvold where he gets his funding — THE Nathan Myhrvold , founder of Microsoft Research and likely billionaire through his patent portfolio company Intellectual Ventures.

        I don’t think he needs any outside funding. Pretty cool, eh?

        BTW, I think their paper is spot on.

        [1] K. Caldeira and N. Myhrvold, “Projections of the pace of warming following an abrupt increase in atmospheric carbon dioxide concentration,” Environmental Research Letters, vol. 8, no. 3, p. 034039, 2013.

      • Some of this geoengineering funding is coming from Bill Gates.

        Is that OK with you, that billionaires set the direction of research?

  44. Imagine if banking used data in this manner.

  45. A comment or ten on observational data. I have a machine that does BP and pulses. I am very happy with the pulse measurement. It is always accurate; I trust it implicitly.
    BP, on the other hand, is inherently unreliable. It is dependent on the cuff being placed over the artery in the same place every time. It is dependent on the size of the arm and the size of the cuff being used. It changes with the time of day, minute by minute, and with the stress levels of the patient and of the measurer. Mechanical measurements are technically more precise than human measurements, but when they are wrong they are extremely wrong (high or low).
    Fudge factors creep in all the time, with readings rounded down to zeros and fives, and with whether the patient needs to have good levels for the BP trial drug or not.
    This paper has all the attributes of a good BP trial paper.
    The data all fits the assumptions made by previous models.
    It is a confirmational paper par excellence with not one iota of doubt allowed to creep in.
    I feel very sorry for Steve; it is very tough when people you have worked with use methods that you would like to agree with and produce results like this.
    Gee , they even produced one result that agreed with his!
    Random errors?, Systemic errors?
    Unknown unknowns?
    But when all the changes go one way, remember the Ponzi schemes, Steve.
    By the way, for all the people with BP problems out there: high BP could kill you in 30 years, but you can afford to wait and check it out with minimal risk before taking BP medication, which is wonderful stuff but can have a lot of nasty side effects, even worse if not needed.

    • angech ” BP medication , which is wonderful stuff but can have a lot of nasty side effects, even worse if not needed.”

      That should be posted over every clinic door. Borderline high normal is definitely not worth the risk.

      • Borderline high normal is definitely not worth the risk.

        My feeling exactly (140/90) before I had a heart attack in 1992. After my quintuple bypass I decided to follow my cardiologist’s advice to take the medication. The only side effect I’ve noticed after 20 years is a big reduction in BP (typically 115/70). My heart muscle remains damaged from the attack however, which might not have happened if I’d followed the earlier advice to take the medication. How was I to know? I didn’t want to be on medication the rest of my life, but that can be a self-fulfilling prophecy if you don’t survive the attack. YMMV.

      • Mine was a little lower, 135/86, but they wanted me to start anyway even with a good EKG. Turns out that my slight elevation was due to chronic dehydration, living in the tropics and all, and the HCTZ water pill aggravated that into a remarkably painful DVT. At the same time a friend took a required physical and they started him on BP meds with HCTZ so he could meet the training program BP guidelines. He lost all his teeth due to chronic dehydration and tended to pass out a lot.

        So I guess it is about a toss up.

  46. R. Gates aka Skeptical Warmist

    All the talk of how much the oceans have warmed and how accurate the past data is is rather secondary: they are in trouble, today, and if they are in trouble, we are in trouble:

    http://phys.org/news/2013-11-scientists-hot-sour-breathless-oceans.html

    These inconvenient facts do not sit well with those who think humans can have no significant effect on planet Earth. If the facts related to the slow human-caused death of the ocean don’t mesh with your meta-memeplex, perhaps you need to do some house cleaning.

    • It doesn’t seem consistent to me. More upwelling causing more mixing is preventing the Pacific from warming. More stratified with less mixing causing loss of oxygen and lower pH.

      • R. Gates aka Skeptical Warmist

        Steven,

        You’ve got your physics upside down on this one. Nothing is causing the Pacific to not warm, because it is warming, along with the rest of the global ocean. Where did you get this “more upwelling” idea?

      • R. Gates, probably just some silly rumor.

    • R Gates,

      I was a little worried about hot sour breathless oceans.

      I was initially concerned that 540 “international scientists” had apparently
      contributed to an unidentified UN report – 26 pages long! According to a journalist, that is.

      No references, vague references to a theory that squid might migrate along with changes to ocean conditions, and a smattering of “it’s really, really, bad” future facts.

      It is obvious, that if the atmosphere, aquasphere, lithosphere, and so on, behave chaotically, then at any given time, extremely small inputs from human activity may result in large excursions of any given parameters.

      This is called “change”. Many people don’t like “change”.

      If you are really concerned, you might like to form a society for the abolition of “change”, and agitate for laws against “change”.

      Natural “change” has managed to wipe out more than 99% of all species that have ever existed on Earth, without our help. Other than exterminating the human race, I can’t really see a way of not causing “change”. Laws don’t seem to work. People seem to like water, food, shelter, electricity, etc.

      But good luck anyway. You’ll need it.

      Live well and prosper,

      Mike Flynn.

    • R. Gates

      You write (rather “unskeptically”):

      they [the oceans] are in trouble, today, and if they are in trouble, we are in trouble

      Citing a scaremongering blurb by Seth Borenstein about “hot, sour, breathless oceans”.

      Such rubbish, Gates – I’m shocked that you, as a self-proclaimed “skeptical warmist” would cite such BS.

      The global ocean is supposed to have warmed by 0.05C over the past decade (since ARGO was installed), after correcting raw ARGO data that showed net cooling.

      And the pH is still very much in the basic (not “sour”) range, due to its enormous buffering capacity.

      And “breathless”?

      Gimme a break, Gates.

      If you want to call yourself a “skeptical warmist”, don’t cite every scaremongering rubbish blurb out there.

      It makes you look silly (even if you aren’t)

      Max

    • Gates- Max noticed that you are dodging my question. He thinks it is because you dont know the answer. I guess I will have to wait for a real expert to address the issue I raised. I am going to wait until The Chief returns.

  47. “Surface warming has slowed somewhat, in large part due to more overall global warming being transferred to the oceans over the past decade. However, these sorts of temporary surface warming slowdowns (and speed-ups) occur on a regular basis due to short-term natural influences.”

    I have seen similar comments many times but I find myself perplexed.

    What exactly are the physical processes that would heat the atmosphere for 30 odd years and then suddenly change to stop heating the atmosphere and heat the oceans instead?

    I find this extremely puzzling. This is, I think a very serious question but I have not seen the mechanism explained.

    So what is the cause of the change?

  48. R. Gates aka Skeptical Warmist

    Hugh,

    The net flow of energy is always from ocean to atmosphere. You understand that part, right? As the atmosphere warms, the net flow of heat from ocean to space slows down slightly, allowing the oceans to warm. This in turn may cause the atmosphere to not warm as fast (though it will still warm), but that really depends on how fast GH gas concentrations continue to rise.

    • OK, that makes sense. But wouldn’t this manifest itself as a slow decrease in the rate of heating rather than the abrupt change we have seen?

    • Addendum … if this is true, wouldn’t it also mean that the oceans, which have a heat capacity many orders of magnitude greater than the atmosphere’s, act to dampen the rate of temperature increase? My understanding (and it’s been 35 years since I took physics as an undergraduate) is that temperature is a state variable and energy has a magnitude (I know the terminology is not right here, but as I said it’s a long time since I studied this) … Couldn’t most of the heat go into the oceans and mitigate the effects of global warming?

    • R Gates,

      You wrote : –

      ” . . . the net flow of heat from ocean to space slows down slightly, allowing the oceans to warm . . .”.

      Absolute nonsense. A body experiencing a net loss of energy does not warm. If the rate of loss slows, the rate at which the temperature falls also slows.

      You cannot warm anything by reducing the rate at which it cools. If you don’t believe me, try it.

      In typical Warmist fashion, you will no doubt claim that what you wrote really means something different. It doesn’t work anymore.

      Try something else!

      Live well and prosper,

      Mike Flynn.

      • Thanks Mike, let me have a try at mucking it up.
        Gates says the ocean sends heat to the atmosphere, which heats up; obviously the ocean was hot and now the air is too. Heat from the ocean into the air leaves the ocean cooler.
        The hot atmosphere radiates more heat into space and cools down overnight, but the hot ocean pumps more heat into the air, thus cooling down more.
        Oh damn, he said the ocean was heating up from its heat loss. I must have it wrong.

      • R. Gates aka Skeptical Warmist

        You are incredibly wrong about your physics, Mike Flynn. Neither the oceans nor the atmosphere are seeing a net loss of energy. Both are warming, but the oceans are simply warming much faster and by greater amounts, being the primary heat reservoir of the planet.

      • R. Gates

        Try thinking “skeptically” before you answer this.

        If the ocean is gaining heat (warming up) since ARGO started in 2003, why is the sea surface temperature losing heat (cooling) over this same period?

        Max

      • R. Gates

        Let’s ASS-U-ME that you are right.

        The oceans are absorbing the AGW heat that is not being seen in the atmosphere.

        So, instead of a significant fraction of a degree global (atmospheric) warming (which could become unpleasant if it continued indefinitely) we have a few thousandths of a degree of ocean warming, disappearing forever into the deep blue sea, where it affects no one.

        This is good news, indeed!

        Let’s do a quick sanity check on that.

        If CO2 levels rise to 650 ppmv by 2100, we would see around 2.2ºC warming from today, using IPCC’s latest mean ECS estimate of 3ºC, assuming the heat all goes into the atmosphere and assuming equilibrium is reached. This equals a forcing of around 7.5 W/m^2.

        But if this warming goes into the ocean instead, it would hardly be noticed.

        The mass of the atmosphere is about 5,140,000 Gt and its specific heat is about 1,000 J/kgºC.
        The mass of the upper ocean is about 637,000,000 Gt and its specific heat is about 4,000 J/kgºC.

        So the same amount of energy would warm the top 2000 meters of ocean by 0.004ºC.
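A quick script to check that arithmetic, using the masses and specific heats quoted above:

```python
# Same energy budget spread into the atmosphere vs the upper 2000 m of ocean,
# using the figures quoted in the comment (Gt converted to kg).
m_atm = 5.14e6 * 1e12     # kg, mass of the atmosphere (~5,140,000 Gt)
m_ocean = 6.37e8 * 1e12   # kg, upper ocean (~637,000,000 Gt)
cp_atm, cp_ocean = 1000.0, 4000.0   # J/(kg C)

energy = m_atm * cp_atm * 2.2             # joules to warm the atmosphere 2.2 C
dT_ocean = energy / (m_ocean * cp_ocean)  # same joules spread into the ocean
print(f"ocean warming: {dT_ocean:.4f} C")
```

With these inputs the result is a few thousandths of a degree, consistent with the 0.004ºC figure in the comment; whether the heat would actually spread uniformly through that depth is the separate physical question argued below.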

        Doesn’t sound like a catastrophe for anyone, Gates, even all those little fishies down there.

        Isn’t that good news, Gates?

        Try being a bit more skeptical, before you get all alarmed.

        Max

      • R. Gates

        That’s the upper 2000m of the ocean, of course.

        Max

      • Matthew R Marler

        Mike Flynn: Absolute nonsense. A body experiencing a net loss of energy does not warm. If the rate of loss slows, the rate at which the temperature falls also slows.

        You cannot warm anything by reducing the rate at which it cools. If you don’t believe me, try it.

        Why is it so hard to understand that the sun warms the earth every day and the earth cools at night; and if you decrease the rate of cooling the net effect is to have a slightly higher temperature result from the daily warming?

        What you wrote would be true only if there were no heat input to the earth, an obviously counterfactual conditional.
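Marler's reasoning (steady daytime input plus a slower cooling rate settles at a higher temperature) can be seen in a toy day/night model. The numbers are arbitrary, not physical:

```python
# Toy day/night cycle: fixed heat input each "day" step, Newtonian cooling
# every step. A smaller cooling rate k settles at a HIGHER temperature for
# the same input -- reduced cooling plus ongoing input means net warming.
def settled_temp(k, heat=1.0, cycles=2000):
    T = 0.0
    for _ in range(cycles):
        T = (T + heat) * (1.0 - k)   # day: heat in, then cool
        T = T * (1.0 - k)            # night: cool only
    return T

assert settled_temp(k=0.05) > settled_temp(k=0.1)
```

The fixed point is heat * (1-k)^2 / (1 - (1-k)^2), which grows as k shrinks; the body never experiences a net loss of energy at equilibrium, it just equilibrates warmer.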

      • Max, being a chemical engineer and all, does not understand how thermal diffusion works. The heat content will never spread uniformly throughout the depths; the diffusive source will have a higher temperature than anywhere else. What this means is that the SST and subsurface layers will always maintain a higher temperature than the depths. The 0.004C is a canard by the deceptive ChemE, and he likely in fact knows this but prefers to spread FUD.

      • -Mike Flynn: Absolute nonsense. A body experiencing a net loss of energy does not warm. If the rate of loss slows, the rate at which the temperature falls also slows.

        You cannot warm anything by reducing the rate at which it cools. If you don’t believe me, try it.-

        “Why is it so hard to understand that the sun warms the earth every day and the earth cools at night; and if you decrease the rate of cooling the net effect is to have a slightly higher temperature result from the daily warming?”
        If on Monday the day is 90 and the night is 70, and Tuesday it’s 95, then the night will be
        warmer than 70. But if Tuesday is 85, then the night could be cooler than 70.

        With a body of water there is less variation in temperature. If the average temperature of the upper surface is cooler, then the lower part doesn’t warm as
        fast.
        But if the upper water is being cooled by mixing with lower water, then the lower water warms while the top cools.
        So generally, warmer lower water and cooler upper water should tend to indicate an increase in mixing of the water.

      • Webby

        You are confused again (as usual).

        If we take only the top 700m of the ocean, the GH warming by 2100 would be 0.01°C. (Yawn!)

        And, hey, isn’t all that missing heat supposed to be disappearing into the deep blue sea?

        How does it get way down there if it doesn’t first warm the upper ocean?

        It’s the old shell and pea con game, Webby – now you see it, now you don’t.

        Max

      • Matthew R Marler,

        Place an object on the surface of the Earth. As the earth rotates, the object will start absorbing energy from the Sun. As the object is rotated “away” from the Sun, its temperature will commence to drop. This will continue for around 18 hours, depending on latitude, season, local weather conditions and so on.

        The object will reach a maximum temperature in unconcentrated sunlight of less than 100 C, regardless of whether the object is in Death Valley or the Libyan desert.

        I don’t make the rules. Nature does. The Pantheon in Rome has been absorbing sunlight for around 2000 years. The gelato shop across the road has been absorbing sunlight for around 50 years. The temperature of both is indistinguishable at dawn.

        Live well and prosper,

        Mike Flynn.

      • How does it get way down there if it doesn’t first warm the upper ocean?

        Typical denialist bilge. Just because a pipe is flowing faster doesn’t mean there’s more of what it carries in the middle lengths.

        Folks want an example of a skeptic calling out a “skeptic” on BS? Hear! Hear!

      • AK,

        Pipes, water – more stupid, irrelevant analogies.

        You might care to provide a succinct physical explanation telling us all how photons avoid interacting with matter for several hundred meters.

        Then after the photons have interacted with water several hundred meters down, explain why the now warmer water remains at depth, rather than becoming less dense and rising towards the surface.

        You are talking nonsense, I think. I await your explanation. I am always willing to learn. Please, no more silly analogies or links to Warmist tracts.

        Live well and prosper,

        Mike Flynn

      • “A body experiencing a net loss of energy does not warm.”

        One of my gripes with the warmists is their claim that they know there is currently a net gain of energy in our climate. They don’t know.

        So how precisely do you know that we are currently experiencing a “net loss of energy”? Where are the measurements of energy in and out? Where are the measurements of actual total global climate heat content?

        All the talk of measurements, experimentation, and verifiability used against CAGWers’ claims of warming is equally relevant to anyone claiming they “know” the Earth’s climate as a whole is cooling.

      • GaryM:

        The measurements are here:
        http://www.clivar.org/sites/default/files/GSOP/resops/DISCUSSION_II_LOEB.pdf
        On page 13.
        Placing your money on CERES TOA, you’d have net losses at TOA over the last 13 years, but they are slight. If atmospheric temps were flat, the oceans are not warming. I don’t know what the error bars are, so maybe they are warming a bit. The source at the link should be acceptable.

      • GaryM wrote:
        Where are the measurements of energy in and out?

        Is this a joke?

        You aren’t keeping up with the scientific literature. Try Loeb et al, Nature Geoscience 2012. Trenberth et al, BAMS 2009. Or “An update on Earth’s energy balance in light of the latest global observations,” Graeme L. Stephens et al, Nature 2012: http://www.nature.com/ngeo/journal/v5/n10/full/ngeo1580.html

      • GaryM,

        “Real” scientists such as geophysicists measure heat loss, and then try to figure out where the heat is coming from.

        There are differences about both the amount, and the sources. Real scientists admit they have no way of knowing the temperature of the core, the amounts of heat generated as a result of radioactive decay, and all the rest.

        There doesn’t seem to be any dispute that the Earth is cooling, amongst “real” scientists.

        I could be wrong.

        Live well and prosper,

        Mike Flynn.

      • David Appell,

        From the article you cited (which I believe was a subject of a post here at Climate, Etc. previously):

        “The combined uncertainty on the net TOA flux determined from CERES is ±4 W m−2 (95% confidence) due largely to instrument calibration errors. Thus the sum of current satellite-derived fluxes cannot determine the net TOA radiation imbalance with the accuracy needed to track such small imbalances associated with forced climate change.”

        Yes, there are measurements of energy in and out. But not measurements that tell you what the energy balance is.

        Like virtually everything else in climate “science”, the energy imbalance is the product of models. It is just (falsely) represented to be the product of precise measurements by satellites.

      • It’s hard to explain, but changes in surface temperature plus the rate of change of ocean heat content are an independent way of getting at the energy imbalance. The long-term rise in ocean heat content is due to an energy imbalance, which means the surface warming isn’t keeping up with the changing forcing.

      • Jim D,

        It’s not hard to explain. You are just describing one of the ways in which the supposed energy balance is inferred. My comment above was in reference to the suggestion that the TOA energy imbalance has been measured by satellite. Everything I have read indicates the instruments involved are not sensitive enough to measure the AGW effect on the difference between ingoing and outgoing radiation.

        Virtually everything involved in CAGW – observations, models, and paleo – depends on models, statistics and assumptions. Nothing wrong with that, but it seems to be a genetic trait of the species Warmus Advocatus to fudge over the nature of the “science” that underlies their assertions of certainty.

      • The satellite-derived energy imbalance has bigger error bars than surface/ocean-derived estimates, but they are consistent with each other.

      • Matthew R Marler

        Mike Flynn: Place an object on the surface of the Earth. As the earth rotates, the object will start absorbing energy from the Sun. As the object is rotated “away” from the Sun, its temperature will commence to drop. This will continue for around 18 hours, depending on latitude, season, local weather conditions and so on.

        The object will reach a maximum temperature in unconcentrated sunlight of less than 100 C, regardless of whether the object is in Death Valley or the Libyan desert.

        I don’t make the rules. Nature does. The Pantheon in Rome has been absorbing sunlight for around 2000 years. The gelato shop across the road has been absorbing sunlight for around 50 years. The temperature of both is indistinguishable at dawn.

        All well and good, but that has nothing to do with what you wrote and what I wrote in response.

    • Mike Flynn, I wish you would drop that – of course you can warm a body (or a system) by reducing the rate at which it cools. This is not debatable!

      • R. Gates aka Skeptical Warmist

        Mike Flynn can’t help himself. The memeplexes battling in his head are in conflict, and so the debris from this battle ends up spewing across CE blog posts.

      • Edim

        Ya gotta “reduce the rate at which it cools” to less than a net zero for your statement to be true.

        Believe that’s what Mike is saying.

        Max

      • simon abingdon

        “This is not debatable!” If you think (like normal people) that “to warm” means to increase the temperature of, rather than merely moderate the rate of cooling of, then I agree it needs no debate, because Edim, you’re clearly just plain wrong. Sorry.

      • R. Gates, Skeptical Warmist

        Edim is correct in the basic thermodynamics. If the oceans are transferring X+1 w/m^2 of energy to the atmosphere per second, but receiving X w/m^2 per second from solar SW, then the gain in ocean heat content is 1 w/m2, even though they are transferring energy to the atmosphere, and some might call this “cooling” but from a thermodynamics perspective, the transfer of energy, being greater in than out, is not actually cooling. Seems like a great many people can’t quite fathom this basic concept.

      • R. Gates, Skeptical Warmist

        Actually, the oceans are receiving X + 1 w/m^2, but transferring out X w/m^2, and thus gaining 1 w/m^2…duh, seems basic math escapes me.

      • Edim,

        I hope you are attempting sarcasm by writing: –

        “Mike Flynn, I wish you would drop that – of course you can warm a body (or a system) by reducing the rate at which it cools. This is not debatable!”

        If you are not, may I respectfully suggest you attempt to stop anything at all cooling, by reducing the rate at which it is losing energy.

        Unless you can demonstrate a circumstance where losing energy results in an increase in temperature, I win. “Increasing temperature by reducing the rate of cooling” is Warmist nonsense. Make a cup of coffee. Now, stop it cooling by reducing its rate of heat loss. Even if you surround it with a perfect insulator, (physically impossible), its temperature cannot rise.

        You may debate it, but neither you nor anybody else can do it. Neither can Nature, it appears.

        Live well and prosper,

        Mike Flynn.

      • R. Gates

        It appears that there is a typo in your last comment.

        You are saying that the net heat into the ocean from the sun = X

        And the net heat out of the ocean to the atmosphere = X+1

        And yet you claim the ocean is warming by 1?

        You cannot be serious.

        Max

      • R. Gates

        Thanks for correcting the typo.

        (I didn’t think you really meant what you wrote.)

        Mike Flynn is correct. You cannot warm an object by reducing the rate at which it is cooling – UNLESS you reduce the cooling rate to less than 0 (in other words, warm the object).

        That’s what your corrected comment also shows.

        Max

      • I would think you certainly can warm a system by reducing one of the means by which it cools, over a period of time. Put two pots of water on a stove with the same heat. Put a lid on one pot. The pot with a lid will warm faster than the pot without (all other things being equal).

        For the period in which the heat of the lidded pot is higher than the un-lidded one, that increased heat over that period of time is caused by the lid, the reduction in cooling.

        It does not work without the exterior heat source. But when you are talking about a system, you can cause a rise in net heat, i.e., net warming, by reducing the escape of heat, i.e., cooling.

        Put it another way. You are standing in a vertical tube. One pipe is letting water in, another is letting water out. You can raise the level of the water by monkeying with either pipe. If you slow the flow out of the outlet pipe too much, and drown, it will be little consolation that only the other pipe was adding water.
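        The pot-with-lid argument above can be sketched as a toy linear energy balance, dT/dt = (P − kT)/C: with the heat input P fixed, reducing the loss coefficient k (the “lid”) raises the steady-state temperature T* = P/k. All parameter values below are made-up illustrations, not measurements from any comment in this thread:

```python
# Toy linear energy balance: dT/dt = (P - k*T) / C.
# With heat input P fixed, a smaller loss coefficient k (the "lid")
# gives a higher steady-state temperature T* = P / k.
# All numbers are illustrative, not physical measurements.

def steady_temperature(P, k):
    """Steady state of dT/dt = (P - k*T)/C, which is P/k, independent of C."""
    return P / k

def simulate(P, k, C=10.0, T0=20.0, dt=0.01, steps=20000):
    """Euler integration of the toy balance, starting from temperature T0."""
    T = T0
    for _ in range(steps):
        T += dt * (P - k * T) / C
    return T

P = 100.0                      # fixed heat input (arbitrary units)
no_lid = simulate(P, k=2.0)    # approaches 100 / 2.0 = 50.0
with_lid = simulate(P, k=1.5)  # approaches 100 / 1.5 ~ 66.7

print(no_lid, with_lid)
```

        Both pots lose heat the whole time; the lidded one still ends up warmer, which is all that “warming by reducing the rate of cooling” means here.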

      • Gary M

        You wrote (bold type by me):

        I would think you certainly can warm a system by reducing one of the means by which it cools, over a period of time.

        Yep.

        As long as you are warming it with other means, so that the net heat flow is in.

        If, however, an object is cooling (Mike Flynn’s example), you cannot warm it by reducing the net cooling rate, unless you reduce the net cooling rate to less than zero (= warm it).

        But we have beaten this dog to death.

        Flynn was right.

        Max

      • manacker,

        Thanks for the support.

        Obviously, too much time spent reading the Book of Warm rots the brain of the Warmist concerned.

        I’m starting to understand the language, I think. Cooling means warming.

        Live well and prosper,

        Mike Flynn.

      • R. Gates aka Skeptical Warmist

        Max,

        Mike Flynn is nowhere close to being correct on this issue. The ocean is gaining energy faster than it is losing it to the atmosphere. This is simple thermodynamics, can be illustrated by numerous examples, and either Mike is being intentionally obtuse, or he really doesn’t understand basic thermodynamics.

      • max,

        I missed the assumption that the object was experiencing net cooling. I assumed the subject was our climate, for which, in my opinion, no one knows whether it is currently experiencing net heating or cooling.

    • R. Gates aka Skeptical Warmist

      Hugh, you are right – the oceans are buffering the atmosphere from temperature increases as large as we might otherwise see with rising GH gas concentrations. But also remember the natural variability in ocean-to-atmosphere energy transfer caused by things like the ENSO cycle. During El Niños a bit more energy is transferred to the atmosphere, and during La Niñas a little less, though across all cycles and all timeframes it is always very positive, as over 50% of the net energy in the atmosphere at any given time came directly from the ocean, and of course the ocean receives its net energy directly from the sun.

  49. David Springer

    So the Arctic is warming faster than we thought just last week.

    It was already warming far faster than models had predicted. Now it makes the models even more wrong about Arctic warming. The models are broken. This puts another nail in the GCM coffin. When do we get around to burying them? How wrong do they have to be before the team admits there is a problem?

  50. This is probably the right time to remind you all that a global temperature anomaly is not a very meaningful metric: A temperature increase of 0.1 degrees Celsius in the Arctic desert (especially in winter!) represents far less excess energy than a temperature increase of 0.1 degrees Celsius in a humid tropical forest.

    Have there been any attempts at all to measure something like a “global enthalpy anomaly”?

    • ” Has there been any attempts at all to measure something like a “global enthalpy anomaly”?

      Yes, it is very easy to do that as a first-order check. Add the amount of heat that is entering the ocean (dH) and compare that to the Planck response of SST (dW), and the numbers jibe.

      dH + dW(ocean) + dLatent(ocean) ~ dW(land) + dLatent(land)

      There is some correction for latent heat of evaporation that impacts the lapse rate differently for ocean versus land which I show in the equation. Latent is tricky because whatever cooling it provides at the surface is a warming at higher altitudes when it condenses out.

  51. Judith Curry, you assert, based on the graphic at the bottom of your article:

    The bottom line remains Ed Hawkins’ figure that compares climate model simulations for regions where the surface observations exist. This is the appropriate way to compare climate models to surface observations, and the outstanding issue is that the climate models and observations disagree.

    Apparently, you allege a disagreement between measured temperatures and model simulations because we can see in the graphic that the measured temperatures are currently at the boundary of the 95% confidence interval of the model simulation sample. Please correct me if I misunderstand you here.

    Does the graphic account for differences between the time variability of the forcings from the climate drivers in the real world and the time variability of the forcings from climate drivers, ideally prescribed after 2005 in the CMIP5 scenarios? This is an important question, because even a perfect model would show a disagreement between the statistical properties of the simulated climate and measurements in such a case.

    Anyway, regardless of whether such differences between real-world forcings and CMIP5 forcings have been accounted for in the graphic or not, a 95% confidence interval means that 1 out of 20 data points of the probability distribution lies outside of the 95% interval. Thus, if measured temperature and model simulations have exactly the same probability distribution, i.e., when they statistically agree perfectly, one out of 20 data points of the measured temperature must lie at or outside of the 95% confidence interval of the model simulation sample on average. This would happen in clusters due to autocorrelation, though.

    Therefore, I do not see that your assertion is supported by the graphic you show, because the graphic shows only that there are some instances when this happens, as one should expect. Your assertion of a disagreement between measurements and model simulations, based only on the fact that you find an instance, like the current one, where the measured temperature lies at the 95% boundary, rests on logically and statistically fallacious reasoning.
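    The 1-in-20 point is easy to check numerically. Below is a minimal “perfect model” Monte Carlo sketch, using synthetic Gaussian numbers rather than CMIP5 or HadCRUT4 data: observations drawn from exactly the same distribution as a model ensemble still fall outside the ensemble’s 2.5–97.5 percentile band about 5% of the time.

```python
# Perfect-model Monte Carlo: draw "observations" from the same
# distribution as the model ensemble and count how often they land
# outside the ensemble's central 95% band. Synthetic data only.
import random

random.seed(42)
n_years, n_members = 2000, 1000

outside = 0
for _ in range(n_years):
    ensemble = sorted(random.gauss(0.0, 1.0) for _ in range(n_members))
    lo, hi = ensemble[24], ensemble[974]   # ~2.5th / 97.5th percentiles
    obs = random.gauss(0.0, 1.0)           # drawn from the same distribution
    if obs < lo or obs > hi:
        outside += 1

print(outside / n_years)   # roughly 0.05
```

    So an observed point sitting at or beyond the 95% boundary now and then is exactly what a statistically perfect model would produce; judging a cluster of such points requires accounting for autocorrelation, as noted above.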

  52. Judith Curry writes:

    So I don’t think Cowtan and Way’s analysis adds anything to our understanding of the global surface temperature field and the ‘pause.’

    This alleged pause that has never been defined based on precise scientific (statistical) criteria, and that, perhaps, has never been?

    • This alleged pause that has never been defined based on precise scientific (statistical) criteria, and that, perhaps, has never been?

      The alleged pause is defined with the same data, terms and criteria as the alleged rise. goose.gander.good

        Well, if so, then the global surface temperature rise since 1970, or over any longer time period since the 19th century, is real, whereas the alleged pause isn’t. The former is highly statistically significant. In contrast, the alleged pause lacks statistical significance.

    • Jan Perlwitz

      Here are three papers from the Met office on the pause

      http://www.metoffice.gov.uk/research/news/recent-pause-in-warming

      I was there talking to several of their scientists two weeks ago and they certainly acknowledge the pause, but as yet have not determined its exact cause other than to suggest heat is now entering the ocean rather than staying in the atmosphere. What the mechanism is for that change will no doubt be the subject of more papers

      tonyb

      • I know these publications, but I didn’t find any definition of the alleged pause based on scientific (statistical) criteria in there. Perhaps I missed it and you could point me to it?

      • Jan

        I posted three Met office reports that go into great detail on the pause. THEY admit there is one. Their SCIENTISTS admit there is one. The data presented shows there is one.

        Perhaps you can clarify where you disagree with their analysis as I thought that was the job of sceptics?

        Just for good measure, here is an example of a temperature record – CET – that is actually showing a substantial decline, which prompted a full-scale meeting at the Met Office last July to talk about this and the pause

        http://www.metoffice.gov.uk/hadobs/hadcet/

        As you know, CET is a reasonable but not perfect proxy for Northern Hemisphere temperatures, but it does not have the noise introduced by tens of thousands of climate records all doing different things – going up, going down or remaining static.

        How significant the pause turns out to be we shall have to wait and see. In the case of CET it reverses some 320 years of steadily rising temperatures and seen in historic context may turn out to be merely another blip in this very long record.
        tonyb

      • tony b and Jan P. Perlwitz

        The “pause” in the “globally and annually averaged land and sea surface temperature anomaly” (HadCRUT4) is observed in the HadCRUT4 record.

        It is a “physical observation” (warts and all). As such it represents “empirical evidence” that the global surface temperature is not warming at present.

        Its “statistical significance” is a subjective premise.

        Its “definition based on scientific (statistical) criteria” is an even more nebulous concept.

        Max

      • Ask Dr. Cowtan about the pause. He has been putting out stuff on SkS as KevinC and has provided some interesting models of warming via his Trend Calculator and the simple response function model:
        http://diyclimate.x10.mx

        This is DIY stuff and it inspired me to work out the CSALT model.
        http://ContextEarth.com/context_salt_model/navigate

        These models reproduce the pause very readily.

      • Web

        Your link to csalt didn’t work. Please repost.

        tonyb

      • @climatereason:

        I posted three Met office reports that go into great detail on the pause. THEY admit there is one. Their SCIENTISTS admit there is one. The data presented shows there is one.

        So, you are asking me to just uncritically accept what other scientists “admit”, at least when it seems to confirm your views, even though it’s just a fait-accompli assumption.

        Actually, according to the MET Office, the data show:

        “- A wide range of climate quantities continue to show changes. For instance, we have observed a continued decline in Arctic sea ice and a rise in global sea level. These changes are consistent with our understanding of how the climate system responds to increasing atmospheric greenhouse gases.

        – Global mean surface temperatures remain high, with the last decade being the warmest on record.

        – Although the rate of surface warming appears to have slowed considerably over the most recent decade, such slowing for a decade or so has been seen in the past in observations and is simulated in climate models, where they are temporary events.”

        Thus, on the one hand the starting point of the MET Office publications is to take the claimed “pause” as a fait-accompli assumption, but then they come to the conclusion that nothing unusual can be seen in the data, and that climate models simulate similar behavior. Which is obviously not the same as what, e.g., Judith Curry says when she is making her assertions about the “pause”.

        If every temporary wobble in a data series of a climate variable, one which runs opposite to the longer-term trend and is not even statistically significant, is called a “pause”, what’s the point? Making headlines in the Daily Mail?

        I do not have a problem with efforts to also understand short-term variability in the system. On the contrary, it’s necessary. I don’t even have a problem with calling some short-term behavior a “pause” as a working term. But many of the claims about the “pause” that are out there, outside the realm of science, are not really about that, are they?

      • Jan

        Good to see you being sceptical. I am often sceptical of stuff from the Met Office and NASA as well.

        So if there is no ‘pause’ what do YOU think is happening?

        tonyb

      • @manacker:

        No explanation for anything comes from pure “physical observation”. There is always an explanatory framework in everyone’s head, which puts observation in a context, through which the observation is interpreted. I state this in this general form, since such a framework isn’t necessarily a scientific one. For instance, it also could be a framework of superstitious beliefs or one of non-scientific prejudices.

        A statement like,

        “The “pause” in the “globally and annually averaged land and sea surface temperature anomaly” (HadCRUT4) is observed in the HadCRUT4 record.”

        cannot be made by you without any presumption in your head about how such a “pause” is recognized in the data, based on which you state your interpretation of the data.

      • My link to the CSALT model does indeed work.

        The pause is very easily explained by an SOI downward trend over the last 10 years.

        However, since any SOI bias away from a mean index of zero cannot be sustained, this pause is expected to disappear within a few years.

        The other mitigating circumstance is the Cowtan and Way correction. This suggests that the pause is not as flat as we were led to believe.

        I will add the C&W correction to the CSALT model as soon as I can get a hold of their data.

      • Jan P Perlwitz

        The “pause” is an observed slowdown or slight reversal of the observed late 20thC warming trend, which lasted almost 30 years.

        It is “recognized in the data” by all those thermometer readings (even the ones next to AC exhausts in the summer or heated buildings in the winter), and as such has been (reluctantly?) “recognized” by the Met Office, by James E. Hansen and by others.

        Neither the “pause” nor the late 20thC warming cycle can be “explained” with any high certainty, simply because there are still too many unknowns in what makes our climate behave as it does.

        This is even more so for the early 20thC warming cycle and the mid-century cycle of slight cooling, both of which lasted around 30 years.

        If the “pause” lasts as long as these other cycles, then it becomes as significant.

        Max

      • The “pause” is an observed slowdown or slight reversal of the observed late 20thC warming trend, which lasted almost 30 years.

        It is “recognized in the data” by all those thermometer readings (even the ones next to AC exhausts in the summer or heated buildings in the winter), and as such has been (reluctantly?) “recognized” by the Met Office, by James E. Hansen and by others.

        One could define “pause” in this way. Simply as negative deviation from the average longer-term trend. However, then a “pause” could be observed 50% of the time, even if the longer-term trend stays the same, assuming a symmetric probability distribution of the trends with a given period length, based on which the trends are estimated. It would be foolish to not recognize something like this, which is just a feature of the probability distribution.

      • WebHubTelescope,

        Adjusting your CSALT model so soon?

        It’s a good thing you didn’t offer an incorrect model to the IPCC, then. They seem to have enough of their own.

        If your model can’t even accurately forecast the past, what is it useful for?

        Live well and prosper,

        Mike Flynn.

      • If your model can’t even accurately forecast the past, what is it useful for?

        You mean hindcast the past like this model does?
        http://davidappell.blogspot.com/2013/09/a-useful-paper-on-one-models-results.html

      • Jan P Perlwitz

        Good. So we agree.

        The “pause” is real as it is based on physically observed data, we both “recognize” it as such, and we agree that its “statistical significance” is less than that of the late 20thC warming cycle (which lasted over twice as long).

        IF it lasts 30 years, however, its “statistical significance” will be the same (even though the amount of observed cooling may be less than the observed warming during the late 20thC warming cycle).

        Max

      • Good. So we agree.

        No, not really.

        The “pause” is real as it is based on physically observed data, we both “recognize” it as such,

        We don’t agree on the pause. I said one could define it as you proposed, and I said that any trend lower than the average trend of the trend distribution would then be a “pause” – for a symmetric distribution, in 50% of all cases. However, I do not accept such a definition, even if it is possible from a purely technical point of view. It doesn’t make any scientific sense to me to claim a “pause” if such a “pause” isn’t statistically distinguishable from the longer-term, statistically significant warming trend, and to claim the presence of a “pause” even if nothing has changed in the system with respect to continuing global warming and the deviation from the average trend was just due to some random short-term fluctuation.

        and we agree that its “statistical significance” is less than that of the late 20thC warming cycle (which lasted over twice as long).

        You are talking about the warming “cycle” as if it were something from the past. I don’t see any substantial empirical evidence at present for such a claim. The trend is also statistically significant for the most recent 30 years. And why “cycle”? I don’t see just cycles. I see a secular increase. I see some justification for the hypothesis that the secular increase is overlaid with some multi-decadal quasi-cyclic behavior over the course of the 20th century. How much of this is really internally generated, e.g., due to lower-frequency chaotic behavior, or whether the observed pattern is just due to how external forcings (mostly solar, GHG, and aerosols) coincidentally combined, remains to be seen.

        IF it lasts 30 years, however, its “statistical significance” will be the same (even though the amount of observed cooling may be less than the observed warming during the late 20thC warming cycle).

        If the temperature record of coming years became distinguishable from the currently detectable statistically significant warming trend, based on robust statistical analysis, it would change things.

      • David Appell,

        I’m not sure what use a model that forecasts the past is. I use the term advisedly.

        If you believe it has some use, have you considered selling (or giving) it to someone like the IPCC?

        Their models seem to be of no use for anything, except to demonstrate they are worthless. Maybe yours is better.

        Live well and prosper,

        Mike Flynn.

      • Jan P Perlwitz

        Your lengthy waffle surprised me.

        I thought you had recognized that the currently observed reversal of the late 20thC warming cycle was real, as have James E. Hansen and the Met Office.

        But it appears you are still in denial.

        “Tant pis” (as the French say).

        Max

      • I thought you had recognized that the currently observed reversal of the late 20thC warming cycle was real, as have James E. Hansen and the Met Office.

        I am not aware of any scientific publication or other statement where Jim Hansen or the MET Office is supposed to have “recognized” that we are currently observing a “reversal” of the warming observed during the late 20th century, i.e., where they supposedly claimed that something has substantially changed in the physical system compared to the last decades of the 20th century. Especially since you are now talking about the physical process of global warming as a process of continuing heat accumulation in the oceans, land, cryosphere, and troposphere due to the radiative disequilibrium coming from increasing greenhouse gases in the atmosphere – a process which involves much more than just the surface temperature record, and for which the surface temperature is not the most important component or indicator.

        I think you just have made this up about Jim Hansen and the MET Office.

        But it appears you are still in denial.

        You mean I do not accept assertions as true just because you make them, without any scientific evidence available to back them up. If you like to call this being “in denial”, suit yourself.

    • Jan, “This alleged pause that has never been defined based on precise scientific (statistical) criteria, and that, perhaps, has never been?”

      A rose by any name would smell as sweet.

      • The precise definition of the term “pause”, as I’ve understood it by trying to read between the lines of the climate blogosphere, is any period of duration equal to that from January 1998 to the most recent December inclusive (presently 15 years = 180 months) such that the HadCRUT4 temperature trend of that period is less than +0.5 °C/century.

        Since 1967 there has been only one pause by this definition, namely the 15 years starting with 1998, which trended up +0.417 °C/century.

        The 14 years starting 1998 trended up +0.524 °C/century. Hence for there to have been a pause prior to this year one would have to raise the threshold slightly to allow this one.

        The nearest thing to a pause substantially before that was the 15 years starting with 1980 which trended up +0.974 °C/century. The 14 years starting with 1980 trended up +0.892 °C/century.

        This definition of “pause” is precise enough to start an annual pool on whether “the pause” will reach 0 (horizontal trend) or below, to be adjudicated at WoodForTrees after each December’s HadCRUT4 anomaly is announced.

        For this December (16 years) it looks like the trend will be around +0.4 °C/century. Last December (15 years) it was +0.417 as noted above, while the preceding December (14 years) it was +0.524. In contrast the 30 years starting 1974 trended up +2.01 °C/century, considerably stronger than “the pause”.

        At this rate I doubt many people will want to bet on “the pause” reaching 0 for December 2014. 2015 may hold more promise for some though.
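        The adjudication rule described above is mechanical enough to sketch in a few lines: fit an ordinary least-squares trend to a monthly anomaly series, convert to °C/century, and compare against the threshold. A minimal sketch with a made-up series (not actual HadCRUT4 data; `trend_c_per_century` is just an illustrative helper):

```python
import numpy as np

def trend_c_per_century(monthly_anomalies):
    """OLS trend of a monthly anomaly series, in degrees C per century."""
    t = np.arange(len(monthly_anomalies)) / 12.0  # time in years
    slope_per_year = np.polyfit(t, monthly_anomalies, 1)[0]
    return slope_per_year * 100.0

PAUSE_THRESHOLD = 0.5  # degrees C/century, per the definition above

# Hypothetical 180-month (15-year) series warming at 0.4 C/century:
series = 0.004 * (np.arange(180) / 12.0)
trend = trend_c_per_century(series)
is_pause = trend < PAUSE_THRESHOLD
```

        Anyone joining such a pool would of course run this on the real monthly anomalies; the extra decimal places Pratt quotes are exactly the sanity check that everyone’s trend code agrees.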

      • Too many significant figures. Trying to be too precise.

      • > A rose by any name would smell as sweet.

        I guess that depends what you mean by “sweet”, Cap’n:

        In general, roses with the best scents are darker colors, have more petals, and have thick or velvety petals. Red and pink roses often smell like what we term “rose”. White and yellows often smell of violets, nasturtium, and lemon. Orange roses often smell of fruits, violets, nasturtium, and clove.

        http://perrysperennials.info/articles/rosefrag.html

      • @Appell: Too many significant figures. Trying to be too precise.

        The extra digits were in case anyone took my proposal of a pool seriously. They give a sanity check that everyone in the pool is on the same page, i.e. that we’re all using the same data and same formulas for trends. Everyone else only needs the first significant digit, as you rightly point out.

    • Dr. Perlwitz
      The only pause that matters is the one in peak annual insolation, which has been constant for the last 350 years
      http://www.vukcevic.talktalk.net/CET-Jun.htm
      The rest is simply natural variability due to the oceans’ perpetual circulation change.
      http://www.nasa.gov/topics/earth/features/perpetual-ocean.html

  53. Puzzled Scientists Say Strange Things Are Happening On the Sun
    “If so, the decline in magnetic activity could ease global warming, the scientists say. But such a subtle change in the sun—lowering its luminosity by about 0.1%—wouldn’t be enough to outweigh the build-up of greenhouse gases and soot that most researchers consider the main cause of rising world temperatures over the past century or so. ‘Given our current understanding of how the sun varies and how climate responds, were the sun to enter a new Maunder Minimum, it would not mean a new Little Ice Age,’ says Judith Lean. ‘It would simply slow down the current warming by a modest amount.’”
    http://science.slashdot.org/story/13/11/13/0150213/puzzled-scientists-say-strange-things-are-happening-on-the-sun

    When doubling CO2 is less than 1 watt per square meter, why is 0.1% of 1360 watts per square meter small?
    Particularly when we have not doubled CO2, nor are likely to in less than 50 years.
    What if instead of 0.1% it was actually 0.15%? So rather than 1.3 watts, it was 2 watts per square meter less at the top of the atmosphere?
    Anyhow, I agree we aren’t going to enter another Little Ice Age; if nothing else it would probably require a century of such cooling before it resembled the LIA. And it seems we would also need some volcanic eruption exceeding 50 cubic km of ejecta, like we had during the LIA.

    • I just thought of something. The AGW religion holds that “global warming” causes more hurricanes. And there is quite a lot of historical evidence which shows this is not true. So if we assume that cooler conditions cause more hurricanes, and we have more and larger hurricanes when it’s cooler, is this another small cooling effect?

      • gbaikie,

        WHT knows all. He has made a model. It doesn’t work, so he is going to “adjust” it to fit a little bit of the past.

        WHT confuses estimates of albedo with fact.

        He assumes the GHE exists. It doesn’t. He must spend a lot of time reading the “Book of Warm”. He sounds like he actually believes what he writes. Pity him, don’t condemn him.

        Live well and prosper,

        Mike Flynn.

        “The AGW religion holds that “global warming” causes more hurricanes.”

        Wrong.

        Ironic that you call science religion without yourself knowing basic facts.

    • 1360 is not the comparison yardstick, it is closer to 250 w/m^2 after geometry and albedo is figured in.

      And then you screw up on 1, as it should be more like 3.7 or 10 w/m^2 when the water vapor positive feedbacks are factored in. That gives 3C ECS.

      No wonder no one ever responds to your arithmetical drivel gbaikie.

      • WHT

        Rather than getting too concerned about “arithmetic” regarding the impact of the sun on our climate, it would seem wiser to look at the recent past (past 1300 years or so).

        We cannot explain the MWP and LIA with human GH gases, so it leaves other human influences plus natural forcings.

        Other human influences were arguably not very significant prior to industrialization (~1750), so that leaves natural forcing.

        The warmest century of the MWP was (at least) as warm as the past century, and the coldest century of the LIA was arguably around 1C cooler.

        So, in addition to multi-decadal bumps and grinds caused by natural variability, we have 1C either way that was caused by natural factors.

        We know the sun was unusually inactive during the depth of the LIA.

        We also know that it was unusually active during the 20thC.

        And we know that it has become very inactive lately.

        So we have the sun essentially driving our climate for over 1,000 years, including the first half of the 20th century (before there was much GHG influence), with swings of at least 1C.

        So it seems silly to write the sun off as insignificant by only considering changes in the direct solar irradiance.

        Max

      • -1360 is not the comparison yardstick, it is closer to 250 w/m^2 after geometry and albedo is figured in.

        And then you screw up on 1, as it should be more like 3.7 or 10 w/m^2 when the water vapor positive feedbacks are factored in. That gives 3C ECS.

        No wonder no one ever responds to your arithmetical drivel gbaikie.-

        O3 is supposed to be one of the greenhouse gases, and O3 is near the top of the atmosphere.
        Sunlight obviously is the primary factor which “forces” all greenhouse effects, so less sunlight subtracts from, or is the reverse of, your “water vapor positive feedbacks” and all of your stipulated feedbacks.

        “By their percentage contribution to the greenhouse effect on Earth the four major gases are:

        water vapor, 36–70%
        carbon dioxide, 9–26%
        methane, 4–9%
        ozone, 3–7% ”
        http://en.wikipedia.org/wiki/Greenhouse_effect

        But in general, averaging sunlight where it doesn’t shine seems a bit silly to me.

  54. Just posted a reference at RC to Cowtan and Way’s comments here, nothing more than the web address, and it was Bore-holed.

    • This may sound like a stupid question, but in your opinion is that due to not wanting to send anyone to the competition, or are they just boycotting a skeptic site? I noticed they don’t list Climate Etc or WUWT on their blog roll that is headlined: Other Opinions; but Dr Curry only lists her buddies as well. I know she allows links to Real Climate as I’ve done it. My only conclusion would be that they are afraid people might get wise to their propaganda. They don’t want free exchange, just groupthink.

      • The Borehole at RC makes for an interesting read. To their credit, they keep it on line even though it makes them look more intolerant than they probably are. My experience there leads me to believe that bore-holing depends on who the moderator is and his mood at the time. They never borehole remarks from their regular supporters, no matter how fatuous (e.g. “there is no debate”). As to Way & Cowtan, it will be entertaining to watch attack dog Ray Ladbury gradually come around to supporting the paper. He and others there are in a bit of a bind, though, having finally acknowledged the pause and explained where the missing heat has been hiding. Now it turns out not to have been missing after all.

      • That sounds funny. I’ll have to check a few of those threads out.

      • One of the hallmarks of the warmist movement, and RC may still be the standard-bearer in this regard even in its reduced state, is they have not been, can not be, and will not be wrong. Thus Hansen’s predictions have proven out, the hockey stick is unbroken, Trenberth was misunderstood, etc. Data will be tortured until it gives in. Now that the new paper is out, how long will it be (I assume it’s already happened) before someone at RC or elsewhere trumpets the news that, “Aha, the warming that we told you would ‘come back with a vengeance’ has arrived! Not only did we find it hiding in the deepest levels of oceans and elsewhere during the alleged pause, we now know the pause didn’t exist to begin with. Add it all up, people, the catastrophe is here!” And there you have it, a veritable heat bonanza.

      • anonymous,
        I am so new to this stuff that I would never know the difference. I am glad you explained things to me like that point blank. If the reality is as you describe, they would be in denial, so I guess that’s why they hurled the invective first. I also understand that the media portrayal and the public perception would be more in RC’s direction. I still haven’t looked at McIntyre’s papers (I keep promising myself), but looking at the balance of all the charts I’ve seen, it looks more like a boomerang than a hockey stick. And yet here is Mann front and center trying to slay the denial dragon with truth, justice and the American way. I just now read Trenberth on wiki; I had no idea that happened. After reading that, all I can say is that it sounds Clintonesque. I have been under the impression that Hansen’s predictions were pretty good? So I take it you think the pause is having an effect on team catastrophe?

      • Uncle Ray is making progress. ‘Of two minds’ at 8:19 AM today. Prediction: will be of one warm mind by week’s end.

      • …except for Aunt Judy

      • To my prediction that someone at RC would proclaim even more heat playing hide-and-seek than we imagined, not only in the deeper ocean but in the kriged-up Arctic, a Mr. Roger Lambert is giving Stefan a nudge. Here it is, quick, before it gets bore-holed:)

        Roger Lambert says:
        16 Nov 2013 at 12:24 PM
        Stefan, could you comment on the stoichiometry of the heat? Enough ‘hidden’ heat was accounted for in deeper ocean strata to explain the “pause”. Now this study interpolates enough missing heat somewhere else. Do we have too much heat floating around or not?

      • That’s hilarious! heat popping up everywhere!!
        post was still there

      • ooh, stoichiometry; they use longer words over at RC than here.

    • It gets better and better over at RC. Last night there was another reference [Steve] to the authors’ contributions to the discussion here. This morning — boreholed. So childish, and so tone-deaf!

      • Too much emotion, not enough data.

      • Ordvic, please understand this is my venting more than anything else, based on long experience, to be sure, and I do have strong, informed opinions, but I’m no climate scientist, and you are right to determine things through your own reading.

    • “Just posted a reference at RC to Cowtan and Way’s comments here, nothing more than the web address, and it was Bore-holed.”

      They tend to clip posts that have no content from the poster, unless in direct reply to a request or somesuch. Try adding what the links lead to next time and see if they permit it.

  55. Steve Fitzpatrick

    Judith,

    Do you have a link to Ed Hawkins’ graphic shown in the post (or better, a link to the data for that graphic)?

  56. Matthew R Marler

    It looks like everything is available except the complete paper; even the actual data and computer code used are available for downloading.

    There is a circularity here. In the regions that are not missing data, if you randomly exclude some and use their method to impute the missing data, then their method does a good job of imputation. They infer from that that the method is equally valid for imputing the missing temperatures from the areas that are already undersampled. However, if there is a bias in the lack of coverage, this method will not reveal it: it only confirms that if you delete data completely at random from the sampled regions the imputation works, not that their method accurately imputes where it cannot be assumed the data are missing at random.

    The data are missing from rather large regions, and there is no reason to assume that those regions have any particular relationship to the measured regions. If the true relationship between missing and measured data is different from the relationship that is used in the imputation, then the imputed values will be wrong, and there is no way (now, at least) to disclose the errors.

    This is based on the abstract and press release.
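    The circularity point above can be illustrated with a deliberately artificial toy (this is not the paper's method; the field, the missing block, and the nearest-neighbour infill are all made up for illustration): hold out random points inside the well-sampled region and cross-validation scores well, while the error over a never-sampled block that behaves differently stays large and invisible to that test.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.arange(n, dtype=float)

# Synthetic "truth": a gentle gradient everywhere, except a block
# (indices 80-99) standing in for a never-sampled region that warms more.
truth = 0.01 * x
truth[80:] += 0.5

observed = truth.copy()
observed[80:] = np.nan                       # structural missingness
holdout = rng.choice(80, size=10, replace=False)
observed[holdout] = np.nan                   # random holdout for cross-validation

def infill_nearest_mean(y):
    """Fill NaNs with the mean of the nearest observed value on each side."""
    y = y.copy()
    obs = np.where(~np.isnan(y))[0]
    for i in np.where(np.isnan(y))[0]:
        left, right = obs[obs < i], obs[obs > i]
        vals = [y[left[-1]]] if left.size else []
        if right.size:
            vals.append(y[right[0]])
        y[i] = np.mean(vals)
    return y

filled = infill_nearest_mean(observed)
cv_err = np.abs(filled[holdout] - truth[holdout]).max()   # small: looks validated
block_err = np.abs(filled[80:] - truth[80:]).max()        # large: never tested
```

    The cross-validation error is tiny while the block error is an order of magnitude larger, and no amount of random holdout inside the sampled region would have warned us.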

    • Kind of frustrating that everything but the paper is available. They do have out-of-sample data: buoys in the Arctic, some floating and some on ice, which helps a lot.

    • 1.) Often there is, and should be, multiple rounds of sampling. The geostatistical techniques of course bring some known characteristic ‘design’ tools, e.g., error maps for the region of interest, use of virtual data for optimizing sampling, etc., to that process.

      2.) Use of co-regionalized variables brings more locations into play, though not without limitations and qualifications. But in the light of a tiered protocol, ‘more’ is better than ‘none’. Uncertainties associated with the quantitative relation between the two variables can in principle be incorporated into the data-extended (kriging) model.

      3.) These two facets of a geostatistical approach used in tandem make it attractive and potentially viable.

      0.) I too am outside the pay-wall so none of this may apply…but I hope it does :O)

    • Matthew, I was able to look at the paper itself. The model selection is not done on the basis of prediction of randomly excluded data. Rather, it is done on the basis of prediction of excluded data near the edges of the regions where data is missing. I think this is a reasonable way to address the issue you are raising as well as it can be addressed. Of course, the problem remains that patterns of spatial covariance of observations with missing observations cannot be observed; they have to be based on an informed assumption. Dr. Curry suggests that the covariance pattern is likely to be different in the arctic for various reasons. So I think that’s where we are.

        There is some information that the correlation length in the Arctic is shorter (during some seasons) than the correlation length in other parts of the globe.

        Reference

        http://iabp.apl.washington.edu/data_satemp.html

        ‘The statistics of surface air temperature observations obtained from buoys, manned drifting stations, and meteorological land stations in the Arctic during 1979–1997 are analyzed. Although the basic statistics agree with what has been published in various climatologies, the seasonal correlation length scales between the observations are shorter than the annual correlation length scales, especially during summer when the inhomogeneity between the ice-covered ocean and the land is most apparent. During autumn, winter, and spring, the monthly mean correlation length scales are approximately constant at about 1000 km; during summer, the length scales are much shorter, i.e. as low as 300 km. These revised scales are particularly important in the optimal interpolation of data on surface air temperature (SAT) and are used in the analysis of an improved SAT dataset called IABP/POLES. Compared to observations from land stations and the Russian North Pole drift stations, the IABP/POLES dataset has higher correlations and lower rms errors than previous SAT fields and provides better temperature estimates, especially during summer in the marginal ice zones. In addition, the revised correlation length scales allow data taken at interior land stations to be included in the optimal interpolation analysis without introducing land biases to grid points over the ocean. The new analysis provides 12-hour fields of air temperatures on a 100-km rectangular grid for all land and ocean areas of the Arctic region for the years 1979–1997.
        The IABP/POLES SAT data set is then used to study spatial and temporal variations in SAT. This data set shows that on average, melt begins in the marginal seas by the first week of June and advances rapidly over the Arctic Ocean, reaching the pole by 19 June, 2 weeks later. Freeze begins at the pole on 16 August, and the freeze isotherm advances more slowly than the melt isotherm. Freeze returns to the marginal seas a month later than at the pole, on 21 September. Near the North Pole, the melt season length is about 58 days, while near the margin, the melt season is about 100 days. A trend of +1°C/decade is found during winter in the eastern Arctic Ocean, but a trend of –1°C/decade is found in the western Arctic Ocean. During spring, almost the entire Arctic shows significant warming trends. In the eastern Arctic Ocean this warming is as much as 2°C/decade. The spring warming is associated with a trend toward a lengthening of the melt season in the eastern Arctic. The western Arctic, however, shows a slight shortening of the melt season. These changes in surface air temperature over the Arctic Ocean are related to the Arctic Oscillation, which accounts for more than half of the surface air temperature trends over Alaska, Eurasia, and the eastern Arctic Ocean but less than half in the western Arctic Ocean.’
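        To see why the seasonal scales quoted above matter for optimal interpolation: under a simple exponential correlation model (an illustrative assumption, not necessarily the model used in the IABP/POLES analysis), an observation 500 km away carries substantial weight at the winter-like 1000 km scale but very little at the summer-like 300 km scale.

```python
import math

def correlation(distance_km, length_km):
    # Exponential correlation model: an illustrative assumed form,
    # chosen only to show the effect of the length-scale parameter.
    return math.exp(-distance_km / length_km)

d = 500.0  # km to the nearest observation
winter_weight = correlation(d, 1000.0)  # winter-like 1000 km scale, ~0.61
summer_weight = correlation(d, 300.0)   # summer-like 300 km scale, ~0.19
```

        With the summer scale, a station 500 km away is nearly uninformative, which is why using the annual scale year-round would over-smooth summer fields.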

      • Yes, you can’t even get the annual cycle phased correctly if you are using arctic land temps to infer arctic ocean and sea ice temps.

      • Variability in correlation length in space and in time would not be a surprise given the underlying organized dynamic system. And it is not just the correlation length, either. Existence of a finite correlation length necessitates other parameters characterizing the correlation, e.g., sill, nugget, etc. Such are the cards dealt.

      • Matthew R Marler

        NW: Rather, it is done on the basis of prediction of excluded data near the edges of the regions where data is missing.

        That is a good detail.

        Your post is good.

      • Matthew R Marler

        Steven Mosher | November 14, 2013 at 12:47 pm |

        thanks for an informative post.

  57. Kenneth Fritsch

    From Judith’s excerpt from Kennedy we have:

    “Unknown unknowns will only come to light with continued, diligent and sometimes imaginative investigation of the data and metadata.”

    I suspect that many of those laypersons and even more technical persons discussing the issues of uncertainties in the observed temperature series and using those series for important science efforts have not mulled through the assumptions required to make these estimates and potential areas where a better understanding of these assumptions could potentially and significantly change the currently accepted uncertainties. Here I am referring to the known unknowns, or at least unknown or poorly understood conditions, that could affect the instrumental temperature record over the historical period in which we use it.

    We appear to have temperature series put forth with uncertainty limits that are then exceeded by a later version of the same series or by newer series. I believe this is the case more recently with GHCN versions 2 and 3, and the BEST effort versus HadCRUT4, GISS and NCDC – if you are allowed to select part of the period for comparison. Cowtan and Way, though I have not yet read the paper, would appear to be following this same line.

    Without placing any personal preference on any of these methods, I think one can see that even a measurement of utmost importance to the AGW issue, i.e. the instrumental temperature series, and more importantly the trends derived from these series, remains a work in progress.

    I have looked in some detail at the algorithms that are used in most temperature series to adjust those series using breakpoints of station difference series, and while I think that approach is the better one for its objective approach versus depending on more subjective metadata, it continues to have some potential weaknesses in areas such as finding slow non-climatic changes affecting temperature measurements. We have some confidence in validating these adjustments after having the satellite record available for comparison and after recognizing that the lower troposphere trends can differ from the surface temperature trends and have differences varying by global region. I judge that much of the uncertainty resides in the period before we had satellite data, even when conceding that satellite data have uncertainties unique to those measurements. Benchmarking these various approaches versus some simulated climate works well for me as long as we include in the benchmarking some conditions that might arise from the so-called known unknowns.
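    The breakpoint idea mentioned above can be sketched generically (a plain CUSUM change-point locator on synthetic data; the actual pairwise homogenization algorithms used for GHCN or BEST are considerably more elaborate):

```python
import numpy as np

def breakpoint_by_cusum(diff_series):
    """Locate the most likely single step change in a station difference
    series: the index maximizing the deviation of the cumulative sum of
    the mean-removed series marks where the new regime starts."""
    d = np.asarray(diff_series, dtype=float)
    d = d - d.mean()
    cusum = np.cumsum(d)
    return int(np.argmax(np.abs(cusum))) + 1

# Hypothetical difference series with a 0.3 degree step at index 50
# (e.g. one station of a pair gets a new instrument shelter):
series = np.concatenate([np.zeros(50), np.full(50, 0.3)])
bp = breakpoint_by_cusum(series)
```

    Note this toy finds an abrupt step cleanly; the weakness Fritsch points to is exactly the opposite case, a slow non-climatic drift, which produces no sharp CUSUM extremum.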

  58. Pingback: Curry on the Cowtan & Wray ‘pausebuster’: ‘Is there anything useful [in it]?” | Watts Up With That?

  59. Stitching together multiple data sets collected by different methods is fraught with potential problems, as we’ve often seen in climate science.

    To me, when the satellite data sets clearly show “the pause”, and you then turn around and say stitching the satellite data onto the surface station data makes “the pause” largely go away. . . .well, that cries out to me that you’re actually seeing an artifact of the stitching process of two data sets collected by different means rather than real data.

  60. Re: “Kriging across land/ocean/sea ice boundaries makes no physical sense.” But does infilling make any more sense? Doesn’t Cowtan & Way’s testing show kriging makes more sense than infilling?

    • Infilling by kriging within Africa makes sense. Kriging or any other kind of infilling that uses data from land to infer something about ocean temps makes no physical sense.

      • But does kriging make less sense than infilling, across physical boundaries?

      • I think there is one exception: ice-free coastal land temperatures seem to be useful as long as they are very close to sea level. At least in checking the buckets-to-intakes issue it seemed to be, but that’s a stats guy’s call.

      • Judith

        There are three choices

        1. Leave the Arctic blank. This is CRU. Understand that this approach amounts to INFERRING that trends north of 70 are less than trends at 70. That is, by leaving the Arctic missing you impute the global average trend to the area north of 70.

        2. Extrapolate from 70 to 90. GISS. This assumes that the trend is the same at 90 as it is at 70; in short, that the polar amplification magically ceases at 70 north.

        3. A method like Robert’s and Kevin’s.

        The argument that a method isn’t physical holds against ALL THREE approaches. The question is which is best.

        Since 3 has passed cross-validation and is compared to buoy data, I’m going to bet on 3.

      • There is a 4th option. Do a more comprehensive job of figuring out what the AO temperatures are, and then do a comprehensive uncertainty analysis. Otherwise, I prefer option #1.

      • So then Judith, do you agree that kriging is at least no worse than current algorithms for areas that have no coverage?

      • Steven,

        There’s a third choice: Accepting that coverage is incomplete. If the resulting time series is more reliable, and if it in addition has less “random” variability, this time series may be more useful for almost all practical purposes. It’s not truly global, but that’s fine as long as that’s recognized and known by users of the time series.

      • There is a fifth option. Admit you don’t know enough to compute a “global average temperature” with anywhere near sufficient precision to detect changes of tenths of a degree per day/year/decade.

        But then what would happen to the gravy train of other people’s money?

      • There is a fifth option. Admit you don’t know enough to compute a “global average temperature” with anywhere near sufficient precision to detect changes of tenths of a degree per day/year/decade.

        All scientific measurements have uncertainties. So do GISTEMP and HadCRUT4 — they’ve both been very open about that.

        If you disagree with their error bars, you’re welcome to engage them on their science.

      • Regarding a global average temperature, I don’t think this is prima facie important. I think the key issue is comparing model simulations with observations, implying making the comparison in regions where you have the observations. This is what Ed Hawkins did, and I think this is the most illuminating thing that can be done with the ‘global’ surface temperature data.

      • “There is a 4th option. Do a more comprehensive job of figuring out what the AO temperatures are, and then do a comprehensive uncertainty analysis. Otherwise, I prefer option #1.”

        Of course we all like option 4.

        But I’m not seeing the motivation behind option 1. That is no different than asserting that the trend above 70N is the same as the global average.

        Now, if we had no other physical argument for thinking that the trend above 70 was the same as the global average, then this might be defensible. But we do have a reason for thinking the trend above 70N will be different: polar amplification. I know of nothing that would suggest this will suddenly disappear north of 70.

        I can think of “purity” reasons why folks might want to avoid infilling

        But, if you wanted to make a bet, would you bet

        A) the trends above 70 are lower than they are at 70?
        B) higher?
        C) about the same

        which bet would you take

      • “GaryM | November 14, 2013 at 5:50 pm |
        There is a fifth option. Admit you don’t know enough to compute a “global average temperature” with anywhere near sufficient precision to detect changes of tenths of a degree per day/year/decade.”

        nobody claims that

      • Judith, can you please answer this directly: do you agree that kriging is at least no worse than current algorithms for areas that have no coverage?

      • Judith Curry wrote:
        Regarding a global average temperature, i don’t think this is prima facie important. I think the key issue is comparing model simulations with observations,

        But that would require comparing regional changes to regional predictions. Models haven’t yet reached that level — and may never reach it.

        There are many things, though, that physics can predict in toto, but not in smaller regions or domains – basically all of thermodynamics.

        “Global warming” is global. How can it be assessed with something less than global measurements of global variables?

      • @David Appell,
        But warming isn’t the same globally.

      • Dr, Curry,

        “Regarding a global average temperature, i don’t think this is prima facie important.”

        Well, if the consensus were correct that GAT is rising as fast as they claim it is, it would at least be evidence in support of their claim (not proof, evidence). And I am the wrong one to tell that it is not prima facie important. The IPCC and every other CAGW advocate has been crying GT wolf for decades.

        Steven Mosher,

        ” nobody claims that”

        Read harder.

      • Steven Mosher,

        You are right. I should apologize. You warmists are claiming to know changes in temperature in hundredths of a degree per decade.

        “The updated 100-year trend (1906–2005) of 0.74°C ± 0.18°C is larger than the 100-year warming trend at the time of the TAR (1901–2000) of 0.6°C ± 0.2°C due to additional warm years. The total temperature increase from 1850-1899 to 2001-2005 is 0.76°C ± 0.19°C. The rate of warming averaged over the last 50 years (0.13°C ± 0.03°C per decade) is nearly twice that for the last 100 years.”

        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/tssts-3-1-1.html

        (Please note the header – “TS.3.1.1 Global Average Temperatures”)

        See also:

        “the average surface temperature across the contiguous 48 states has risen at an average rate of 0.14°F per decade (1.4°F per century)”

        http://www.epa.gov/climatechange/science/indicators/weather-climate/temperature.html

        My bad. I heartily apologize for understating the absurd claims of precision by you warmists.

      • Mi Cro wrote:
        But warming isn’t the same globally.

        A global average is a global average. And climate science says that the global average should depend, in part, on the global distribution of all GHGs.

      • @David Appell
        “A global average is a global average. And climate science says that the global average should depend, in part, on the global distribution of all GHGs.”

        This isn’t climatology, it’s math.
        But yes, that is what it says, and yes, of course local temps will respond as they might respond to local well-mixed GHGs. But what if there is no trend in regional temps as GHGs went up, and it’s only once you average all of the regions that anything like a warming trend shows up (actually it’s more like a couple or three burps of warming in different regions)?

      • ‘ “Global warming” is global. How can it be assessed with something less than global measurements of global variables?’ I like the question, but your answer makes no sense. Are you claiming that we must look at the entire system that contains all the heat, or thermodynamics doesn’t work? It’s not true; in fact, we are not looking at the entire system, but a tiny fraction of it. The ocean contains almost all the heat, and climate scientists now suspect that there is significant heat transfer into (and presumably out of) the deep ocean.
        Are you seriously claiming that the climate system is chaotic, but ceases to be once you look at the global surface temperature average? I’d like some evidence for this very important claim.

      • miker wrote:
        Are you claiming that we must look at the entire system that contains all the heat, or thermodynamics doesn’t work?

        Yes. Obviously, yes.

      • Steven Mosher

        There are three choices

        1. leave arctic blank. …

        2. Extrapolate from 70 to 90. GISS. …

        3. A method like Roberts and Kevins. …

        4. A method unlike Bobby and Kev’s.

        5. Admit that you do not have sufficient data, and resist the urge to make $#!^ up and pretend that you do.

        The argument that a method isn’t physical holds against ALL THREE approaches. The question is which is best.

        No. The question is: Which, if any, is sufficient?

        Since 3 has passed cross-validation and is compared to buoy data, I’m going to bet on 3.

        &

        But, if you wanted to make a bet, would you bet

        A) the trends above 70 are lower than they are at 70?
        B) higher?
        C) about the same

        which bet would you take

        When a scientist makes methodological decisions based upon “how I would bet it is”, he is in fact deciding how it’s going to be. This, especially when combined with your two false-choice restrictions of the possible alternatives, is how science succumbs to bias.

        Pick 4, IFF the sufficiency question can be answered with a defensible positive. Else, pick 5.

      • @Appell
        ‘ “miker wrote: Are you claiming that we must look at the entire system that contains all the heat, or thermodynamics doesn’t work?” Yes. Obviously, yes.’
        ———-
        Well, then answer my question: Why should we be talking about global surface temperature when it is part of a larger system not in equilibrium, that is, one including the deep ocean? Is there any reason to believe that global surface temperature alone is not chaotic?

    • David Appell

      We are talking about warming cycles that can be measured in hundredths of a degree per year over ~30 years.

      So the “fifth option” simply recognizes that you cannot get a globally and annually averaged figure that is accurate to a hundredth of a degree (expressed in thousandths of a degree) from individual readings that are significantly less accurate UNLESS you include the individual inaccuracy in your error bars.

      Add to that the number of measuring stations that have been shut down over time, the gaps that need to be filled because there are no measurements, the human errors involved (particularly in the past record), local distortions to the land record from urbanization, the known inaccuracies of the SST record resulting from changing measurement methods, etc., and you have a can of worms.

      Another unexplained fact is that the surface record shows more rapid warming than the satellite record, despite the fact that GH warming should occur more rapidly in the troposphere than at the surface.

      But the record is the best we have – so we have to live with it for what it’s worth, until something better comes along.

      Max
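Max’s point about error bars can be split into two parts, and the split matters. A minimal sketch (the error magnitudes and temperature below are invented for illustration): independent random reading errors shrink roughly as 1/sqrt(N) under averaging, which is why a mean can be more precise than any single reading, while a shared systematic error does not average away at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n_readings = 10_000
true_temp = 15.0  # hypothetical "true" value, degrees C

# Independent random reading errors of 0.5 C: the mean beats any one reading.
random_errs = rng.normal(0.0, 0.5, n_readings)
mean_random_only = (true_temp + random_errs).mean()

# A shared 0.2 C systematic bias (e.g. a common instrument offset):
# averaging more readings does nothing to remove it.
mean_with_bias = (true_temp + random_errs + 0.2).mean()

print(abs(mean_random_only - true_temp))  # tiny: random error averaged down
print(abs(mean_with_bias - true_temp))    # stuck near the 0.2 C bias
```

With 10,000 readings the random component of the mean’s error is about 0.5/100 = 0.005 C, so the residual error is dominated by the unremoved bias, which is the part that belongs in the error bars.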

  61. Judith: Do we really care precisely how much warming has been occurring in obscure corners of the planet? Has anyone ever constructed a temperature or climate index weighted towards where most people live and grow/harvest products?

    One could even go further and recognize that extreme weather causes the most problems. We really want to know how fast warm extremes are increasing and cold extremes are decreasing – and this approach would remind everyone that there are costs and BENEFITS to climate change.

    Polar warming is important to sea level rise, but temperature alone does not tell us the significance of warming. Warmer, moister air brings more precipitation. We need to know about the balance between accumulation and melting. Greenland (and the polar bears) survived several millennia of the Holocene Climate Optimum.

    • We do care about the global average temperature, because it is determined by the greenhouse gases, which are well-mixed in the atmosphere.

      • It’s good to have a representative time series, and it’s better to have many time series as the rate of warming varies. A time series built from 84% coverage serves nearly as well as one with 100% coverage.

      • David Appell

        We do care about the global average temperature, because it is determined by the greenhouse gases, which are well-mixed in the atmosphere.

        I’d consider that a statement of faith, rather than an observation backed by empirical scientific evidence, David.

        The globally and annually averaged land and sea surface (or tropospheric) temperature may be partially determined by GH gases but (despite IPCC’s statement of 95% confidence), there is still way too much uncertainty to state that “it is determined by GH gases”, as I’m sure you will agree.

        Max

      • It is a statement of physics, not of faith.

        Here are just some of the measurements that have detected an enhanced greenhouse effect:

        “Increases in greenhouse forcing inferred from the outgoing longwave radiation spectra of the Earth in 1970 and 1997,” J.E. Harries et al, Nature 410, 355-357 (15 March 2001).
        http://www.nature.com/nature/journal/v410/n6826/abs/410355a0.html

        “Comparison of spectrally resolved outgoing longwave data between 1970 and present,” J.A. Griggs et al, Proc SPIE 164, 5543 (2004). http://spiedigitallibrary.org/proceedings/resource/2/psisdg/5543/1/164_1

        “Spectral signatures of climate change in the Earth’s infrared spectrum between 1970 and 2006,” Chen et al, (2007) http://www.eumetsat.int/Home/Main/Publications/Conference_and_Workshop_Proceedings/groups/cps/documents/document/pdf_conf_p50_s9_01_harries_v.pdf

        “Radiative forcing – measured at Earth’s surface – corroborate the increasing greenhouse effect,” R. Phillipona et al, Geo Res Letters, v31 L03202 (2004)
        http://onlinelibrary.wiley.com/doi/10.1029/2003GL018765/abstract

        “Measurements of the Radiative Surface Forcing of Climate,” W.F.J. Evans, Jan 2006
        https://ams.confex.com/ams/Annual2006/techprogram/paper_100737.htm

      • I have links to all these papers, but for some dumb reason this site won’t allow more than a couple of URLs in a post.

      • David Appell,

        Serious question.

        When the Earth’s global average surface temperature was 300 K, (and it must have passed through this as it cooled from the molten state), what concentration of greenhouse gases was necessary to cause this temperature?

        Why has the Earth’s surface cooled from 300 K to whatever it is at present?

        I think you are reading from the Book of Warm, but I am always open to new facts. I await your answer.

        Live well and prosper,

        Mike Flynn.

      • David Appell

        This ended up in the wrong place so am re-posting:

        “Physics” (as you put it) tells us that CO2 (among other GH gases, principally H2O) absorbs and re-radiates LW radiation. This has been corroborated by empirical evidence based on physical observations.

        “Physics” does NOT tell us that this represents the principal determining factor of global average temperature, as you stated. This has NOT been corroborated by empirical evidence based on physical observations or reproducible experimentation.

        Max.

      • Mike Flynn wrote:
        When the Earth’s global average surface temperature was 300 K, (and it must have passed through this as it cooled from the molten state), what concentration of greenhouse gases was necessary to cause this temperature?

        I don’t know, without doing some research.

        Why does it matter? There are many factors that influence climate. Some predominate over others, depending. Right now anthropogenic GHGs seem to predominate — is there some other apparent cause for modern warming? If so, what is the evidence?

      • David Appell,

        OK, you should at least know this.

        When the surface was 0.001 C warmer than it is now, why did it cool?

        Surely there was more CO2 in the atmosphere to make it warmer then?

        Live well and prosper,

        Mike Flynn.

      • “Physics” does NOT tell us that this represents the principal determining factor of global average temperature, as you stated.

        Of course it does — climate change is the sum of all forcings. Q.E.D.

      • Mike Flynn wrote:
        When the surface was 0.001 C warmer than it is now, why did it cool?

        Where did you ever learn that CO2 was the only climate forcing?

      • “Of course it does — climate change is the sum of all forcings. Q.E.D.”

        That’s how linear systems work. This isn’t linear.

      • Harold wrote:
        That’s how linear systems work. This isn’t linear.

        What is your evidence that recent warming hasn’t been a linear function of climate forcings?

      • David Appell,

        Good Warmist response, but no longer sufficient.

        Let me rephrase. Why did the Earth’s average surface temperature fall from 0.001 C warmer than it is now, to its present temperature?

        You refer to, in usual Warmist fashion, “climate” and “forcing”.

        Now, climate is the average of weather – no more no less. “Forcing” in this context, is a meaningless Warmist concept. Stick to physics, and you might convince someone. Warmist wafflespeak is only effective with Warmists.

        Live well and prosper,

        Mike Flynn.

      • Mike Flynn: Unless you stop the name calling, you won’t get any further replies.

        Got it?

      • David Appell,

        You haven’t provided any answers so far, so you are obviously confusing me with someone who cares.

        Live well and prosper,

        Mike Flynn.

      • Don’t you just love the logic? “When did you stop beating your wife?”

      • Don’t you just love the logic? “When did you stop beating your wife?”

        Do you think CO2 doesn’t absorb infrared radiation, or do you think the Earth doesn’t emit it?

      • Mike Flynn, the last time the temperature was 300 K would have been over 40 million years ago in the Eocene, when GHG concentrations were at least 1000 ppm. So you ask why did it cool. It was because over long geologic periods without much volcanic activity such as since the Eocene peak, CO2 tends to be sequestered in the soil and rocks. Less CO2 equals cooling.

      • Jim D,

        It’s a start. Now consider when the Earth was only 0.001 C warmer. More CO2? Less CO2?

        How did it continue to cool to the present temperature?

        Live well and prosper,

        Mike Flynn.

      • Mike Flynn, unforced decadal noise is about 0.1 C, so nothing less than that over any period is attributable to climate factors and would not be called climate change. The pause people are confounded by this, because they keep thinking the pause anomaly which is 0.1 C is something to do with climate. Hope that helps.

      • Jim D,

        You have obviously been absorbing the lessons from the Book of Warm. Well done! So far, you have managed the following : –

        1. Pretend to misunderstand the question.

        2. Assume an air of superior knowledge.

        3. Use Warmist terms that can have different meanings ascribed to them if anybody is silly enough to quote them back at you.

        4. At all costs, avoid giving a direct answer. You may paint yourself into a corner.

        Unfortunately, Jim D, people have realised that facts trump fantasy. As you can’t provide any cogent facts, I will leave you to enjoy your fantasy.

        Live well and prosper,

        Mike Flynn.

      • Mike Flynn, I gave you the reason your question made no sense in a climate context. Maybe, if you read the answer again, you will see how your 0.001 C can be viewed in the big picture.

    • I don’t care about how much warming there is in the corners,
      but that’s not the question.

      The question is:

      Given all the data you have, construct the very best estimate: the estimate that minimizes error.

      This is not policy changing science. That is what makes it so cool.

      • key issue is to accurately characterize the error, to decide whether the ‘very best estimate’ is useful for anything

      • Steven Mosher,

        “This is not policy changing science.”

        No, but that was clearly their intent, which is why the fact that they are Skeptical Science drones is so instructive.

        The “pause” in reported “global” temps is the bête noire of the CAGW cognoscenti right now. It not only undermines the claims of imminent thermageddon, it also makes the GCMs look as useless for setting global energy policy as they are.

        The “pause” undermines two of the legs of the CAGW stool. Observations and models. And worst of all, the “pause” comes from the work of some of their own prelates, NASA and UEA.

        I wonder if you would find this so much “fun” if their conclusion was that the guessed, sorry, estimated, temps of the Antarctic had been overstated.

      • GaryM wrote:
        The “pause” in reported “global” temps is the bête noire of the CAGW cognoscenti right now.

        You want to pretend that the data is sacrosanct and cannot be questioned, but this is false.

        The data itself depends on models. All climate data does:
        http://davidappell.blogspot.com/2013/11/without-models-there-are-no-data.html

        There is no perfect data — it all depends on models. The C&W paper is an excellent lesson in that rule.

      • David Appell,

        Of course. Observations, GCMs and paleo all depend on models. It’s unfortunate the public is not informed of this when y’all get your scare headlines in the media.

        I am not the one who pretends that (artificial) data is sacrosanct. That would be you and your comrades in arms. Your inflated claims of certainty rise even as your models prove increasingly unreliable.

        But keep preaching brother. Your congregation is with you.

      • GaryM: Shove it. I am not responsible for how the media reports on global warming, only what I myself write. And I think that informs the public very well.

        Of course. Observations, GCMs and paleo all depend on models. It’s unfortunate the public is not informed of this when y’all get your scare headlines in the media.

        You wrote as if you don’t understand that the data, too, depends on models.

        Nor have you offered any reason to doubt the data uncertainties that climate scientists assign to their data. It’s clear you don’t have any.

      • “You wrote as if you don’t understand that the data, too, depends on models.”

        Nope. You just assumed I didn’t. Which is how, after all, climate “science” is done.

        “Nor have you offered any reason to doubt the data uncertainties that climate scientists assign to their data.”

        If you need someone to offer you evidence that increasing failure of models to match observations is inconsistent with an increase in the certainty of what those models predict, you are beyond hope.

      • D Appell: “Shove it.”

        Oh yeah? Well up your hole with a Mello Roll.

        GaryM: “they are Skeptical Science drones.”

        When I go there, the only post I can find by Cowtan and Way is a crowd-funding request for open access fees for their new paper. If that’s all there is, I think this is unfair. Is there something you found on SkS, written by them, that is clear BS?

      • NW,

        I posted elsewhere in this thread (lord knows where) their comment on Skeptical Science requesting funding from SS denizens to make their paper and data available free of charge. And Robert Way is listed as a member of the SS “Team”.

        http://www.skepticalscience.com/team.php

      • k scott denison

        curryja | November 14, 2013 at 6:35 pm |
        key issue is to accurately characterize the error, to decide whether the ‘very best estimate’ is useful for anything
        ======
        +1000

        Combine the error with the meaningless time frame over which we have data, and it’s not hard to see why many, including myself, aren’t ready to buy AGW.

    • David Appell

      “Physics” (as you put it) tells us that CO2 (among other GH gases, principally H2O) absorbs and re-radiates LW radiation. This has been corroborated by empirical evidence based on physical observations.

      “Physics” does NOT tell us that this represents the principal forcing of our climate, as you stated. This has NOT been corroborated by empirical evidence based on physical observations or reproducible experimentation.

      Max.

      • No one is saying CO2 is the “principal forcing of our climate.”

        They are saying it is a major forcing for the *perturbation* of our climate. And, yes, this has been corroborated by both empirical evidence and by climate models.

      • David Appell

        You wrote that greenhouse gases determine global average temperature.
        Let me quote it for you:

        We do care about the global average temperature, because it is determined by the greenhouse gases, which are well-mixed in the atmosphere

        I simply pointed out to you that this statement was a statement of faith, as GH gases are not necessarily the principal determining factor for global temperature.

        So stop waffling and simply concede that you should have stated:

        We do care about the global average temperature, because it is very likely in part determined by the greenhouse gases, which are well-mixed in the atmosphere

        That would have been a “physics-based” statement, rather than one based on “faith”.

        Max

      • “No one is saying CO2 is the ‘principal forcing of our climate.’”

        Uh oh, so CO2 is NOT the control knob for the heat content of the climate? You better not tell the IPCC that.

      • David Appell

        You claim:

        [CO2] is a major forcing for the *perturbation* of our climate. And, yes, this has been corroborated by both empirical evidence and by climate models.

        Please show empirical evidence corroborating the premise that CO2 is a major forcing for the *perturbation* of our climate.

        Forget the climate models, David. They are not empirical evidence.

        And forget subjective interpretations of dicey paleo-climate proxy data of carefully selected periods of our geological past, using the argument from ignorance (“we can only explain this if we assume…”).

        Show me empirical evidence based on real-time physical observations or reproducible experimentation, which corroborates your claim.

        Max

      • I simply pointed out to you that this statement was a statement of faith, as GH gases are not necessarily the principal determining factor for global temperature

        False. False, false, and false.

        I have already given you the many studies that show a change in Earth’s outgoing longwave radiation, due to man’s greenhouse gas emissions.

      • Uh oh, so CO2 is NOT the control knob for the heat content of the climate? You better not tell the IPCC that.

        No, CO2 isn’t the largest influence on our climate.

        But its perturbations ARE currently the biggest factor on the PERTURBATION of our climate.

        Learn the difference — it’s getting to be tiring to explain this time and time again.

      • Please show empirical evidence corroborating the premise that CO2 is a major forcing for the *perturbation* of our climate.

        “Increases in greenhouse forcing inferred from the outgoing longwave radiation spectra of the Earth in 1970 and 1997,” J.E. Harries et al, Nature 410, 355-357 (15 March 2001).

        http://www.nature.com/nature/journal/v410/n6826/abs/410355a0.html
        “Comparison of spectrally resolved outgoing longwave data between 1970 and present,” J.A. Griggs et al, Proc SPIE 164, 5543 (2004). http://spiedigitallibrary.org/proceedings/resource/2/psisdg/5543/1/164_1

        “Spectral signatures of climate change in the Earth’s infrared spectrum between 1970 and 2006,” Chen et al, (2007) http://www.eumetsat.int/Home/Main/Publications/Conference_and_Workshop_Proceedings/groups/cps/documents/document/pdf_conf_p50_s9_01_harries_v.pdf

        “Radiative forcing – measured at Earth’s surface – corroborate the increasing greenhouse effect,” R. Phillipona et al, Geo Res Letters, v31 L03202 (2004)

        “Measurements of the Radiative Surface Forcing of Climate,” W.F.J. Evans, Jan 2006

      • Also, be sure to read

        Lacis, A.A, G.A. Schmidt, D. Rind, and R.A. Ruedy, 2010: Atmospheric CO2: Principal control knob governing Earth’s temperature. Science, 330, 356-359, doi:10.1126/science.1190653.
        http://pubs.giss.nasa.gov/abs/la09300d.html

      • “The ratio of the total climate response to the no-feedback response is commonly known as the feedback factor, which incorporates all the complexities of the climate system feedback interactions. For the doubled CO2 (a) and the 2% solar irradiance forcings, for which the direct no-feedback responses of the global surface temperature are 1.2° and 1.3°C, respectively, the ~4°C surface warming (b) implies respective feedback factors of 3.3 and 3.0 (5).” – Andrew A. Lacis, Gavin A. Schmidt, David Rind, Reto A. Ruedy, – 2010

        (5) J. Hansen et al., AGU Geophys. Monogr. 29, 130 (1984).

        So we reach back to 1984, when one of the most important steps in climate science was taken. It’s worth looking at.

      • Andy Lacis and NASA:

        “Within only the past century, the CO2 control knob has been turned sharply upward toward a much hotter global climate.”

        “Atmospheric carbon dioxide performs a role similar to that of the house thermostat in setting the equilibrium temperature of the Earth.”

        http://www.giss.nasa.gov/research/briefs/lacis_01/

        Think Progress (fka Climate Progress):

        “In must-see AGU video, Richard Alley explains ‘The Biggest Control Knob: Carbon Dioxide in Earth’s Climate History'”

        “So, CO2 explains almost all the historical temperature change.”

        http://thinkprogress.org/climate/2009/12/21/205242/agu-richard-alley-explains-biggest-control-knob-carbon-dioxide-in-earths-climate-history/

        Skeptical Science:

        “Carbon Dioxide the Dominant Control on Global Temperature and Sea Level Over the Last 40 Million Years ”

        David Appell:

        “No, CO2 isn’t the largest influence on our climate.

        Learn the difference — it’s getting to be tiring to explain this time and time again.”

        It’s getting tiring reminding these rabid believers of what their own party line has been for years.

        Someone is taking self-contradicting obscurantist lessons from Mosher.

      • GaryM,

        The Team that wrote about CO2, knobs, and climate, may have been spending too much time playing with, or at least examining, both their own knobs, and maybe those of others.

        Pathetic, I know. Both my attempt at humour, and their theories.

        Live well and prosper,

        Mike Flynn.

      • Mike Flynn wrote:
        The Team that wrote about CO2, knobs, and climate, may have been spending too much time playing with, or at least examining, both their own knobs, and maybe those of others.

        Juvenile. Pathetic. Stupid. Dumb. Clownish.

        Congratulations — you’ve earned a filter.

      • David Appell,

        I thought you said you weren’t going to talk to me again. Be consistent, at least!

        You wrote : -“Juvenile. Pathetic. Stupid. Dumb. Clownish.”

        Thank you for your unsolicited compliment.

        I’m not sure about your last comment in relation to what I have earned. Are you intending to donate something to me? Please provide details, as I can’t think of anything I want or need right now.

        Live well and prosper,

        Mike Flynn.

      • David Appell

        No, CO2 isn’t the largest influence on our climate.

        But its perturbations ARE currently the biggest factor on the PERTURBATION of our climate.

        Learn the difference — it’s getting to be tiring to explain this time and time again.

        “PETURBATION”?

        Ouch!

        David, if you seriously think that human CO2 is “currently the biggest factor on the PERTURBATION of our climate”, you are acting on “faith”, as I wrote above.

        Our climate is changing – it always has. But there is no empirical evidence that our climate is being PETURBED by anything, let alone human CO2 emissions.

        The multi-decadal cycles of warming and slight cooling of ~30 years duration, which we have observed since the record started, show no statistical correlation with atmospheric CO2 concentrations. And, where there is no robust correlation, the case for causation is weak.

        The underlying warming trend of around 0.6C per century started long before there were any significant human CO2 emissions.

        Max

  62. Here is the legend and content of Table III in Cowtan and Way:

    “Table III. Temperature trend in °C/decade on the 16 year period 1997/1 to 2012/12 for the GISTEMP, NOAA and HadCRUT4 temperature series and in the null, kriging and hybrid reconstructions. The standard error in the trend is calculated assuming an ARMA(1,1) error model using the method in the appendix of Foster and Rahmstorf (2011) (however in contrast to that work the temperature series have not been adjusted for exogenous influences).”

    Dataset trend ± stderr(trend)
    NCEP/NCAR 0.178 ± 0.107
    GISTEMP 0.080 ± 0.067
    NOAA 0.043 ± 0.062
    HadCRUT4 0.046 ± 0.063
    Null 0.064 ± 0.078
    Kriging 0.108 ± 0.073
    Hybrid s=1 0.119 ± 0.076

    The critical value of a one-tailed test at 5%, against the null that the trend is nonpositive (i.e. that there is a pause), is 1.64. The NCEP/NCAR z = 0.178/0.107 ≈ 1.66, so that rejects a “pause.” That is the only one of these seven mean/stderr pairs that does so. In particular, Cowtan and Way’s null, kriging and hybrid s=1 results do not reject the null of a pause.

    So let’s keep that straight. You can believe everything that Cowtan and Way have done is reasonable, and simultaneously believe that there is not significant evidence against a pause over the 16 years indicated above.
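NW’s arithmetic can be checked directly from the figures in Table III. A minimal sketch (illustrative, not code from Cowtan and Way): compute z = trend/stderr for each row and compare it to the one-tailed 5% standard normal critical value of about 1.645.

```python
# Trend and standard-error pairs copied from Cowtan and Way's Table III
# as quoted above (units: degrees C per decade).
trends = {
    "NCEP/NCAR":  (0.178, 0.107),
    "GISTEMP":    (0.080, 0.067),
    "NOAA":       (0.043, 0.062),
    "HadCRUT4":   (0.046, 0.063),
    "Null":       (0.064, 0.078),
    "Kriging":    (0.108, 0.073),
    "Hybrid s=1": (0.119, 0.076),
}

CRIT = 1.645  # one-tailed 5% critical value of the standard normal

for name, (trend, stderr) in trends.items():
    z = trend / stderr
    verdict = "rejects a pause" if z > CRIT else "does not reject a pause"
    print(f"{name:11s} z = {z:4.2f}  {verdict}")
```

Only NCEP/NCAR clears the critical value (z ≈ 1.66); the other six pairs, including the null, kriging and hybrid reconstructions, do not reject the null of a pause, which is exactly NW’s point.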

    • thank you

      • Howard,

        I agree!! and want to thank him again for putting that neo-classical econ pdf up the other day

    • Of course we have to remember that NW is an economist and cannot add any physics-based insight to what is happening.

      The reality is that a levelling off did occur in the global temperature rise, which can be explained by the temporary downward trend in the SOI over the last ten-plus years. No need to invoke any statistical tests to demonstrate the impact of natural fluctuations … hint, hint: stadium waves.

      • “No need to invoke any statistical tests to demonstrate the impact of natural fluctuations.” Fine Web, keep counting white swans.

      • Yes, no need to invoke statistics. Cowtan and Way report standard errors because everyone does it, right Web?

      • “NW is an economist and can not add any physics-based insight.”

        Yes. But at least I’ll never make the mistake of putting endogenous variables on both sides of a regression model.

      • Whut: Thanks for prompting me to review NW’s credentials, which are outstanding. We are quite lucky to have someone of his caliber participating on this blog.

      • NW “Yes. But at least I’ll never make the mistake of putting endogenous variables on both sides of a regression model.”

        :)

      • Web, do you know that economists perform more steady-state modeling than climate scientists do? Do you know that they were looking at the elasticity of complex systems in the ’30s, some 40 years before the biochemists did? You know that much of the work done by economists is essentially classical control theory? You know they actually know the difference between negative and positive feedbacks? You know they wouldn’t say things like ‘inflation causes an increase in the money supply’, as they know the difference between a cart and a horse.


      • NW | November 14, 2013 at 4:41 pm |

        Yes. But at least I’ll never make the mistake of putting endogenous variables on both sides of a regression model.

        Like this?
        dE = -pdV + TdS

        NW doesn’t understand physics, so he can’t comprehend that constraint approaches such as the principle of least effort or maximum entropy do actually allow us to solve kinetic and thermodynamic problems similar to the one I set up as a Gibbs free energy minimization problem.

        Looky here — an example in biophysics. Just for DocMartyn:
        http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1003098

      • Webster, what NW is referring to is the SOI is related to solar and volcanic forcings and dnLOD is related to the response of those forcings. Your model is inbred.

      • > physics-based insight

        Such as?

      • Web,

        You simply assert that various measurable things are stand-ins for theoretical differentials of an identity. So what? Your assertion about what measures what is just assertion: It is not physical theory. The differentiation of a constraint is physical theory, but the idea that the terms in that differential correspond to various measured quantities is not.

        I think you believe that dCO2 causes dT, as I do, but I also think you believe that dT causes dCO2 via outgassing, which I learned about from reading your posts of a couple of years ago. So these two things are mutually causing one another… the causality arrows point both ways.

        When you put T(t) on the left-hand-side, I understand that to be dT(t) + dT(t-1) + dT(t-2) +… So in that left-hand-side variable, we eventually reach a dT(t-k) that causes dCO2(t-k+j), k,j > 0. Causality runs both ways.

        When you use linear regression to model Y as a function of a vector X, you are making a strong assumption about the theoretical error terms e and the right-hand-side regressor vector X, namely, that E(e|X) = 0. This assumption is violated when causality runs both from Y to X and from X to Y.

        Get it?
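The failure of E(e|X) = 0 under two-way causality can be made concrete with a small simulation. This is a hedged illustration only (the coefficients a and b are invented, not estimates of anything climatic): when X feeds back on Y, a naive OLS regression of Y on X overstates the true causal coefficient, which is the positive bias NW describes for a stable system.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# A stable two-equation system with mutual causation (a*b < 1):
#   Y = b*X + e   and   X = a*Y + u
b_true, a = 0.5, 0.4
u = rng.normal(size=n)
e = rng.normal(size=n)

# Reduced form: solve the system so each variable depends only on the shocks.
denom = 1.0 - a * b_true
Y = (b_true * u + e) / denom
X = (u + a * e) / denom

# Naive OLS of Y on X is inconsistent because X is correlated with e.
b_ols = np.polyfit(X, Y, 1)[0]
print(b_true, b_ols)  # the OLS slope lands well above the true 0.5
```

With these numbers the OLS slope converges to (a + b)/(1 + a²) ≈ 0.78 rather than the true 0.5: the regression attributes part of the feedback from Y to X back to X itself.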

      • NW, it is asymmetric. For example a 100 ppm increase in CO2 might cause 1 C warming, but a 8 C rise in temperature might cause 100 ppm rise in CO2. This means that the outgassing is a small positive feedback to the CO2 rise. On the other hand, the 6-10 C temperature rise out of the last Ice Age got a significant positive feedback from CO2 outgassing.

      • Jim D, so I gather–though I sense (from my quiet reading of what people say on this blog) that there is some degree of controversy here because of the various sinks that may (or may not exist).

        But when Web estimates that linear regression, the coefficient he gets on lnCO2 will not be the causal impact of lnCO2 on T. It will be a biased estimate of the causal impact, because the causality runs both ways. Simply asserting that it is small and ignoring the simultaneous equations bias just doesn’t cut it.

        Actually I have been wanting to find the time to post these issues to Web’s blog quietly, but push has come to shove. The estimation Web is doing really requires a multi-equation treatment, and thought about which variables are truly exogenous and which are not.

      • NW …

        This assumption is violated when causality runs both from Y to X and from X to Y.

        I start to chafe a little when seeing ‘causality’ in any regression context–conditioned reflex. Here, as a practical matter both T and CO2 are measured quantities and have associated measurement error. Doesn’t that get you to the same place here–namely that the assumptions of SLR do not hold? Just curious.

      • mwgrant: “I start to chafe a little when seeing ‘causality’ in any regression context…” As every sane person should!

        Here’s an answer to your question… It is just an illustration because general multivariate, multi-equation situations defy a general answer.

        Measurement error in the left-hand-side variable Y inflates standard errors (obvious) but it doesn’t bias estimates of the coefficients on right-hand-side variables X. In a univariate regression of Y on scalar X, measurement error in X biases the OLS estimate of the coefficient on X toward zero. (It is interesting and counterintuitive that this is not necessarily the case when X is a vector with several elements that are measured with error.)

        Simultaneous equations bias is different. It can bias coefficients the opposite way, and in the simple case where one of the two equations of the system is a bivariate relationship (Y on scalar X), stability of the system (non-explosive behavior) will imply that the estimated coefficient on X is inconsistent and asymptotically biased upward. In other words, in that case you will overestimate the causal effect of X on Y if the 2-equation system has a stable solution. (And this is the opposite of the bias with measurement error in X.)

        You can download a very nice, step-by-step powerpoint on simultaneous equations bias at this page.

        http://learningresources.lse.ac.uk/135/
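
        The attenuation result mentioned above (classical measurement error in a scalar X biases the OLS slope toward zero) is easy to verify. This sketch uses arbitrary numbers: a true slope of 2 and equal signal and error variances, so the probability limit is exactly half the true slope:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
beta = 2.0
x_true = rng.normal(size=n)
y = beta * x_true + rng.normal(size=n)

# Observe x with classical measurement error of variance 1
x_obs = x_true + rng.normal(size=n)

slope = np.cov(x_obs, y)[0, 1] / np.var(x_obs)
# attenuation factor var(x)/(var(x)+var(err)) = 0.5, so plim ≈ 1.0
print(slope)  # ≈ 1.0, half the true slope of 2
```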

      • NW

        Thank you for the response and link–I think ;OP. … the dismal science indeed! I wonder if your response may actually have a clue on the topic I was working up to. Here goes… if you humor me, you may want to start a new higher-level comment. Then again maybe it is appropriate in the ‘physical insights’ discussion. In a nutshell:

        A researcher has a sample of paired experimental observations–both measured quantities, with x[i] in X and y[i] in Y. Let’s say conventional wisdom/practice attributes Y as caused by X. The researcher then performs SLR, Y = mX + b, and subsequently calculates the confidence band and prediction band per standard textbook formulas. Inferences are made–end of story. One sees this ritual SLR done over and over by physical scientists and engineers. Climate science seems to be no exception.

        Guidance that I have followed (largely from the USGS) points to the geometric mean regression (reduced major axis regression) line as more appropriate (my words). This seemed to be reasonable and the source is highly respected in environmental statistics. Unfortunately, there was no guidance on how to get at the confidence and prediction bands–and I have never found any. Maybe that fits in with your ‘no general answer’ remark. [My ‘practical(?)’ approach was to calculate the residuals orthogonal to the GMR line, plug those into the SLR band expressions, and be very explicit in stating what I had done. Glazed-over eyes do not see these sorts of things.]

        I would be interested in your and others’ observations and suggestions, even at the risk of public blog-flogging. [Any other forum works, but we do see plenty of SLR’s here.]
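
        For readers unfamiliar with geometric mean regression, the fit itself is a one-liner. The USGS guidance mwgrant mentions is not reproduced here, and the data below are made up purely to show the mechanics:

```python
import numpy as np

def rma_fit(x, y):
    """Geometric mean (reduced major axis) regression: the slope is the
    geometric mean of the y-on-x and inverted x-on-y OLS slopes,
    i.e. sign(r) * sd(y) / sd(x); the line passes through the means."""
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y) / np.std(x)
    intercept = np.mean(y) - slope * np.mean(x)
    return slope, intercept

# toy data: both variables measured with error
rng = np.random.default_rng(2)
t = rng.normal(size=1000)
x = t + 0.5 * rng.normal(size=1000)
y = 2 * t + 0.5 * rng.normal(size=1000)

slope, intercept = rma_fit(x, y)
print(slope, intercept)
```

        Unlike OLS, the RMA slope treats x and y symmetrically (inverting the fit gives the reciprocal slope), which is one reason it is preferred when both variables carry measurement error.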

      • NW,
        (1) You are completely lost as to what is a first-order and what is a second-order factor. Reading what JimD added, he is spot on that the second-order outgassing term is inconsequential. That is physics and it constitutes knowledge that is more important than whatever “purity” you have in mind.

        (2) This isn’t economics or game theory where one doesn’t know or can’t assume the causality direction. We know the linkages and I won’t get bogged in debating causality. Again, physics is what rules.

        Right now I am seeing that the vast majority of the residual error is in the Arctic amplification factor which is essentially amplifying the left-over noise, which I am following through elsewhere in this thread.

      • Web,

        The simultaneity bias in OLS regression depends as much on variances and covariances of stochastic factors and endogenous variables as it does on the relative size of the effects. You might want to check out the link I posted for mwgrant. On page 35 you will find the formula for the asymptotic bias in a (very) simple 2-equation system, and you will see why I say this. I think I understood what Jim was saying, as well as your assertion that one thing is first-order and another is second-order, but I think you may not understand the nature and determinants of simultaneous equations bias.

        You don’t seem to be reading what I write with care. If you had done so, it would be plain to you that I am not talking about situations where we do not know the direction of causality. (In this regard, your assertions about what is known or unknown in economics or game theory about directions of causality are off the mark.) Rather, I am talking about situations where we are quite certain that causality runs both ways. That applies with force in your instance.

        If you insist that causality is irrelevant in your application, I would like to know why you lag your right-hand-side variables. It has been a few weeks since I read your description of this, but I think I recall you saying something about it allowing time for the effects of the right-hand-side variables to have their impact on the left-hand-side variable. Is my memory of your discussion incorrect?

      • “If you insist that causality is irrelevant in your application, I would like to know why you lag your right-hand-side variables. “

        Thermal response lags the forcing stimulus.

      • I see. Response comes after stimulus. Sounds like causal language to me. Is that wrong?

        Could we also regard an increase in temperature as a stimulus which leads to an outgassing response? With a lag, of course. And, would it be correct to call that a causal relationship too?

      • NW,

        WHT knows all. First determine your outcomes, then tailor your model. When the model doesn’t even reflect the past, “adjust” to suit.

        Model, schmodel!

        Look up “predetermined” in a dictionary. Don’t be surprised if you see a picture of you-know-who.

        Live well and prosper,

        Mike Flynn.

      • NW

        For now belay that question on GMR or RMA. It’s been a few years, and a search this morning has proven more fruitful regarding the intervals/bands. I would be interested in your thoughts, based on observation, on any prevalence/impact of using SLR (OLS) versus RMA. (Biology, e.g., allometry, seems to have faced that much better than the physical sciences–a highly subjective observation on my part.) In any case, thanks again for your earlier remarks–I have found some slightly different search phrases that have taken me in a more productive direction.

        mwgrant

      • NW,
        Outgassing is second order to anthropogenic emissions. I take it you have never worked on vapor phase epitaxy in a semiconductor research lab? The combination of a physics education and hands-on lab work can’t be beat.

      • Web,

        I frequently work in a lab too (being an experimental economist). I really like the fact that a lab allows me to assign treatments randomly to units of observation, thus ensuring that the error term e is independent of the treatment indicators contained in X. This makes inferential life very easy: You can legitimately model the relationship by a linear model of the form Y = b’X + e *because* E(e|X) = 0 is ensured by the method of random assignment of treatments to units of observation.

        When the data vector (Y X) is naturally occurring, and there is good reason to regard not only Y but one or more elements of X as joint outcomes of a large process, X has not been randomly assigned. When that’s true, one has no good, a priori reason to assume that E(e|X) = 0.

        In this respect, working in the field does beat working in the lab: it teaches you that the legitimate assumptions one can easily invoke with lab data (because of random assignment) do not usually apply to observed data from an ongoing, naturally occurring process… like a world economy, or a world climate system. Capisce?
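
        NW’s contrast between random assignment and naturally occurring regressors can be illustrated directly. In this sketch (illustrative numbers, with an unobserved factor standing in for everything that jointly drives X and Y), the observational slope is biased while the randomized one is not:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000
ability = rng.normal(size=n)            # unobserved factor hiding in the error term

# Observational X is correlated with the error term -> biased OLS
x_obs = ability + rng.normal(size=n)
y_obs = 1.0 * x_obs + ability + rng.normal(size=n)
slope_obs = np.cov(x_obs, y_obs)[0, 1] / np.var(x_obs)

# Randomly assigned treatment is independent of the error -> unbiased OLS
x_rnd = rng.normal(size=n)
y_rnd = 1.0 * x_rnd + ability + rng.normal(size=n)
slope_rnd = np.cov(x_rnd, y_rnd)[0, 1] / np.var(x_rnd)

print(slope_obs, slope_rnd)  # ≈ 1.5 (biased) vs ≈ 1.0 (unbiased); true slope is 1.0
```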

      • Climate has to obey the laws of physics and not some game theory that you and your economist buddies are trying to dream up.

        The CSALT model is getting hard to distinguish from the actual data:
        http://img811.imageshack.us/img811/1410/dtb.gif

        This works in the short-term correction during the WWII years, where the SST values were reading high, as described by Kevin Cowtan here:
        http://skepticalscience.com/hadsst3_a_detailed_look.html

  63. This new study merely shows GIGO?
    The algorithms for the unknown areas may be set to correct (over-correct) a global cooling bias that the authors have predetermined the previous data contained for unmeasurable areas.
    Hence there is a correcting factor that infuses and influences every output figure for the unknown areas in a positive way.
    The easy way to prove this is to run tests for smaller unknown areas around the edges of the unknown areas and to see what proportion of them are positive to negative.
    If the programme was unbiased then some areas would show cooling.
    In a true programme this would be fifty percent.
    In a slightly biased programme this might be twenty percent cooling.
    In a programme claiming an IPCC like no pause I would state that not one test would be able to show an area in the unknown areas that had cooled.
    The reason that the method works in the known unknown areas (that is when you subtract a known area and retest using the models) is that the programme may be set to only correct for the “bias” in unknown areas.
    This would be shown by having fifty percent of results cooler in these areas when tested but we already know this as they state their models are very good in approximating these areas.

    • ngech says: “The easy way to prove this is to run tests for smaller unknown areas around the edges of the unknown areas and to see what proportion of them are positive to negative. If the programme was unbiased then some areas would show cooling. In a true programme this would be fifty percent.”

      That would be asking for a zero median forecast error on the known holdout cells near the edges of the unknown regions. Cowtan and Way ask for a zero mean forecast error. That is how they end up choosing the parameter s=1 for their hybrid model: It minimizes the mean forecast error near the edges of the unknown region.

      I think if you want to estimate mean global temperature, you should be aiming to minimize mean forecast errors, not median forecast errors. But unless the distribution of forecast errors is highly asymmetric, this isn’t going to be much of a distinction anyway.
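
      A toy example of the mean/median distinction: with skewed forecast errors, a method calibrated to roughly zero median error can still have a substantial mean error. The shifted exponential below is arbitrary, chosen only because its mean and median differ:

```python
import numpy as np

rng = np.random.default_rng(3)
# Skewed forecast errors: many small negatives, a few large positives.
# An exponential(1) has mean 1 but median ln 2 ≈ 0.693, so shifting by
# 0.7 gives a near-zero median with a clearly positive mean.
errors = rng.exponential(1.0, size=10_000) - 0.7

print(np.mean(errors), np.median(errors))  # mean ≈ 0.3, median ≈ 0
```

      With symmetric errors the two calibration targets coincide, which is why the distinction rarely matters in practice.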

  64. JC

    I think you’re wrong in assuming you can’t perform kriging across boundaries. This is quite routine in the geological sciences, where you have sudden changes in the expected values across faults (i.e. “attribute regions”).

    There are a number of strategies to deal with this.

    1) The most commonly used is to create a trend surface with discrete jumps across boundaries. One way to do this is to use a B-Spline on a region-by-region basis. This spline surface is then used to compute the residuals. The entire dataset now shares the same “stationary surface”. Kriging can then be carried out routinely on the residual data. Afterwards the splined surface is added back to give you your gridded estimate. The only problem here is that the autocorrelation itself may not be stationary across the boundary even after subtracting the trend surface: the variogram model may be different between regions. But I guess if you don’t have a good deal of control data on the ice this mightn’t work so well.

    2) The other possibility would be, if you have reliable soft data that correlates with temperature such as reflectance from satellite data then you could perform co-kriging or even better collocated co-kriging. But then this might be less reliable between regions (i.e. ice vs water).

    3) The other method would be to construct a series of cdfs using an indicator approach with similar detrending steps as mentioned in 1. This at least could be used to ascertain some type of range of possible outcomes at different degrees of certainty.
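
    Strategy 1 can be sketched in one dimension. This is a toy illustration, not production geostatistics: per-region means stand in for the B-spline trend surface, and a fixed exponential covariance stands in for a fitted variogram model:

```python
import numpy as np

def exp_cov(h, sill=1.0, corr_range=2.0):
    # exponential covariance model C(h) = sill * exp(-|h| / range)
    return sill * np.exp(-np.abs(h) / corr_range)

def simple_krige(xs, zs, x0, cov=exp_cov):
    # simple kriging of a zero-mean field: weights w solve C w = c0,
    # predictor is w . z; a tiny nugget keeps C well conditioned
    C = cov(xs[:, None] - xs[None, :]) + 1e-9 * np.eye(xs.size)
    c0 = cov(xs - x0)
    w = np.linalg.solve(C, c0)
    return w @ zs

# 1-D toy data in two "attribute regions" with a jump at x = 5
rng = np.random.default_rng(4)
xs = np.sort(rng.uniform(0, 10, 60))
region = xs > 5
z = np.where(region, 8.0, 2.0) + rng.normal(0, 0.5, size=xs.size)

# Step 1: region-wise trend (per-region means standing in for cd's
# B-spline surface with discrete jumps), then compute residuals
trend = np.where(region, z[region].mean(), z[~region].mean())
resid = z - trend

# Step 2: krige the residuals, then add the trend back
x0 = 4.0
est = simple_krige(xs, resid, x0) + z[~region].mean()
print(est)   # should sit near 2, not pulled toward the other region's level
```

    The point of the construction is that the kriged estimate near the boundary is built from residuals that are zero-mean on both sides, so the discrete jump lives entirely in the trend surface.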

    • cd, I don’t think the issue is kriging across boundary but across phase changes. If you are kriging to a “surface” that tends to invert its relationship should it be ice or water with the surface you are trying to krige, things can get entertaining, unless you know what kind of mode that surface is in. You might not get real results, though they will be real consistent results.

      • Oops posted that comment in the wrong place:

        Sorry capt

        I’m not sure what you mean by a phase change. If you’re kriging temperature and you account for the structural component (detrend => residuals) it shouldn’t matter.

        You’re probably right, and I’ve probably been a bit slow here. But when you talk about a relationship, is that in relation to kriging with secondary information? Could you elaborate?

      • cd

        Phase change is land/ice/water

        In geology, it would be stapling together regions with soil concentrations that border water concentrations that border vapor concentrations. You are still measuring the same molecule, but in a different regime where you have different physical, chemical and biological process that change the nature and distribution of a chemical species.

      • Howard

        I’m not sure he is talking about changes in phase as in thermodynamic phase transition. I think he means changes in the phase of a physical system where the interaction between the various components changes – well I’m assuming from what capt has stated.

      • cd:

        yeah, I’m a muddled writer.

        I presume the problem Capt and others are concerned with is air temp over land versus air temp over water versus air temp over ice. Completely different distribution mechanisms at play. You don’t want to infill air temp over ice with the same model as used over water or land.

        I agree with Judge Judy here. Get better arctic data and use that, otherwise, don’t use an infill kriging kluge.

      • Howard

        This has nothing to do with the statistic. If you model the structural component (the trend surface) then kriging should work just fine.

    • nice ideas.

      • Can you use EXACTLY the same algorithm, on the same stations, using Tmin and Tmax? If you get the same distribution of errors in estimating Tav, Tmin, and Tmax when you run without the input of a station for which you have a record, this will be a good indication that you have a robust method. If the system is not very good at or near the freezing point, the Tmin and Tmax values should diverge: you could be underestimating Tmin and overestimating Tmax.

  65. Sorry capt

    I’m not sure what you mean by a phase change. If you’re kriging temperature and you account for the structural component (detrend => residuals) it shouldn’t matter.

    You’re probably right, and I’ve probably been a bit slow here. But when you talk about a relationship, is that in relation to kriging with secondary information? Could you elaborate?

    • cd, not a problem. Pretty much everything hinges on having an evenly distributed error. Since you are looking for temperature, latent heat causes a bias, since it is releasing heat that isn’t changing the temperature you are looking for. In the Arctic it is super confusing because you have mixed-phase clouds and humongous temperature inversions that are at the worst possible altitude and range for the satellite data. Since 1976 with the climate shift you have one latent bias that shifted to another around 2000. If the satellite data were better, or if the average altitude of both the Arctic and Antarctic were the same, then you would have less bias and a reference to check your work. As it is the Antarctic looks great, because it is hypocool and not much latent heat, while the Arctic is in hyper-latent mode with Sudden Stratospheric Warming events to beat the band. Kriging has the worst conditions possible.

      I think they could average the lower and middle troposphere data to get a better reference or get with Dr. Roy for a more specialized product, but right now it looks like their method needs some northern polar tweaking.

      • Thanks capt.

        I’ll defer to your greater appreciation of the physical uncertainty on the issues you raised. But with all due respect I can’t see how this relates to surface controls and creating a gridded attribute map. It might rule out co-kriging, but again the surface information, even if sparse, could still be used. It’s almost a given that the associated kriging variance data would likely be dominated by the maximum kriging variances (i.e. residual variance), but with the caveat that the confidence in the trend surface (the B-spline trend with abrupt breaks) may be too high and therefore the real variance may be even higher. Surely, this does not rule out kriging of surface data. As I say, an indicator approach to continuous data (temperature in this case) at least would give us some insight.

      • cd, ” Surely, this does not rule-out kriging of surface data.”

        No, it just adds a layer of complexity.

      • Hmmm….

        capt, I don’t think a statistic adds complexity. The statistic if anything is reductionist and can smooth over complexity.

      • cd, if you have enough data to know you have a reliable convergence on a mean. There was a regime shift in 1976 and another in 1998ish, which may be parts of 30 to 300 year sequences. 17 years is likely not long enough to do much, assuming you are after 0.05 C of additional accuracy. The complexity would be literally adding more data layers: Lower Troposphere and Mid Troposphere plus surface, to make sure one is not biasing the whole process. NW and MattStat would be better at explaining alternate checks; that is just my opinion.

  66. Judith Curry

    I don’t think Cowtan and Way’s analysis adds anything to our understanding of the global surface temperature field and the ‘pause.’

    Not being an expert, I’ll go with that.

    But it was apparently a good try.

    Max

  67. Why would you ever use data against the advice of those who gather it? The UAH satellite temperature people, who are the experts on how this data was gathered, say that their measurement technique cannot be used reliably to measure temperature over sea ice. What incredible arrogance to override this warning.

    We’ve seen several other instances in climate science of people using data despite explicit advice from those gathering it that it was unsuitable for purpose.

    The Tiljander sediment proxy. The researcher who gathered the data explicitly warned that the data was contaminated and hence unreliable beyond a certain date. Mann used it anyway (the hockey stick shape arising from the contamination may have attracted his interest) and subsequently made himself a laughing stock by applying a negative weighting (i.e. using the data upside down.)

    Tree-ring data from Bristlecone pines. There have been several explicit warnings from those gathering this data that Bristlecone pine ring measurements are not a reliable temperature proxy (they are really a moisture proxy, and are also contaminated by inconsistent bark stripping). Despite this, many global temperature reconstructions continue to place heavy weight on tree-ring data from Bristlecone pines.

    What is it about climate science that lets people think it is OK to use data in a manner which the person who gathered it has explicitly warned against? I find this extraordinary.

  68. Look out windmills. Solar panels are about to replace you as the greatest mass murderer of the avian kingdom.

    http://motherboard.vice.com/blog/solar-plants-are-burning-birds-wings

    “When it comes to death by solar farm, birds typically die in one of two ways. In the first, the glimmering sheer of solar panels might trick birds into thinking they are actually part of a body of water. And so the birds, especially waterfowl in this scenario, dive towards the panels, looking for moisture and food, only to find themselves, bones broken, dying in the middle of the arid California sand.

    Blunt force trauma aside, others feel the wrath of the harnessed sunlight. At the right (or really, wrong) angle, the potent radiation bouncing off solar mirror’s are enough to burn a bird’s fragile wings, abruptly sending the creature downward towards the ground and impending death. They’re like tragic avian Icaruses, except without an easily digestible moral lesson behind their fatal crashes.

    Anthropogenic threats to birds are everywhere….”

    Anthropogenic specicide. We have to kill the animal kingdom to save it.

    • As killers of birds, skyscrapers, automobiles, and house cats dwarf the numbers killed by windmills.

      They have placed windmills in the flyway of the Whooping Cranes. That I know of, not a single Whooper has been killed by a windmill. While hiking through the windmill farm that occupies my boyhood playground, South Gulch, I ran into an ornithologist who was there doing a long-term study on bird kills. Had she counted any dead birds that day? No. Had she counted any dead birds that week? Two. Did she think windmills pose a risk to the Whooping Crane? No. To birds in general? No.

    • Solar panels are about to replace you as the greatest mass murderer of the avian kingdom.

      False. Buildings kill far more birds than do solar panels or wind turbines. Where is the outrage?

    • “I focused the magnifying glass
      That brought the downfall of Icarus.
      I focused the magnifying glass
      That brought the downfall of Icarus.
      Balloons were easy; a simple pin;
      Or a knife in the case of the zeppelin.

      Robert Calvert – The Song Of The Gremlin (Part 1)

    • I think we skeptics all need to chip in and buy the warmists a collective sense of humor.

  69. R. Gates aka Skeptical Warmist

    David Appell asked: November 14, 2013 at 6:36 pm |
    “Judith, can you please answer this directly: do you agree that kriging is at least no worse than current algorithms for areas that have no coverage?”
    —-
    Did she ever answer this? Sorry, it was a great question to her but I did not see a direct answer.

    • I’ve answered this elsewhere. Kriging is fine for Africa, but not fine for the Arctic Ocean, where land temperatures are used for extrapolation. So kriging in this instance is worse than using climatology. If you are trying to get a trend in surface temperatures of the Arctic Ocean, say for the last two decades, you need to do some real work with existing field observations in the Arctic Ocean plus satellite data. The satellite data are highly problematical in the Arctic, with very cold surface temperatures, temperature inversions, a highly heterogeneous surface, and difficulties in discriminating surface from clouds in both the IR and shortwave. So this is not a simple problem.

      • Judith, you are avoiding this question: is kriging any worse than infilling, over physical boundaries?

      • Dr. Curry, Appell and Gates seem to think they have you on the witness stand, and can demand yes or no answers from you.

      • Hi Prof. Curry,
        On Lucia’s blog, Robert Way comments that there has been too little focus on the cross-validation aspects of the paper in some quarters. I inferred that to be you. In any case, do you have any comments regarding that?

      • R. Gates aka Skeptical Warmist

        NW,

        This is Judith’s blog and I “demand” nothing from her. We are having a rational discourse and are asking for clarification. This is how conversations flow as people normally discuss things of mutual interest. There are many science blogs where this does not happen. I am always quite appreciative of Judith’s willingness to carry on a dialog in such an open manner. She (hopefully) is setting a standard that many other scientists will someday follow.

      • Actually, Robert Way does pointedly address Prof. Curry’s criticism by noting that she hasn’t placed her critique in the context of the cross-validation tests. Link here

      • Kriging is fine for africa, but not fine for the Arctic Ocean whereby land temperatures are used for extrapolation.

        And

        Judith, you are avoiding this question: is kriging any worse than infilling, over physical boundaries?

        Wouldn’t these be rather easy statements to evaluate? What happens if kriging is compared to other methods as a form of validation, using measurements in areas where SSTs can be obtained directly? Which methods match the measured temps more accurately?

        I know I’m not very bright, so there must be something about this that is more complicated than what I can understand, right?

    • R Gates — I haven”t seen a direct answer either.

    • R. Gates aka Skeptical Warmist

      If I understand her, she is saying kriging is okay for land areas like Africa, but not for the Arctic– and I guess by extrapolation she is suggesting no coverage is better for the Arctic than kriging? The results from Cowtan & Way would seem to disagree with that perspective quite pointedly.

    • R. Gates: Maybe that is what Judith is saying, but I can’t be sure. To me, frankly, it seems like she is avoiding the question.

    • NW: No, I don’t think that. But I think I’ve asked a legitimate question, that deserves an honest answer.

      • Your question looks rather illegitimate to me. Current algorithms for areas that have no data in the Arctic are close to worthless. Kriging in the Arctic is also close to worthless. Dr Curry has clearly stated that this is her opinion. In doing so I believe she has answered your question. You seem to be complaining that she dodged your cunning semantic trap by refusing to give a yes or no answer. Not all questions which demand a yes or no answer deserve one.

        Do you agree that kriging is at least no worse than current algorithms?

        Don’t you agree that making up data is no worse than using wrong data?

        Have you stopped beating your wife yet?

        Are you planning to give up snorting cocaine?

      • Ian H wrote:
        Kriging in the Arctic is also close to worthless.

        How is it worse than the current method of either ignoring the uncovered areas, or infilling?

        That’s all I’m asking.

      • David Appell commented on Uncertainty in SST measurements and data sets.
        “How is it worse than the current method of either ignoring the uncovered areas, or infilling?
        That’s all I’m asking.”

        So you’re trying to make sense of a hodgepodge of measurements, and use it for a multi-tens of trillions of dollar investment that is based on the utmost accuracy of your measurements, and you think it’s okay to just make up data where it’s missing? And that somehow makes it better?

        Also, when you stop adding made-up data to your measurements, it looks much different than with the made-up data?

        The difference between these is the tens of trillions betting on the outcome, and you might say, ‘hey, we’re only scientists just trying to find another piece of the puzzle on how things work’. That’s fine, but someone should be telling all of the activists the data is not up to the task of making policy decisions.

      • The article implies it is mostly the extra UAH satellite data that helped fill in the Arctic. Kriging was used for other areas where it worked better. See the youtube video, too.

    • Joshua wrote:
      Wouldn’t these be rather easy statements to evaluate?

      Cowtan & Way did do such a validation test.

      But for some reason Judith won’t respond to the question about whether kriging is worse than infilling over physical boundaries. I think I know the reason why.

      • Dr Appell,
        She clearly says they both fail over physical boundaries, and that it’s better with Mosher’s 1st option, that is, no data. If you really want an answer, why don’t you just ask simply, ‘in general, is kriging better (or worse) than infilling?’ In other words, ‘don’t cross physical boundaries, it’s a waste of time’ is not an option in your questioning. Otherwise it is, and she answered you. So my question to you is: why include ‘over physical boundaries’? I ask that since it would still get you to what you think the reason is, and it should also stop her from answering if you’re right.

      • She clearly says they both fail over physical boundary

        I don’t see that she’s said that. Where?

      • So my question to you is why include ‘over physical boundaries’.

        Because that is the limitation she claimed for kriging. And some of us don’t see why it doesn’t also apply to the two other interpolation methods used. Do you?

      • Matt and mwgrant and NW have an interesting discussion related to their validation, at least. Seems that Judith is content to argue by assertion.

      • She had already answered when you asked:

        http://judithcurry.com/2013/11/13/uncertainty-in-sst-measurements-and-data-sets/#comment-413382

        Infilling by Kriging within Africa makes sense. Kriging or any other kind of infilling that uses data from land to infer something about ocean temps makes no physical sense.

      • Sorry — the question she’s avoiding is, does it make less sense than other interpolation methods?

        Some of us do not see how it is worse.

      • David Appell,

        Some of you don’t appear to see much at all. Who cares? Just keep reading the Book of Warm. Remember, The Truth is Out There!

        Live well and prosper,

        Mike Flynn.

      • Because that is the limitation she claimed for kriging. And some of us don’t see why it doesn’t also apply to the two other interpolation methods used. Do you?

        Near as I can tell she does think it would apply to all methods.
        As for myself, I wouldn’t know; I’m not a scientist, but I’d probably want to try everything just to see, since there is a perceived data void. I believe Mosher probably thinks that.

        That leads me to another question. I’m reading a paper about CO2 (linked from Mosher) that has the Stefan-Boltzmann law and Planck. I kind of have the hang of that, but I haven’t looked at any trig (quadratic formula etc.) since I reupped myself back in the 80s. Since I never use it I’ve completely forgotten it. Would your math book get me up to speed? Would I be able to understand simple equations like that?

      • Where does she say it applies to “all methods?”

        She made a big deal of saying kriging doesn’t work here. But now she’s avoiding the question of whether other algorithms work better or not. Her silence can only be taken as a telling admission….

      • David Appell,

        Which is better?

        Zero, a number indistinguishable from zero, or a number very, very, very close to zero?

        Quickly, lad! Don’t just sit there looking bemused and awkward!

        Live well and prosper,

        Mike Flynn.

      • Infilling by Kriging within Africa makes sense. Kriging or any other kind of infilling that uses data from land to infer something about ocean temps makes no physical sense

        ….or any other kind of infilling that uses data……

      • ordvic: I am interested in Judith’s opinion on this, not yours.

      • I agree my opinion is worthless that is why I pointed to hers.

      • Judith hasn’t answered the question, though she’s been asked several times.

        (That, itself, is an answer.)

      • well if you would repeat the question rather than just complaining that I haven’t answered it, i might know what you are referring to.

      • oh I see you think i haven’t responded to the kriging issue. Yes you can ‘mechanically’ krige across boundaries, but whether this gives you anything useful is a different story.
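        For illustration only: here is a toy sketch of ‘mechanically’ kriging across a sharp land/ocean-style boundary (made-up numbers and a simple Gaussian covariance; this is not from the paper, just a minimal ordinary-kriging example). The solver happily produces a blended value at the boundary whether or not that value means anything physically.

```python
import numpy as np

def ordinary_krige(xs, ys, x0, length=1.0):
    """Ordinary kriging estimate at x0 from observations (xs, ys),
    using a Gaussian covariance with the given correlation length."""
    cov = lambda a, b: np.exp(-((a[:, None] - b[None, :]) / length) ** 2)
    n = len(xs)
    # Kriging system: [[C, 1], [1^T, 0]] [w; mu] = [c0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(xs, xs)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(xs, np.array([x0]))[:, 0]
    w = np.linalg.solve(A, b)[:n]   # weights sum to 1 via the constraint
    return float(w @ ys)

# "Land" stations (x < 0) run warm; "ocean" points (x > 0) run cold.
xs = np.array([-3.0, -2.0, -1.0, 1.0, 2.0, 3.0])
ys = np.array([10.0, 10.5, 10.2, 2.0, 2.2, 1.9])
est = ordinary_krige(xs, ys, 0.0)
print(est)  # a value blended across the land/ocean boundary
```

The estimate at x = 0 splits the difference between the warm and cold sides; the method itself raises no objection, which is the point: whether the interpolated value is physically useful is a separate question.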

      • David Appell,

        Maybe Prof Curry is not replying to you, like you are not replying to me.

        She might be more capable of restraining herself. What do you think? What leads you to think she cares what you think?

        I certainly don’t, and she is probably more clever than I.

        Live well and prosper,

        Mike Flynn.

        Well, it is a circular argument now. I know what you’re implying with her so-called non-answer; it’s not important to me.

        You didn’t answer my question about your book?

      • It’s not a circular argument — she is refusing to answer the question.

      • You didn’t answer my question about your book?

      • David Appell

        Without getting into the nitty-gritty details on “Kriging in the Arctic”, let me ask you this question:

        Is it wiser to do something you already know is stupid rather than do nothing?

        Max

      • David Appell wrote:
        It’s not a circular argument — she is refusing to answer the question.

        She answered the question. You refuse to notice her answer. You sound like a stuck record, bleating on and on about your silly question. You’ve stepped outside the bounds of rational, civilised argument and are arguing like a spoiled 2-year-old. Grow up.

  70. R. Gates aka Skeptical Warmist

    Judith said:| November 14, 2013 at 6:18 pm |
    “Regarding a global average temperature, i don’t think this is prima facie important. I think the key issue is comparing model simulations with observations, implying making the comparison in regions where you have the observations. This is what Ed Hawkins did, and I think this is the most illuminating thing that can be done with the ‘global’ surface temperature data.”
    —–
    Very interesting perspective if all you cared about were the current model simulations, but what if you actually want to learn something about the dynamics behind how the actual climate system works? We know from the beginning that the models will eventually diverge from the actual system; chaos theory tells us this much. The models are always going to be wrong. However, by carefully observing global average temperatures we can tell a great deal about net energy flows to and from that part of the system. Global average temperatures rise and fall with net energy flows from both the sun and the ocean. Recording these rises and falls is the observational side, and it forms the foundation for the next step: matching these rises and falls with known natural and anthropogenic forcings as we develop models that explain the dynamics behind them (the theoretical side). Accurately measuring the rises and falls in global temperature is therefore exceptionally critical and useful, as this data is imperative as we refine the models to more accurately reflect the dynamics being observed.

    • Jim2 wrote:
      In this case, laying out some sensors around the Arctic would be much better than an educated guess.

      And why, exactly, do you think that hasn’t been done?

        This is one thing that gave me a clue (from the post):

        “The Guardian has an extensive article, excerpts:

        There are large gaps in its coverage, mainly in the Arctic, Antarctica, and Africa, where temperature monitoring stations are relatively scarce.”

      • jim2 wrote:
        There are large gaps in its coverage, mainly in the Arctic, Antarctica, and Africa, where temperature monitoring stations are relatively scarce.”

        And how, exactly, would you fill those gaps?
        Say, in the Arctic, in a region that is water in the summer, ice in the winter?
        Hmm?

      • So, David, do you have a map of the sensors handy?

      • JimM wrote:
        So, David, do you have a map of the sensors handy?

        Did you not find what you were looking for at the GHCN?

        OK, David, I found a couple of maps. There are a handful of sensors there. But apparently the GSOD network has a good deal more than GHCN. Point taken, nevertheless. (I am here to learn.) So, I didn’t say it would be easy to add sensors, but it is still doable. Floats in the summer, drop cheap probes by air on ice.

        http://moyhu.blogspot.com/2010/07/arctic-trends-using-gsod-temperature.html

        So, I didn’t say it would be easy to add sensors, but it is still doable. Floats in the summer, drop cheap probes by air on ice.

        What good is dropping probes on ice that will melt in the summer?
        What is a "cheap" probe?
        These things are expensive. Grants are difficult to get, especially with today's Congress. Who do you propose pays for all this: the sensors, the plane flights, the data analysis, etc.?

        David, it would be better to divert a good deal of money from expensive climate models to projects like getting better temperature and other measurements of the Arctic and Antarctic. More data will help us understand what it is we are trying to model. More data will make statistical exercises like this much better than without adequate coverage. I stand by my overarching point that more data is better than more statistical inference or models.

      • jim2 wrote:
        David, it would be better to divert a good deal of money from expensive climate models to projects like getting a better temperature and other measurements of the Arctic and Antarctic.

        Why?

        And I’d still like to know how you intend to put a sensor in a region that is ice half the year and water otherwise…..

      • Well, we could hang them on the drilling rigs.

        David, looking at those maps, there is a striking step jump around 2005 relative to the chart before that year.

        http://moyhu.blogspot.com/2010/07/arctic-trends-using-gsod-temperature.html

        The WFT UAH chart shows nothing notable in 2005.

        http://www.woodfortrees.org/plot/uah/from:1979

        It looks like the 2005 jump is heading back down.

        http://www.yaleclimatemediaforum.org/pics/0710_records4.jpg

        I can’t find a more recent plot. Sea ice started getting lower around 2005.

        http://www.climate4you.com/images/NSIDC%20GlobalArcticAntarctic%20SeaIceArea.gif

        I guess it’s hard for me to visualize how the smoothly rising CO2 concentration can produce step jumps like that. Some mechanism like the stadium wave seems more plausible.

      • “And I’d still like to know how you intend to put a sensor in a region that is ice half the year and water otherwise.”

        Ahhh David. I know you are smarter than this question would lead us to believe.

      • JimM: I’d still like to know how you intend to put a sensor in a region that is ice half the year and water otherwise.

      • Well, David. People have shot sensors into tornadoes, flown air planes through hurricanes, gathered data from the sea with floats, shot rockets into the sky, balloons, the list is long. You seem to be hung up on the idea that only one sensor can be used for water and ice. I don’t feel compelled to persuade you. At this point, the Denizens may decide for themselves.

    • jim2: Tornado sensors aren’t permanent, are they? Nor are any of the others you list. But, if you want to understand changes in the Arctic, don’t you need sensors that stay approximately in the same position? (Yes, you do.)

      • No. I am surprised that someone who apparently supports statistical contortionism would quibble with a suite of sensors distributed in time and space. Each to his own, I suppose.

      • You don’t want permanent sensors in the Arctic? Interesting. And what will you conclude when they all drift, say, to the East and bunch up in some cove?

      • David Appell,

        I know you didn’t ask me, but as you clearly stated you weren’t going to reply to me again, why not respond?

        He might well conclude that the sensors had moved, and had bunched up in some cove.

        Nice try. You don’t need to respond.

        Ask a silly question, get a silly answer. If you really want information about global warming or cooling, feel free to ask – better to ask a silly question than to make a silly mistake.

        Live well and prosper,

        Mike Flynn.

    • R. Gates

      An “accurate” measurement of “the rises and falls in global temperatures” would, indeed, be a nice thing to have.

      We obviously do not have such a thing today, because the separate records all show different results.

      They do show us general trends, however (which are undoubtedly useful): such as the observed multi-decadal cycles from warming to slight cooling lasting ~30 years each, and the long-term underlying warming trend of around 0.6C per century.

      Recording these rises and falls is the observational side, and it forms the foundation for the next step: matching these rises and falls with known natural and anthropogenic forcings as we develop models that explain the dynamics behind them (the theoretical side).

      This is also true, of course. But there is still very much that climate science does not yet know about what makes our climate behave as it does (unknown natural forcings and mechanisms), leading to great uncertainties regarding the value of the model outputs, especially for trying to make projections for the future.

      And there is a logical trap that should be avoided when evaluating model outputs: the “argument from ignorance” (classically: “our models can only explain this if we assume…”).

      Unfortunately, IPCC uses this false logic in attributing “most” of the late 20thC warming to increases in human GHG concentrations.

      And it compounds this logical fallacy by adding that it is “95% certain” this is the case.

      Max

  71. These sorts of papers make for an easy publication. Well, at least they do if you actually understand the statistical techniques you attempt to use.

    In this case, laying out some sensors around the Arctic would be much better than an educated guess.

    That goes for Steig’s paper also.

    There was another paper, discussed on Climate Audit and I believe here on CE as well, that made a study of mosses: http://climateaudit.org/2013/10/26/18501/

    While this is an interesting attempt, it would be much better to take cores of the ice cap. That might tell us how the various layers of the ice have moved: did the base flow faster than the top and middle? And also, the moss at the base of the ice could be sampled to see if the pattern of ages of the moss is what we believe it to be.

    The best we have isn’t the best we can do.

  72. The approach uses arctic amplification in the missing areas as the excuse for bunging in an amplification factor.
    Presumably they also used it in the models for the Antarctic when, given the large increase in the ice sheet and albedo over the last 15 years, they should have used an Antarctic de-amplification factor.
    Are the authors prepared to state whether the algorithms and kriging used differ for the northern and southern hemispheres? If not, why not?

  73. Judith says:

    So I don’t think Cowtan and Way’s analysis adds anything to our understanding of the global surface temperature field and the ‘pause.’

    Notice, that she shows no uncertainty w/r/t whether there has been a “pause.”

    You gotta love the logic.

    We can’t actually measure a global average temperature. But we know that measuring it shows a “pause.” Well, at least some ways of measuring it show a pause. Of course, those ways of measuring it aren’t as accurate as other ways of measuring it that don’t show a pause. But even though it can’t be measured, the less accurate way of measuring it shows a pause, so although we don’t know whether there has been a pause and even though the more accurate ways of measuring it don’t show a pause we still know there’s been a pause.

    Oh, and don’t forget, even though we believe that acknowledging uncertainty is important, we know that there has been a pause for certain even though it isn’t really certain.

    Judith – call for you on line #2. It is Uncertain T. Monster – and he doesn’t sound happy.



  75. Joshua,

    You create an illogical argument based on assertion.

    You then demolish your own argument.

    I am not sure how this is supposedly Prof. Curry’s fault.

    Too much study of the Book of Warm can lead to softening of the brain. Have you some facts to show the Earth is not cooling – perhaps supported by “real” scientists?

    Live well and prosper,

    Mike Flynn.

    • Truthfully Mike, I don’t know how you can stand the guy. Troll of the first order with some sort of sick, psychosexual obsession that causes him to seek negative attention from Judith.

  76. Dr. Strangelove

    @Joshua
    The logic is simple. When measurement methods previously used and accepted show warming, it is deemed logical. When the same methods show a warming pause, it is deemed illogical.

    Let’s assume the first statement is true. Prove why the second statement must be true. Without proof, we conclude one statement must be true and the other false.

    Do you love the logic?

    • Dr.

      The logic is simple. When measurement methods previously used and accepted show warming, it is deemed logical. When the same methods show a warming pause, it is deemed illogical.

      Let’s reverse the logic.

      You say: “When measurement methods previously used and accepted show warming, it is deemed logical.”

      By whom? By “skeptics?” Which “skeptics” deemed that method “logical?” The ones who say that you can’t measure global temps because it is a meaningless concept? The ones who say that the methods for measuring the temps are invalid? The ones who say that the methods for statistical analysis of the temps are invalid?

      We have now picked off a large group of “skeptics.”

      Many of those same “skeptics” are now saying not only that those methods are valid (so as to prove a pause), but further, that those methods are certainly better than alternative methods that were specifically designed to address (at least some of) the criticisms leveled against the previous methods (which they didn’t accept).

      Have you noticed how Mr. Monster is generally here when global average temps are being discussed, but just picks up and races out of the room as soon as “skeptics” start talking about the “pause”?

      We have seen this same dance before, with BEST.

      • Dr. Strangelove

        Try the logic forward and backward. Assume deemed by skeptics. Then assume deemed by believers. Does it prove both statements are true? Does it prove both statements are false? Or the same conclusion: one statement must be true and the other false.

        You gotta love the logic.

      • Dr.

        My perspective is that both sides pick and choose their arguments based on, basically, what they are “motivated” (in the sense of motivated reasoning) to prove.

        What seems to matter to me, here, is that a method has been proposed as an improvement – specifically intended to address problems with other methods. I wonder what the reaction from “skeptics” would have been had the method been proposed as a generic method – and not in a context related to identifying existing temperature trends or whether there has been a pause. Even if “skeptics” might have rejected the method under such circumstances, they certainly would not likely have done so with the uniform certainty that we see in this thread.

        At any rate, with my limited knowledge and even more limited brain power, it seems to me that it should be fairly straightforward to compare the validity of this method to the validity of other methods that “skeptics” (and perhaps “realists”) alternately think are valid and invalid depending on which results they want to highlight (and when). As I understand it, the authors have attempted such a process of validation. It seems illogical, to me, to say the method isn’t valid without addressing why their methods of validation, relative to other methods, are invalid.

        Does that make any sense?

      • Joshua,

        Nature doesn’t care whether what you or I say makes sense or not, as far as I can see.

        The Earth is either warming or cooling, as an entity.

        The “surface”, however defined, obviously warms and cools at different times and locations, for a variety of reasons.

        I believe that “real” scientists of the Earthy variety have established that the Earth is losing heat at the rate of around 44 TW. If it is losing energy, it sure as heck isn’t warming. Not even maintaining temperature. By definition, it’s cooling.

        As to the surface, you can no doubt think of many reasons for localised warming, ranging from oxidising carbon compounds to maintain life, to tidal friction, energy released as conversion loss and so on.

        So the Earth is cooling as it has done for 4.5 billion years or so.

        The surface warms, cools, or remains the same based on local conditions at the time.

        Let me know if you require further information.

        Live well and prosper,

        Mike Flynn.

  77. I get the impression that since UAH’s temperature trend over the Arctic has been used to disconfirm the pause, skeptics now want to throw Spencer and UAH under the bus rather than give up the pause. Interesting dilemma. What does Spencer think about his UAH Arctic measurements? Better than nothing as claimed in this study, or worse than worthless (degrading results when used)?

  78. OK, maybe CAGW zealotry isn’t the worst killer of birds. But it apparently is the worst killer of 507-year-old clams.

    http://www.dailymail.co.uk/sciencetech/article-2505155/Worlds-oldest-creature-confirmed-507-years-old-scientists-KILLED-shellfish.html

    By way of Mark Steyn at NRO. Who just posted an interesting update of the Mann v. Steyn litigation.

    http://www.nationalreview.com/corner/364032/my-new-friends-mark-steyn

  79. The board game geek do-it-yourself skeptical science warm-the-world project. Love it.
    Made by scientists for scientists who want to warm the world.
    Skepticism?
    The arctic amplification is in full swing here,
    despite the Antarctic diminishing polar warming with increased albedo.
    A toy model introduced just in time to deny the pause, with the most unbelievable adjustments, by a confirmed warmist.
    Wake up world. Wake up modellers.
    Put this where it belongs: in a bin.

    • Why don’t you calculate for us the forcing of the albedo change in the Antarctic, versus in the Arctic.

      Please show your work.

      • David Appell,

        Probably because he doesn’t care. But I’m only assuming. If he actually thought that your approbation counted for more than a number indistinguishable from zero, he probably would have responded.

        Oh well, we can’t all be appreciated as well as we would like!

        So sad. Too bad.

        Live well and prosper,

        Mike Flynn.

      • Dear David
        Gavin Schmidt and Eric Steig, 3/12/2004, Real Climate: there is widely cited two-decadal Antarctic cooling.
        This increases albedo, as you well know.
        Same source as Kevin C, the board game geek of Real Climate 2011, who graduated from a simple 1D statistical climate model (what’s this?)
        to being the Kevin Cowtan (or whatever it is) of the current study, 2013.
        So it should be genuine.
        Dear Mike, thanks for your input, very appreciated.

  80. 2011: works out his own basic model of global warming at home. 2013: publishes a Science of Doom-like supermodel of kriging. I have to admire his chutzpah
    and computer skills, though. Obviously way above me and most people here.


    • angech | November 15, 2013 at 2:13 am | Reply

      2011: works out his own basic model of global warming at home. 2013: publishes a Science of Doom-like supermodel of kriging. I have to admire his chutzpah
      and computer skills, though. Obviously way above me and most people here.

      angech shows his Aussie jealousy over the fact that Kevin Cowtan is a one-man wrecking crew and KC possesses skilz that he lacks.

  81. Is this cross-validation of the kriging method for land-to-ocean anything akin to Prof Mann’s demonstration that the retention of strip-bark bristlecone dendro data improves the validation statistics of his temperature reconstruction, and therefore mandates that it be kept in, despite obvious physical questions about the samples themselves?

    Is it possible that without the proper physical basis, even demonstrable validation can still be spurious or coincidental? Or is this impossible?

  82. mwgrant. I did a quick test to estimate the impact of kriging temperature instead of energy, which is the issue when crossing land/ocean boundaries and layers.

    If you use absolute temperature for 11 degree latitude bands of SST versus approximate energy for the bands to determine an “average” SST, there is an average 0.4C difference between the two. So by kriging temperature, which depends on the different thermodynamics of the surfaces and boundary layers, you have a ~0.5C margin of error for temperature/energy relationships from -2C to +30C. Since most of the SSTs per band are greater than the mean, the global anomaly would be biased low.

    When you krige in the polar regions, where the mean is lower, your regional values would be biased high. At least that is how it works out from this simple check.
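    A rough sketch of this kind of check, with made-up band temperatures rather than the actual SST data (the quadratic latitude profile and its coefficients below are purely illustrative):

```python
import numpy as np

# Toy SSTs for 10-degree latitude bands, 60S-60N (illustrative values only).
edges = np.arange(-60, 61, 10)                  # band edges in degrees latitude
mid = (edges[:-1] + edges[1:]) / 2.0
sst_c = 28.0 - 0.008 * mid**2                   # warm tropics, cooler high latitudes
weights = np.diff(np.sin(np.radians(edges)))    # band area fraction on a sphere
weights = weights / weights.sum()

t_k = sst_c + 273.15
mean_t = np.sum(weights * t_k)                  # plain area-weighted temperature
mean_e = np.sum(weights * t_k**4) ** 0.25       # effective "energy" temperature
print(mean_t - 273.15, mean_e - 273.15, mean_e - mean_t)
```

Because T^4 is convex, the effective (energy-based) temperature always comes out above the plain average, so averaging temperature instead of energy gives a low-biased result, consistent with the bias direction described above.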

    • Hi Captain

      I have just noticed your comment, and responding will take a little time; it has caught me by surprise. I’ll get back here in a few hours. In the interim, could you note here which of my comments led you to make the calculations.

      Regards, mwgrant

      • I might add that I think that much of what constitutes a kriging effort is probably substantially different from what most people here imagine. In a nutshell, my advocacy/interest is taking a measured, distinctive geostatistical approach in which kriging is only one of the steps, not something employed as it were from a bag of interpolation schemes. Other components are the upfront development of predictive relationships (including error estimation) between variables if co-kriging is employed, variography, and cross-validation. [A lot of effort and a lot of time. I’m probably leaving stuff out because I am writing off the cuff on material from four or five years back.]

      • mwgrant, I was afraid you were going to ask that :)

        “3.) These two facets of a geostatistical approach used in tandem make it attractive and potentially viable.”

        That got me thinking that, since the temperature data is in absolute rather than anomaly form, kriging temperature and kriging energy based on the absolute temperatures would allow a type of co-kriging double check.

        My quick test indicates that for the oceans there should be about 0.5C of error, an order of magnitude greater than indicated, for absolute temperature, but about the same for anomaly. “Globally” it should be biased low because there is more latent energy associated with the higher SSTs. Regionally, near the poles, it should be biased high because more latent energy is transferred to the poles. So if you can adjust your “global” mean to reduce the biases, you should be closer to relating temperature to global mean energy.

        That may not be the greatest check in the world, but since I am more interested in energy than temperature, it might be interesting.

      • “I was afraid you were going to ask that :)”

        And I appreciate that you answered; I always try to have wriggle room!

        Should I continue with my longer comment or belay it? It will take time, but it forces me to think about things, and that is why I do this. But it may distract from your interests/direction.

        While writing from a position of non-trivial ignorance on the particulars, I will note that co-kriging without a thorough quantification of the relation between the two co-variables, including errors associated with any final predictor, risks arriving at a wrong conclusion. BTW, were I looking at this particular issue, I would first try to become very familiar with what Cowtan and Way did regarding the UAH-temperature relationship. Among other things, it would develop a feel for the problem given what they had in hand (quality, coverage, etc.) and how that influenced the way in which they proceeded. Just a thought.

        Dropping out of sight now to carry on with life.

        mwg

        PS One last thought just popped into my mind. The Cowtan and Way hybrid probably was not co-kriging, though maybe their kriging approach used it. [I think they would have been explicit, using the term.] Guess I’ll quit poking there because I’ve lost patience trying to glean sense off scrolling blog screens. Physically demanding, and it still has a high error rate on my part. Now I wonder what the other two papers were about?

    • mwgrant, since you mention EDA, I think the first step is just looking at your data.

      https://lh3.googleusercontent.com/-MeiWxxIGNUI/Uod3JWbPoMI/AAAAAAAAKiE/jv6JNGLAzQU/w675-h433-no/10+degree+bands.png

      That is SST using the actual temperature estimates in 10-degree latitude bands. The blue lines are the range at the start of the data and the red lines the range at the end. The total range is about 1.5C, with the blue range a little over 1C and the red range a little under 1C, so there is about a 0.5C overlap of the ranges. That is about +/-0.5C uncertainty. No novel stats required.

      Then I can take those 10-degree bands and the areal contribution of each band, and use one approach to convert directly to an average temperature; and, since energy is the main object, convert each band to an effective energy, combine by areal contribution, and determine another average absolute temperature. There is about a 0.4C difference between the two, so we have about a +/-0.2C uncertainty.

      There is virtually no difference in the anomaly of the combined series, but since T^4 and T are not going to produce the same averages, there is an inherent 0.2C margin of error. So when I hear confidence levels that are just about too good to be true, my BS detector goes off.

      Then consider kriging arctic SST. We know that salt water freezes at around -2C. By kriging across water/ice/land boundaries, the paper finds SSTs 4C or more warmer in winter, especially during the months of DJF. We know that the lower-troposphere, mid-troposphere and lower-stratosphere relationships get completely disrupted during DJF due to SSW events breaking down the polar vortex. So we know what the winter temperature of sea-ice-covered sea water should be, and we know that the Arctic atmosphere gets completely unstable in DJF, so why should we trust kriging that has sea-ice-covered ocean jumping 2 to 10 degrees in the winter?

      Mosher says because we “know” that there is Arctic Amplification. Okay, we also “know” that sea water freezes in a tight temperature range, and we “know” the range and margin of error for the rest of the oceans that have actual temperature measurements. What wins? Basic thermodynamics or novel methods?

      • Typo alert! That should be +/-0.25C for the first and +/-0.20 C for the second.

      • Hi CaptDallas

        “mwgrant, since you mention EDA, I think the first step is just looking at your data.”

        I definitely agree. Statistics, geostatistics, graphics, etc. are among the established tools that can be used when exploring data. The hybrid is a different creature, and it seems* more to be a model than an analysis. The kriging-only calculations in the paper might be construed either as an (exploratory) analysis or a model, i.e., one facet of characterization** or a product that has uses. How one perceives it depends on the content and context of its presentation (the pay-wall again*). By virtue of my experience and prejudices I tend initially to see any geostatistical application as characterization. I feel that is quite appropriate in climate science, because numerous uncertainties plague the effort/debate/science/engineering.
        —-
        * Behind the paywall–hence the mealy wording, sigh.
        ** In reading the online comments of the authors I clearly perceive that they intend for the work to be taken more as characterization (or development) than product. I think too many people treat characterization and R&D efforts as product (for drawing inferences). Things are not so simple, and to treat research in this ham-handed way is pretty short-sighted and potentially very detrimental.

        However, I note that in the discussion above you have gone beyond looking at the data. You have made use of external information to develop a model :O). The presentation is informal, but here it is, a nascent model. A little baby model! Now it does look like junior has something to say about the hybrid. But the nascent model has limitations too, and you should look at those. Nascent looks to me like a ‘giggle test’: a simple model that one hopes captures the quantitative essence of a problem without excessive mathematical contortion. They can be useful but are also limited by their very nature, being a compromise between accuracy and complexity. If a giggle test is extremely accurate, then the calculation it tests is probably not needed anyway. …just some thoughts.

        “So when I hear confidence levels that are just about too good to be true, my BS detector goes off.”

        IMO anytime, anywhere one sees a confidence level, one’s BS detector should go off. Confidence levels are often presented with inadequate documentation. See if all of the body parts are there. In addition, if CLs are given but not discussed, i.e., applied in the text, then I am again wary. Is the paper/author leaving the reader to casual inference on their own? If you put it there, it must have a purpose. Be explicit. Then one can/should ask whether the CL or CI is used appropriately, e.g., is a confidence interval used when a prediction interval or tolerance interval is the appropriate interval?
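        As a toy illustration of that last distinction (synthetic data and a normal approximation with z = 1.96; not from any of the papers discussed):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=50)   # fifty synthetic "measurements"
m, s, n = x.mean(), x.std(ddof=1), len(x)
z = 1.96                                       # ~95%, normal approximation

# Confidence interval: where the *mean* of the process likely lies.
ci = (m - z * s / np.sqrt(n), m + z * s / np.sqrt(n))
# Prediction interval: where the *next single measurement* likely lies.
pi = (m - z * s * np.sqrt(1 + 1 / n), m + z * s * np.sqrt(1 + 1 / n))
print(ci, pi)
```

The prediction interval here is roughly sqrt(n) times wider than the confidence interval: the confidence interval narrows as more data arrive, but the spread of individual new measurements does not. Quoting one when the other is meant badly misstates the uncertainty.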

        “Then consider kriging arctic SST. We know that salt water freezes at around -2C. By kriging across water/ice/land boundaries the paper finds that SST is 4 C or more warmer in winter especially during the months of DJF.”

        …thoughts in a rhetorical vein. Is the number you quote, 4C, a kriging approach result or a hybrid approach result? Does the hybrid approach at some point entail an application of kriging (or any other interpolation scheme) across the boundary, or is it essentially a kludge? An error in a prediction can have origins other than the methodology. The approaches in Cowtan and Way involve interpolation, e.g., kriging and imputation, ancillary methods(?) and data, i.e., there are a number of places for error(s) to crop up.

        “What wins? Basic thermodynamics or novel methods?”

        I’m not aware of a contest, but if there is one you need to state the rules.
        I also protest your prejudiced wording! ;O)

      • mwgrant, “Is the number you quote, 4C, a kriging approach result or a hybrid approach result? ”

        It is just a reverse calculation of what magnitude of anomaly over 3% of the surface would be required to produce the results, looking at winter months only. There can be any number of errors in the methods, but the check is simple and straightforward.
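The reverse calculation described here is one line of arithmetic. A sketch with assumed round figures (the 0.12 C global change is illustrative, not a number from the paper): a global-mean shift confined to a small area fraction implies a large regional anomaly.

```python
# If infilling a fraction of the globe shifts the global mean anomaly,
# the implied regional anomaly is the shift divided by the area fraction.
area_fraction = 0.03   # assumed share of the globe being infilled (~3%)
global_delta = 0.12    # assumed shift in global-mean anomaly, C (illustrative)
regional_anomaly = global_delta / area_fraction
print(regional_anomaly)  # -> 4.0 C implied over the infilled region
```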

        “A little baby model!”

        Yep, the reason to assume that the hybrid kriging is marvelous is that according to some we “know” there will be arctic amplification so it is fine to ignore the blemishes in a first kriging with satellites attempt. I think it has potential and can be a valuable addition, but not when it produces unphysical results. It has been published though and has the regular crowd all giggly.

        ” Confidence levels are often presented inadequately documented.” Surface temperature is especially difficult because temperature is not that good a proxy for energy when there is a large range of temperature change. That is why SST data have higher confidence and are more useful than “surface” temperatures that have a huge range of temperatures and local densities. So while the anomaly uncertainty can be extremely small, the utility of the anomaly wrt energy and absolute temperature is a more important indication of the actual uncertainty. When the anomaly becomes unphysical, as in this case, the listed uncertainty is meaningless.

        http://climexp.knmi.nl/selectfield_obs.cgi?id=someone@somewhere

        That site will let you mask the 70N-90N region for a variety of data sets. The sst sets are all below 0C during Dec through May because the water is frozen. In 70N-90N there is ~3.8 mkm^2 of land and ~11.7 mkm^2 of ocean which tends to ice over. That 11.7 mkm^2 has considerably more specific heat capacity than the land and air above the ice.
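The quoted 70N-90N areas are easy to sanity-check with the spherical-cap formula A = 2*pi*R^2*(1 - sin(lat)); the total should come out near the 3.8 + 11.7 mkm^2 of land plus ocean cited above.

```python
import math

R = 6371.0  # mean Earth radius, km
lat = math.radians(70.0)
cap_area = 2.0 * math.pi * R ** 2 * (1.0 - math.sin(lat))  # spherical cap, km^2
print(cap_area / 1e6)  # ~15.4 million km^2, close to 3.8 (land) + 11.7 (ocean)
```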

        As for prejudiced wording my apologies, but our hostess pointed out the limitations from the get go and she appears to be absolutely correct.

      • CaptDallas

        Problems with wording? Nope, just chain-yanking :O)

        Signed Giggly-for-the-right-reasons of Team Geostats

  83. How can “kriging”, with respect to temperature, work across a phase change of water, i.e. where there is both water and ice present?

    • I don’t think it is a question of if it works but how well it really works. Plain vanilla interpolation provides about the same confidence interval, and by assuming every area with a phase transition is well behaved they eke out a little more confidence that may not be justified. It is pretty simple to test though, just have them krige the mid-troposphere :)

      • or better yet, 0-1000 meters

      • “I don’t think it is a question of if it works but how well it really works.”

        Interesting comment. People will emphasize that some action or aspect associated with a system is not physical and so you cannot apply procedure X to the data. This happens to preclude the upfront data exploration that more often than not is crucial to understanding the data, revealing relationships in the data, etc. My experience is that organizations frequently focus on getting the data, often at great expense, and give short shrift to the EDA (exploratory data analysis). That is dumb.

        So when I read or hear “that isn’t physical” used to summarily dismiss an approach or analysis, I step back and try to weigh how much is valuable experience shared and how much is some sort of prejudice/bias/inclination (quite natural in humans).

      • mwgrant, ” My experience is that organizations frequently focus on getting the data often at great expense and give short shrift to the EDA (exploratory data analysis). That is dumb.”

        I agree 100%. People seem to stop when they get an answer when there are often many more questions.

      • ” Plain vanilla interpolation provides about the same confidence interval ”

        Just which plain vanilla interpolation schemes do you have in mind regarding calculation of confidence intervals?

      • Are you sure it isn’t ‘French Vanilla’? ;O)

      • Just a simple linear interpolation as a reference; you know it is rough, but it is always there as a gut check.

      • Really got to do some stuff…but, “how did you arrive at confidence intervals?” Tack on some regression, simulation?

        “Really got to do some stuff…but, “how did you arrive at confidence intervals?” Tack on some regression, simulation?”

        Well go do some stuff :) I just use sequential standard deviations. It tends to highlight areas of interest and provide a basic confidence.
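One possible reading of “sequential standard deviations” as a gut check (my interpretation, on synthetic data): hold out alternate points, linearly interpolate them from the rest, and run a moving standard deviation over the residuals as a rough local confidence band.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 101)
y = np.sin(x) + rng.normal(0, 0.1, x.size)  # synthetic noisy track

# Hold out every other point and linearly interpolate it from the rest.
x_known, y_known = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]
resid = y_test - np.interp(x_test, x_known, y_known)

# "Sequential" standard deviations: a moving std of the residuals,
# giving a crude, always-available local confidence band.
w = 10
seq_std = np.array([resid[i:i + w].std() for i in range(resid.size - w + 1)])
print(resid.std(), seq_std.min(), seq_std.max())
```

Regions where the moving std jumps are the “areas of interest” the comment mentions.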

  84. I expand on my earlier comment. Many others have presented similar views, but perhaps it’s worthwhile to put all this together.

    Global warming is a complex process that cannot be described by a single parameter. At best it’s plausible that a single parameter can tell about the stage of the overall process. In that spirit we may ask what would be the best single index for that. Two groups of indices come to mind
    1) temperature indices
    2) heat content indices

    Temperature indices have obvious advantages:
    – (Local) temperatures are directly observable and familiar to everybody.
    – There are historical time series of temperature measurements.
    – Local temperatures affect directly environmental conditions, and have also a strong influence on other weather variables.

    Weaknesses of temperature indices include:
    – Temperature is not a quantity for which a conservation law holds. Therefore temperatures may vary up and down more easily than heat contents.
    – An average of temperatures is only that – an average. The average has no other physical meaning, while the sum of heat contents is the heat content of the combined system.
    – More specifically small changes in weather patterns may cause major changes in local temperatures even when related heat transfers are small.

    Heat content based indices do not share the weaknesses of temperature but they have their own problems:
    – Heat contents can be determined only through measurements of bulk temperatures of volumes of known heat capacities.
    – Heat content does not influence the environment directly but only through temperature.
    – Conservation laws apply in the simplest way to volumes with a heat capacity large in comparison with heat fluxes. The deep oceans have the largest heat content, but only the near-surface layers affect the environment of greatest interest relatively directly. Thus it would be necessary to decide how to define the volume whose heat content is considered.
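The contrast between the two index families can be made concrete with textbook round numbers (all figures are standard approximations, not measurements): the heat capacity of the 0-700 m ocean layer is roughly two hundred times that of the whole atmosphere, which is why a heat-content index is so much more inert than a surface-temperature index.

```python
# Rough heat-capacity comparison, all values textbook round numbers.
ocean_area = 3.6e14          # m^2, ocean surface area
layer_depth = 700.0          # m, a commonly used "upper ocean" layer
rho_sw, cp_sw = 1025.0, 4000.0   # seawater: density kg/m^3, specific heat J/(kg K)
m_atm, cp_air = 5.1e18, 1000.0   # atmosphere: total mass kg, specific heat J/(kg K)

C_ocean = ocean_area * layer_depth * rho_sw * cp_sw  # J/K
C_atm = m_atm * cp_air                               # J/K
ratio = C_ocean / C_atm
print(ratio)  # ~200: the upper ocean dwarfs the atmosphere as a heat reservoir
```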

    This leads us back to temperature indices, but which temperature index should we choose?

    Preferred properties of the index include:
    – It has a direct connection to the warming that affects our environment.
    – It should reflect the overall warming of the Earth system, not only a small part of the Earth.
    – It should be calculable reliably and unambiguously.
    – It should not have unnecessarily strong “random” variability due to local weather patterns.
    – Availability of historical data is a great plus.

    From the above we can conclude that a major part of the Earth surface should be included in the calculation of the index, but not areas that add more to the uncertainty than to any useful property of the index. Covering with a uniform weight the whole Earth would make the definition of the index conceptually simple, but seems to be contradictory with the other requirements of a good index.

    It may well be that something like HadCRUT4 is the best choice right now for most applications.

  85. A quick check: the surface area of the planet is ~197 million sq miles.
    57 million sq miles of that is land. GSoD has data from ~10,000 stations in 2012 (best year ever); that’s 5,700 sq miles per station, while weather stations even 5-10 miles apart don’t always have the same temperature. If you’re generous and give each station a radius of about 15 miles, for a circular area of about 750 sq miles, the math says we’re generously covering ~13% of the land.
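Redoing the coverage arithmetic above (figures as given in the comment):

```python
land_area = 57e6       # sq miles of land
stations = 10_000      # GSoD stations reporting in 2012
per_station = land_area / stations
print(per_station)     # -> 5700.0 sq miles per station

covered_each = 750.0   # generous effective circular area per station, sq miles
coverage = stations * covered_each / land_area
print(round(100 * coverage, 1))  # -> 13.2, i.e. ~13% of the land
```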

    For the rest of the planet’s surface, the only thing we know about it comes from satellites, maybe (and even that isn’t a “surface” measurement).

  86. A Welshman named Daffyd ap Pell,
    said “I’ll stir up the Denizens.” Well.
    The more that he stirred,
    the more wrath he incurred,
    till he wished they would all go to Hell.

    The very same Welshman ap Pell,
    thought that he’d stir Mike Flynn as well,
    but it seems that his dreams,
    fell apart at the seams,
    for Mike won, just by wishing him well.

    And now that this short story’s done,
    I’ll push off, and have me some fun.
    There’s wine to be drunk,
    and thoughts to be thunk,
    here’s wishing you well, every one.

    Live well and prosper,

    Mike Flynn

  87. To you expert Krigers out there, how good is the technique? Has its efficacy been experimentally verified where experiment is possible, and is it more effective in some situations than others? If this ground has already been covered, please point me there. Thanks.
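On verifying kriging: the standard empirical check is cross-validation, i.e. hold out each observation, predict it from the rest, and score the errors. A toy ordinary-kriging sketch on a synthetic smooth field (the exponential covariance and its parameters are invented for the demo, not fitted to anything):

```python
import numpy as np

def ordinary_krige(xy, z, xy0, sill=1.0, corr_len=2.0):
    """Ordinary kriging prediction at xy0 with an exponential covariance model."""
    n = len(z)
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / corr_len)
    A = np.ones((n + 1, n + 1))          # kriging system with unbiasedness row
    A[:n, :n] = cov(xy, xy)
    A[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = cov(xy, xy0[None, :])[:, 0]
    w = np.linalg.solve(A, rhs)
    return float(w[:n] @ z)

# Synthetic smooth field sampled at scattered locations
rng = np.random.default_rng(2)
xy = rng.uniform(0, 10, size=(60, 2))
z = np.sin(xy[:, 0] / 3.0) + np.cos(xy[:, 1] / 3.0)

# Leave-one-out cross-validation: predict each point from all the others
errs = [z[i] - ordinary_krige(np.delete(xy, i, axis=0), np.delete(z, i), xy[i])
        for i in range(len(z))]
rmse = float(np.sqrt(np.mean(np.square(errs))))
print(rmse, float(z.std()))  # LOO RMSE should beat the raw spread of the data
```

On a smooth, well-sampled field the held-out errors are small; whether that carries over to a field with land/ice/water discontinuities is exactly the question being argued in this thread.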

  88. Why does the AGW group always talk about the Arctic but not the Antarctic?

    Answer:

    http://www.washingtonpost.com/blogs/capital-weather-gang/wp/2013/09/23/antarctic-sea-ice-hit-35-year-record-high-saturday/

    • Which is further proof that most of the warming in the late 20th century is due to the positive phase of the PDO and the AMO.

      • How so? Care to elaborate why an Antarctic Sea Ice record is supposed to “prove” that?

      • Easy, Post Doc, Ergo Antarctico Hark.
        ============

      • Well, Jan, me boy, it’s this way. First, ya gotta remember that all ice is local, then put that to the side and forget it a little and bring in that the ice-melting effect of a well mixed radiatively active trace gas is generalized, and then sort consider locale again and wonder why a well-mixed melting gas isn’t melting everywhere, and then look around the locality and lo, and behold, there are oceanic oscillations nearby concatenating and discatenating as we speak, as do the cats speak of phases. Need I elaborate further?
        ============

      • No, you don’t need to do that. I was already aware that you are just a troll.

      • Oh, please, I’ve got all these ideas.
        ========

      • Negate Jan,
        Ergo,
        Troll

      • Wrong. Just trying to provoke people w/o contributing any meaningful arguments to a discussion, while hiding in anonymity = troll.

      • Well, fool me once, here I thought we were building bridges, not prisons.
        ======

      • Why the arctic sea ice trend is declining and the antarctic sea ice trend is increasing are two mostly different things. One says very little about the other. The antarctic is mostly about how ice forms (winds and polynyas and freezing temperatures making ice in spite of some warming); the arctic is all about AGW (winds and warming oceans and a warming atmosphere melting ice).

        Politically, that is unacceptable.

      • JCH, ” the arctic is all about AGW (winds and warming oceans and a warming atmosphere melting ice).”

        The Arctic is Assumed to be all about AGW by the AGW aficionados. There is a growing wealth of peer reviewed papers that indicate that the magnitude of internal variability is on the order of 0.5C instead of 0.15C and that time frames are longer than 60 years instead of less than 60 years. AMO and PDO are literally red herrings (sardines) because they are defined oscillations more suited for weather than climate. In the “Stadium Wave” paper Wyatt and Curry indicate that internal complex variability can have patterns of up to 300 years. There is evidence that Greenland melt/arctic sea ice has a melt period on the order of 150 years.

        The “known” impact of CO2 at current concentrations is on the order of 0.25C, all things remaining equal, at the actual “surface”, not at the ERL located somewhere in the mid to upper troposphere, where lower temperatures and lower densities allow 2XCO2 to have an impact of 0.8 to 1.5C.

        Now based on “first radiant physics principles” the freezing point of salt water is changed by decree to +4C.

        The Koolade concession is located in the rear of the blog.

      • Yes, they sell no cooleraide at the 300-year-cycle stand. None at all. LMAO.

      • Well JCH, if you can actually do math, the C&W paper has arctic sea surface temperatures increasing by about 4C during the winter months. Anomaly is a wonderful thing, but it has to be related to actual physically possible temperature ranges. So if you look by the Koolade stand there is a pick-your-physics stand where you can adjust the physical reality of your virtual anomaly world.

      • captdallas 0.8 or less, wrote in http://judithcurry.com/2013/11/13/uncertainty-in-sst-measurements-and-data-sets/#comment-414226

        There is a growing wealth of peer reviewed papers that indicate that the magnitude of internal variability is on the order of 0.5C instead of 0.15C and that time frames are longer than 60 years instead of less than 60 years.

        Could you be more specific, please, by referencing some of the papers, so I know which you mean? Are you talking about the magnitude of the internal global surface temperature variability?

        AMO and PDO are literally red herrings (sardines) because they are defined oscillations more suited for weather than climate.

        The problem I have with the claims according to which a much larger fraction of the global surface warming since 1970 is supposed to be attributed to PDO or AMO is that I don’t see the strong correlation between global surface temperature and PDO, or between global surface temperature and AMO.

        http://www.woodfortrees.org/plot/gistemp/from:1900/mean:156/normalise/plot/jisao-pdo/from:1900/mean:156/normalise/plot/esrl-amo/from:1900/mean:156/normalise

        Sure, one can hypothesize a quasi cyclic behavior in both indices, but if PDO had a strong influence on the global surface temperature anomaly time pattern, it should have been subtracting from the warming starting around 1980, instead of adding, not?

        The AMO seems to show an increase together with the global surface warming in recent decades, but the correlation before that is missing. How could this be, if AMO was supposedly controlling the global surface temperature time pattern? Since AMO is sea surface temperature, the recent increase may just be some of the global warming signal seen also in the AMO index, or just a spurious synchronicity.
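The “spurious synchronicity” caveat is easy to demonstrate with synthetic series: two otherwise independent records that share only a common trend correlate strongly, and the correlation largely vanishes once the trend is removed.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120                       # e.g. 120 "years"
t = np.arange(n, dtype=float)
trend = 0.01 * t              # shared warming-like trend
a = trend + rng.normal(0, 0.15, n)   # independent noise around the trend
b = trend + rng.normal(0, 0.15, n)   # independent noise around the trend

raw_r = np.corrcoef(a, b)[0, 1]
detrend = lambda s: s - np.polyval(np.polyfit(t, s, 1), t)
det_r = np.corrcoef(detrend(a), detrend(b))[0, 1]
print(raw_r, det_r)  # strong raw correlation, near zero once detrended
```

This is why detrending before comparing an SST-based index like the AMO with global temperature matters so much.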

        In the “Stadium Wave” paper Wyatt and Curry indicate that internal complex variability can have patterns of up to 300 years.

        I am interested in what the actual amplitude of the “Stadium Wave” pattern is supposed to be relative to the secular warming trend. The W&C paper uses detrended and normalized indices to detect the “Stadium Wave”. It is conceivable that this kind of low-frequency dynamic coupling pattern is real. It may also be an expression of low-frequency chaotic variability in the climate system. The question is what the actual amplitude of this variability is, if it exists. The existence itself of such a pattern is not sufficient to conclude, e.g., that it will have a big effect on how climate will evolve over the next decades.

        There is evidence that Greenland melt/arctic sea ice has a melt period on the order of 150 years.

        And what is the amplitude? What is the scientific reference?

      • Jan,

        http://eesc.columbia.edu/courses/w4937/Readings/Brierley%20and%20Fedorov.2010.pdf

        http://sam.ucsd.edu/sio219/toggweiler_bjornsson.pdf

        Toggweiler has more including the shifting westerlies.

        There is very little correlation between PDO and “global” surface air temperature. The AMO has a good correlation but since the surface area of the north Atlantic is small, it is not considered large enough to cause more than ~0.15C of temperature change. However, the land surface area in the 30N-60N zone amplifies the 30N-60N SST variation. A better metric is then the entire 30N-60N SST for global “climate” variability.

        Of the ocean basins, the Indian ocean has the highest correlation to actual global surface temperature/energy. The IO gets less THC influence so it doesn’t have as much of the shorter term oscillation noise.

  89. Well, perhaps I am just too stupid to recognize what your supposed argument was. Perhaps you should rephrase in a way so even I can understand it.

  90. Pingback: Spiegel On Cowtan And Way: “Feeling of Foreboding”…Other Studies On Antarctic, Mann et al, Tell Us “Caution Is Advised”

  91. Another manual pingback, this time with a quote:

    It is well known that the Arctic is warming up much faster than the rest of the globe. As a consequence, datasets which omit this region (HadCRUT and NOAA) underestimate the global warming trend. A new paper by Cowtan and Way addresses this cool bias by using satellite data to fill in these data gaps. They make a good case that this method also improves upon the NASA GISS dataset, which uses extrapolated data from surface stations to partly fill in the data sparse regions. Combining their new method of infilling with the most up-to-date sea surface temperatures gives a substantially larger trend over the last 15 years than the abovementioned datasets do. The temporary slowdown in global surface warming (also dubbed “the pause”) nearly disappears. As Michael Tobis notes:

    What this demonstrates is how very un-robust the “slowdown” is.

    http://ourchangingclimate.wordpress.com/2013/11/15/cowtan-and-way-global-average-temperature-observations-compared-to-cmip5-models/

    • Actually, I believe it illustrates how un-robust our understanding of climate is. Nearly the same trends are in the available land station data above 75N, and the largest “warming” is during the winter months, when the oceans freeze and the moisture in the atmosphere precipitates out. That is a huge amount of latent energy being released. When there are sudden stratospheric warming events, which have been stronger since 1998, the energy released is on the order of 10^22 Joules in just a month or two, the same order of magnitude as the annual ocean heat uptake.

      This has been known, but now the new method estimates the temperature above the sea ice. The oceans below the ice are still at the near-freezing temperatures that have been used to monitor “global” warming, which poses an interesting problem. The magnitude of the energy released by the SSW events is a major contributor to the “pause”, since most of it is lost to space, but the energy being released is being used to “disprove” the existence of the “pause” it is helping to create.

      Interesting conundrum.

  92. Pingback: CSALT and SST corrections | context/Earth

  93. Now it is CSAALT, with extra Arctic oscillation flavor. You know that Arctic Winter Warming would be expected following sea ice retreat. It’s like deja vu all over again.

    • I used the Arctic region to characterize some of the extreme fluctuations in the noise, particularly during the WWII years. The Stadium Wave of LOD plus detrended Arctic temperature anomaly w/CO2 can recreate the GISS time series accurately while it also exposes the WWII temperature bias. Yet the CSALT parameters by themselves have a 0.99 R value, which is even better, LOL.

      • If you actually consider the “Stadium Wave” that supposedly inspired your model you would suspect the SOI signal more than the surface temperature and find that the 1940 “glitch” is more likely caused by Arctic Winter Warming related to sea ice variation.

        Then if you really get curious you can look at the Indian Ocean SST, which provides a remarkable match to your final signal: it indicates a longer-term recovery that tends to amplify your C, and helps explain why Wyatt and Curry ’13 mention that the “Wave” pattern could extend back ~300 years.

      • How to blame the shape of the next Worm Royal on man? Small odds, from ignorance, but possible, yes, I see.
        ==============

      • Captain

        Speaking of the stadium wave, did you see the addendum that I commented on elsewhere?

        “Speaking of scientists earning trust there is an addendum to the arctic warming story posted by Dr Marcia Wyatt which I think creates trust in both Dr Wyatt and Professor Miller.

        “I did re-read your piece. Indeed, you did not assert that we claimed the entire Arctic was experiencing unprecedented warmth. Sorry about that. I thought you did an excellent job in explaining our paper…..”

        I appreciate Dr. Miller’s openness and fairness in our dialogues. I have learned many a lesson from this experience.”

        http://judithcurry.com/2013/10/29/unprecedented-arctic-warming-part-ii/

        tonyb

      • Thanks Tonyb, I hadn’t read the final update

      • Fine work for truth those two did there, and here’s the beauty of it: Unintended collaboration, unexpected, unplanned. Let a wave of this emerge.
        ========

      • In contrast, KC is doing the heavy lifting wrt SST uncertainty over on SkS
        http://www.skepticalscience.com/the_other_bias.html

        This chart shows the WWII SST anomaly via CSALT and how it relates to the evolution of SST measurement procedures:
        http://img802.imageshack.us/img802/6750/uxz.gif

        Cappy, you are the definition of an impediment. Do you really get your kicks out of laying down canards? LOL

      • Webster, Engine cooling water intake temperature sensors are installed in thermometer wells located in the raw water intake piping. The thermometer well and the pipe it is installed in may be inside an engine room, but they are designed to measure the raw water intake temperature, not the engine room temperature. Typically, the raw water flow is 2 or more times the flow required to cool the engines, since often a lot of water is used to cool the exhaust manifolds to reduce engine room temperature. In addition to that, the raw water intake is generally located in a thru-hull fitting in the engine room hull which, depending on the hull design and the vessel loading, might be 1 to 10 meters below the surface. A loaded vessel then would likely draw deeper raw water than an unloaded vessel. Outbound and inbound vessels would likely have different “average” raw water intake temperatures.

        If the issue were due to “engine room” heating of an immersed thermometer well, the anomaly would be different in different regions of the oceans and should be isolated by region. Also if the anomaly were due to “engine room” heating the friggin anomaly would not be evident in the land surface temperature data.

      • And btw Webster, Since there was a change in the average vessel class during the war years to more specialized vessels, I would expect some due diligence on the “error” based on vessel class and theater of operation rather than a SWAG that magically fits an idea.

      • LOL, Cappy, The correction is rather puny — 0.1C over the course of the war years — enabling the CSALT model to reach an R value of 0.99 over the span of 130+ years of GISS data.

        With C&W’s correction for the unmeasured regions, all arrows point to MNFTIU.

        NASA’s GISS was right, like they always are.

      • ” Also if the anomaly were due to “engine room” heating the friggin anomaly would not be evident in the land surface temperature data.”

        Cappy, try to control your anger, LOL.

        It isn’t very evident in the land-based data:
        http://img198.imageshack.us/img198/1361/ktl.gif

        Point it out and then get even more mad, LOL.

        Kevin Cowtan says:

        http://www.skepticalscience.com/the_other_bias.html#99656

        I also managed to identify the post-war spike by the trivial method of comparing colocated coastal land and SST measurements. The results give a surprisingly good fit to the HadSST3 adjustments, barring a scale factor. The gradual change over the past couple of decades is far harder to check, although I may have picked up a weak echo of the signal in the contrast between sea-lane and non-sea-lane temperatures.

      • Web,

        I’m not saying it isn’t true but where is the proof of this:

        Skeptical Science Kevin C:
        “All suffer from one or more sources of bias which have been suppressing temperature trends since around 1998, i.e. the start of the purported ‘pause’ in global warming. We anticipate that the coverage and SST biases account for most of the difference between the different temperature records over recent decades.”

        Why is the bias STARTING in 1998? Where is the proof (no footnotes or explanation)? Otherwise it’s just a biased throwaway line.

        Is this what he’s talking about?:
        http://www.skepticalscience.com/pics/FR11_All.gif

        That says nothing about 1998 specifically.

      • Oh, never mind, he was talking about figure 2. Duh! I missed the implication.

      • Webster, “The resulting corrections are included in the HadSST3 sea surface temperature record, but these corrections have not yet been applied to the SST records used by NASA or NOAA.”

        That is what C&W are saying, but you say, and I agree, “NASA’s GISS was right, like they always are.” My interpretation of “but these corrections have not yet been applied to the SST records used by NASA or NOAA” is that C&W are proposing “over-correcting” what is about as correct as it can be.

        Now if my interpretation is wrong, I apologize, but further correction is nonsense.

      • NASA GISS has long been accused of cooking the books by ankle-biters. See for example:
        http://libertyboy.wordpress.com/2008/11/12/cooking-the-books-at-nasa/

        What I am now saying is that they were right about using appropriate techniques to infill readings from regions that haven’t been measured. This has now been substantiated by C&W to be a valid method.

        As they continue to improve GISS, it will become historically more accurate.

      • Webster, I am not one of the GISS-bashing ankle biters. I think the data sets should be kept as independent as possible and that whatever method was used should be continued as a reference. Hadley has a simple method that is useful, GISS has two, 250km and 1200km interpolations, and BEST has a krige interpolation. If everyone uses the same method you just get inbreeding. I have noticed that GISS 1200km seems to be introducing some error in the Antarctic; not a problem, it just gives you a comparison.

        But C&W seem to be on this insecurity complex over correction mania.

      • Let me correct that: I am not a GISTEMP-bashing ankle biter. There are some things GISS that I have bashed, and I tend to assume GISTEMP error margins are a bit optimistic, but that I do not consider “bashing”.

      • Web,
        ” This has now been substantiated by C&W to be a valid method.”

        Has it?
        It seems to me we are lost in minutiae when TRUE paleoclimatology is measured on a much larger scale:

        http://en.wikipedia.org/wiki/File:All_palaeotemps.png

        http://wattsupwiththat.files.wordpress.com/2013/06/co2_temperature_historical.png

        http://en.wikipedia.org/wiki/File:Phanerozoic_Carbon_Dioxide.png

        Tell me why all of this isn’t just grasping at straws as far as AGW goes?
        It’s a sincere question that is the main one for me that no one seems to address and I’d really like to know.
        I like looking at your stuff and I’ve learned a lot from it, but I wonder if you’re lost in the trees as you suspect the ankle biters are.

      • “Is this what he’s talking about?:
        http://www.skepticalscience.com/pics/FR11_All.gif

        Nice, but they only go back to 1980. My CSALT model goes all the way back to 1880.

        This is the CSALT model in comparison to the raw GISS data
        http://img33.imageshack.us/img33/2691/j931.gif

        Remember these are stationary coefficients over the entire range, except for the WWII anomaly. The noise is reduced by about 1/2 and it appears more as white noise, with less Markov-process autocorrelation.
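The “white noise versus Markov autocorrelation” claim is checkable with a one-line diagnostic, the lag-1 autocorrelation of the residuals. A sketch on synthetic residuals (not the CSALT output itself):

```python
import numpy as np

def lag1_autocorr(x):
    x = x - x.mean()
    return float(np.sum(x[:-1] * x[1:]) / np.sum(x * x))

rng = np.random.default_rng(4)
n = 5000
white = rng.normal(size=n)     # white-noise residuals
ar1 = np.empty(n)              # AR(1) "Markov" residuals with phi = 0.8
ar1[0] = 0.0
for i in range(1, n):
    ar1[i] = 0.8 * ar1[i - 1] + rng.normal()

print(lag1_autocorr(white))  # near 0 for white noise
print(lag1_autocorr(ar1))    # near 0.8 for the Markov process
```

Residuals whose lag-1 autocorrelation is near zero are consistent with the white-noise claim; a value well above zero would say the model is leaving correlated structure behind.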

  94. Takeaway from the Cowtan and Way study is that great gobs of time/effort/wealth are being expended on keeping the fear of thermal apocalypse alive, and that influential media are treating the beneficiaries as noble knights on a quest for the Holy Grail.

  95. “First, Kriging. Kriging across land/ocean/sea ice boundaries makes no physical sense. While the paper cites Rigor et al. (2000) that shows ‘some’ correlation in winter between land and sea ice temps at up to 1000 km, I would expect no correlation in other seasons.”

    Totally predictable for climate scientists. They believe to the bottoms of their hearts that any two series of temperature readings are comparable.

    They really should stop demonstrating just how clownish they are.

    • Skeptics had the opportunity to complain about Spencer’s UAH taking satellite data over land, water and ice surfaces and somehow blending them together, but chose not to until this study used some of the same data. What is the difference?

  96. The more I look into this correction, the more that it makes sense in terms of explaining the subtle global temperature variations. Over the last couple of years, the expected temperature does seem to be subtly diverging from the SOI-compensated time series. See the yellow region in the log sensitivity correlation produced by CSALT:
    http://img713.imageshack.us/img713/8707/ljp.gif

    Now look at the C&W correction from around 2010 onward:
    http://img189.imageshack.us/img189/3603/qfmq.png

    The C&W correction adds 0.1C to the recent couple of years, making up for that gap.

    What is interesting is that the Arctic is acting as a water balloon in the energy balance equation — when the temperature nudges away in other regions, the Arctic numbers bulge to make up for it. That is to be expected in a world that obeys the conservation of energy.

    • Webster, “What is interesting is that the Arctic is acting as a water balloon in the energy balance equation — when the temperature nudges away in other regions, the Arctic numbers bulge to make up for it. That is to be expected in a world that obeys the conservation of energy.”

      Exactly, and Arctic Winter Warming is something related to general Arctic imbalances. With the IPWP warming firing up SSW events, it may be due to a number of things, and then that amplified by CO2, etc. It seems kind of silly to leap to conclusions one way or the other, since it is related to freezing of the sea surface while the summer trends are much lower. The breakdown of the polar vortex, though, releases a huge amount of energy that I have not seen estimated very accurately. Both Gates and I figure it is in the 10^22 Joule range, which is definitely significant and likely related to the “pause”.

    • Webster, since I have the link available,

      http://upload.wikimedia.org/wikipedia/commons/9/97/Water_infrared_absorption_coefficient_large.gif

      My pet peeve has been the error in the K&T Earth Energy Budgets and where the major error lies. The error has to be with mixed phase clouds. With mixed phase clouds, the appropriate radiant spectrum should be the combination of all three of the H2O phase spectra. As you can see, that makes clouds an effectively saturated IR ground plane. While the surface below the clouds experiences a “positive” feedback, the effect from above the clouds is a net negative impact on the overall energy budget. That is why there is an ~18 Wm-2 error in the K&T budget, which I had always assumed was somewhat important. Silly me.

      While the effect of mixed phase clouds is most obvious in the Arctic, the same effect likely happens to a small degree everywhere clouds form. That should make the atmospheric boundary layer a better frame of reference for determining “net” impacts, and since Stephens et al. list the “surface” uncertainty at +/-17 Wm-2, I think perhaps they might agree. Also, Troy Masters’ cloud study tends to be more in line with reality than Andy “Balloons are Thermometers” Dessler. And while we are at it, the famous old dead guy Angstrom was probably right also with his “effectively” saturated comments in terms of the real surface of the Earth.

    • Ignore what Cappy says and look at the data. Nick Stokes has a recent post at http://moyhu.blogspot.com which duplicates the work of C&W by using a simpler infilling technique.

      • Here is Webster’s Comment gang. “Ignore what Cappy says and look at the data. Nick Stokes has a recent post at http://moyhu.blogspot.com which duplicates the work of C&W by using a simpler infilling technique.”

        Here is Nick Stokes comment on his results, “So this simple infill almost doubles the trend, but does not go as far as the C&W hybrid method. It is, however, close to GISS, which interpolates to avoid missing cells.”

        So the reality is that Hadcrappy had a well-known cold bias. GISS, as usual, appears to be the most accurate, and C&W appear to overcompensate for Hadcrappy’s failings.

        That to Webster means that by not getting as much warming as C&W that Nick Stokes has somehow “duplicated” C&W’s work.

        The bottom line, folks: Hadcrappy = crappy, GISS = about right, C&W = nice try; I am sure their mothers are proud.

      • Nick Stokes didn’t use Old RoyBoy’s numbers but C&W did.

        The CSALT model sees the last 2 years as the only time that temperatures are lower than expected, by about 0.1 degree and diverging, and the C&W correction is explaining this.

        It is cool to have a model that acts as a fish-locator.

      • I love the smell of some Hadcrappy (I believe I coined it) in the morning.

      • Webster, “Nick Stokes didn’t use Old RoyBoy’s numbers but C&W did.”

        So, that doesn’t decrapify Hadcrappy or make C&W all that special. You are leaping past the point that GISS and ERSST are about as close as you are going to get. C&W haven’t contributed anything meaningful to the process other than polar satellite data, and polar data in general still sucks.

  97. Pingback: Weekly Climate and Energy News Roundup | Watts Up With That?

  98. Pingback: New Papers About Global Warming Pause And Global Surface Temperature | The Global Warming Policy Foundation (GWPF)

  99. Must-read post by Steve McIntyre on Cowtan and Way
    http://climateaudit.org/2013/11/18/cotwan-and-way-2013/

    • Yes, the error is about 0.1C and it signifies heat going to the northern latitudes instead of the usual locations.
      This is seen in the CSALT model, as there has been a widening deficit in the last few years.

  100. Nowhere does the phrase “it’s worse than we thought” fit more aptly than in describing the inherent unreliability and inadequate global coverage of pre-satellite SST data. These studies amount to little more than a band-aid approach to a gunshot wound in the temple.

  101. Pingback: Uncertainty in Arctic temperatures | Climate Etc.

  102. Pingback: Klimawandel: Die Erde erwärmt sich laut Hochrechnung doch | Germany News Online

  103. Pingback: On Cowtan and Ray (2013) “Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends” | Bob Tisdale – Climate Observations

  104. Pingback: On Cowtan and Ray (2013) “Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends” | Watts Up With That?

  105. Pingback: Could the Perspectives of Cowtan and Way Negate RSS’s 17 Year Pause? (Now Includes all October Data except Hadcrut) | Watts Up With That?

  106. Since I am not a statistician, and certainly not at this level, I am referencing the following for those who are:

    Briggs, William. “Do Not Smooth Times Series, You Hockey Puck!” Blog. William M. Briggs, Statistician, September 06, 2008. http://wmbriggs.com/blog/2008/09/06/do-not-smooth-times-series-you-hockey-puck/

    “Now I’m going to tell you the great truth of time series analysis. Ready? Unless the data is measured with error, you never, ever, for no reason, under no threat, SMOOTH the series! And if for some bizarre reason you do smooth it, you absolutely on pain of death do NOT use the smoothed series as input for other analyses! If the data is measured with error, you might attempt to model it (which means smooth it) in an attempt to estimate the measurement error, but even in these rare cases you have to have an outside (the learned word is “exogenous”) estimate of that error, that is, one not based on your current data.

    “If, in a moment of insanity, you do smooth time series data and you do use it as input to other analyses, you dramatically increase the probability of fooling yourself! This is because smoothing induces spurious signals—signals that look real to other analytical methods. No matter what you will be too certain of your final results! Mann et al. first dramatically smoothed their series, then analyzed them separately. Regardless of whether their thesis is true—whether there really is a dramatic increase in temperature lately—it is guaranteed that they are now too certain of their conclusion.”

    “The Pitfalls of Data Smoothing.” Scientific. Watts Up With That?, March 30, 2013. http://wattsupwiththat.com/2013/03/30/the-pitfalls-of-data-smoothing/

    “Since we’ve been discussing smoothing in datasets, I thought I’d repost something that Steve McIntyre had graciously allowed me to post on his amazing blog ClimateAudit back in 2008. Let me start by saying that when I got involved in climate science, the go-to blog was the late, great John Daly’s blog, “Still Waiting for Greenhouse”. Sadly, when I went today to get the URL, I got the “Account Suspended” message … Yikes! That was an international treasure trove of climate history! Can we reverse that? Or are we at the mercy of the Wayback Machine? Does his archive exist, and can a host for it be found? [UPDATE: thanks to quick action by Ric Werme, John Daly’s site is back up again, and plans are afoot to move it to a more secure location. -w.]

    “Allan Macrae has posted an interesting study at ICECAP. In the study he argues that the changes in temperature (tropospheric and surface) precede the changes in atmospheric CO2 by nine months. Thus, he says, CO2 cannot be the source of the changes in temperature, because it follows those changes.

    “Being a curious and generally disbelieving sort of fellow, I thought I’d take a look to see if his claims were true. I got the three datasets (CO2, tropospheric, and surface temperatures), and I have posted them up here. These show the actual data, not the month-to-month changes.
    […]
    “In the Macrae study, he used smoothed datasets (12 month average) of the month-to-month change in temperature (∆T) and CO2 (∆CO2) to establish the lag between the change in CO2 and temperature . Accordingly, I did the same. [My initial graph of the raw and smoothed data is shown above as Figure 1, I repeat it here with the original caption.]”
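    Briggs’s warning can be illustrated with a minimal sketch (synthetic data, not any of the series discussed here): smoothing pure white noise manufactures strong autocorrelation, exactly the kind of “spurious signal” a downstream analysis would mistake for structure.

```python
# Demonstration of Briggs's point: a 12-point moving average applied
# to uncorrelated noise induces large autocorrelation where none existed.
import random

random.seed(0)
n = 500
raw = [random.gauss(0.0, 1.0) for _ in range(n)]  # white noise, no signal

def moving_average(x, w):
    """Trailing w-point moving average (shortens the series by w - 1)."""
    return [sum(x[i:i + w]) / w for i in range(len(x) - w + 1)]

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a series."""
    m = sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

smoothed = moving_average(raw, 12)

print(f"raw lag-1 autocorr:      {lag1_autocorr(raw):+.3f}")       # near zero
print(f"smoothed lag-1 autocorr: {lag1_autocorr(smoothed):+.3f}")  # large
```

    Feeding the smoothed series into a lag/correlation analysis, as in the Macrae study quoted above, would then be matching filter-induced structure rather than the data.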

  107. Huh. Kriging maligned. Sure, if the covariance estimate at the center is overextended, that is, used to predict to a location where the covariance varies markedly from the kernel estimate, it might not do so well. There are LOTS of places where, hypothetically, this could happen, not just land-ocean boundaries. Yet, kriging is the “magic sauce” of the BEST results (Berkeley Earth Surface Temperature project). How do you all reconcile your critiques of the technique with what BEST does and did? How is it different?
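    The sensitivity being debated can be shown with a minimal simple-kriging sketch (hypothetical 1-D stations and a Gaussian kernel, not BEST’s or C&W’s actual implementation): the prediction is a covariance-weighted blend of the observations, so the answer at an unsampled point depends directly on the assumed covariance model.

```python
# Simple kriging in 1-D: predict at x0 from stations (xs, ys) using an
# assumed Gaussian covariance kernel.  Changing the correlation length
# changes the prediction -- the "overextended covariance" concern above.
import math

def gauss_cov(h, length):
    """Gaussian covariance kernel with a chosen correlation length."""
    return math.exp(-(h / length) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krige(xs, ys, x0, length):
    """Simple-kriging estimate at x0: weights w solve K w = k."""
    K = [[gauss_cov(abs(a - b), length) for b in xs] for a in xs]
    k = [gauss_cov(abs(a - x0), length) for a in xs]
    w = solve(K, k)
    return sum(wi * yi for wi, yi in zip(w, ys))

# Three hypothetical "stations"; predict midway between the last two.
xs, ys = [0.0, 1.0, 2.0], [0.2, 0.5, 0.9]
print(krige(xs, ys, 1.5, length=1.0))  # moderate correlation length
print(krige(xs, ys, 1.5, length=5.0))  # stretched kernel, different answer
```

    At an observed station the method reproduces the observation exactly, but between stations the two correlation lengths disagree; extending a kernel fitted in one regime across a boundary (land to ocean, open water to sea ice) is where the critiques above apply.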

  108. ezra abrams

    one day, a psychology PhD student is gonna make a fine dissertation, figuring out why there is such logorrhea on sites like this

    it is *really* simple
    Once you get beyond simple CO2 equilibration, and realize that only the outer (low OD) layer concentration matters, you realize that increased CO2 warming is at least plausible

    so, are you feeling lucky today?
    cause if you bet wrong, 500 million people from Bangladesh are looking for a new home….
    Global GDP is roughly 30 trillion; a hundred billion a year or so as insurance against climate change doesn’t seem excessive to me. My share is, roughly, 50 bucks a year.