Skeptical of skeptics: is Steve Goddard right?

by Judith Curry

Skeptics doing what skeptics do best . . . attack skeptics. – Suyts

Last week, the mainstream media was abuzz with claims by skeptical blogger Steve Goddard that NOAA and NASA have dramatically altered the US temperature record.  For examples of MSM coverage, see:

Further, this story was carried as the lead story on Drudge for a day.

First off the block to challenge Goddard came Ronald Bailey at reason.com in an article Did NASA/NOAA Dramatically Alter U.S. Temperatures After 2000?  that cites communication with Anthony Watts, who is critical of Goddard’s analysis, as well as being critical of NASA/NOAA.

Politifact chimed in with an article that assessed Goddard’s claims, based on Watts’ statements and also an analysis by Zeke Hausfather. Politifact summarized with this statement: We rate the claim Pants on Fire.

I didn’t pay much attention to this, until Politifact asked me for my opinion.  I said that I hadn’t looked at it myself, but referred them to Zeke and Watts.  I did tweet their Pants on Fire conclusion.

Skepticism in the technical climate blogosphere

Over at the Blackboard, Zeke Hausfather has a three-part series about Goddard’s analysis –  How not to calculate temperatures (Part I, Part II, Part III).  Without getting into the technical details here, the critiques relate to the topics of data dropout, data infilling/gridding, time of day adjustments, and the use of physical temperatures versus anomalies.  The comments thread on Part II is very good, well worth reading.
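
To make the absolutes-versus-anomalies point concrete, here is a minimal sketch with invented numbers (mine, not Zeke’s or Goddard’s code): two stations with no underlying trend, one of which stops reporting, produce a spurious warming trend when absolute temperatures are averaged, but not when anomalies are.

```python
# Sketch: why averaging absolute temperatures over a changing station network
# can manufacture a trend, while averaging anomalies does not.
# Synthetic data only; not a reproduction of any analysis discussed above.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2015)

# Two stations with no underlying trend: a cool one and a warm one.
cool = 5.0 + rng.normal(0, 0.3, years.size)    # mean ~5 C
warm = 20.0 + rng.normal(0, 0.3, years.size)   # mean ~20 C

# The cool station stops reporting after 2000 (data dropout).
cool_reports = years <= 2000

# Method 1: average the absolute temperatures of whichever stations report.
absolute_avg = np.where(cool_reports, (cool + warm) / 2, warm)

# Method 2: convert each station to anomalies from its own 1981-2000 mean,
# then average whatever anomalies are available.
base = (years >= 1981) & (years <= 2000)
cool_anom = cool - cool[base].mean()
warm_anom = warm - warm[base].mean()
anomaly_avg = np.where(cool_reports, (cool_anom + warm_anom) / 2, warm_anom)

def trend_per_decade(series):
    return 10 * np.polyfit(years, series, 1)[0]

print(f"absolute-average trend: {trend_per_decade(absolute_avg):+.2f} C/decade (spurious)")
print(f"anomaly-average trend:  {trend_per_decade(anomaly_avg):+.2f} C/decade (~zero)")
```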

Anthony Watts has a two-part series On denying hockey sticks, USHCN data and all that (Part 1, Part 2).  The posts document Watts’ communications with Goddard, and make mostly the same technical points as Zeke.  There are some good technical comments in Part 2, and Watts makes a proposal regarding the use of US reference stations.

Nick Stokes has two technical posts that relate to Goddard’s analysis: USHCN adjustments, averages, getting it right  and TOBS nailed.
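
For readers unfamiliar with the time-of-observation issue, the toy simulation below (invented hourly data; not NOAA’s actual TOBS adjustment, nor Stokes’ calculation) shows the mechanism: a max/min thermometer reset in the late afternoon can count a hot afternoon twice, so a station observed at 5 pm reads systematically warmer than the same station observed at midnight.

```python
# Sketch of the time-of-observation bias (TOBS) mechanism, using invented
# hourly temperatures. Illustrative only; this is not NOAA's TOBS adjustment.
import numpy as np

rng = np.random.default_rng(1)
n_days = 365
hours = np.arange(n_days * 24)

# Hourly temps: diurnal cycle (coolest near dawn, warmest mid-afternoon)
# plus day-to-day weather noise shared by all hours of a given day.
day_noise = np.repeat(rng.normal(0, 3, n_days), 24)
temps = 15 + 8 * np.sin(2 * np.pi * ((hours % 24) - 9) / 24) + day_noise

def mean_of_daily_means(obs_hour):
    """Average of (Tmax+Tmin)/2 over the 24-hour windows between successive
    resets of a max/min thermometer at obs_hour."""
    windows = temps[obs_hour:obs_hour + (n_days - 1) * 24].reshape(-1, 24)
    return ((windows.max(axis=1) + windows.min(axis=1)) / 2).mean()

print("midnight observer:", round(mean_of_daily_means(0), 2))
print("5 pm observer:    ", round(mean_of_daily_means(17), 2))
# The 5 pm observer runs a few tenths of a degree warm: the tail of a hot
# afternoon just before the reset can also set the maximum of the next
# window, so hot days are effectively counted twice.
```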

While I haven’t dug into all this myself, the above analyses seem robust, and it seems that Goddard has made some analysis errors.

The data

OK, acknowledging that Goddard made some analysis errors, I am still left with some uneasiness about the actual data, and why it keeps changing. For example, Jennifer Marohasy has been writing about Corrupting Australia’s temperature record.

In the midst of preparing this blog post, I received an email from Anthony Watts, suggesting that I hold off on my post since there is some breaking news.  Watts pointed me to a post  by Paul Homewood entitled Massive Temperature Adjustments At Luling, Texas.  Excerpt:

So, I thought it might be worth looking in more detail at a few stations, to see what is going on. In Steve’s post, mentioned above, he links to the USHCN Final dataset for monthly temperatures, making the point that approx 40% of these monthly readings are “estimated”, as there is no raw data.

From this dataset, I picked the one at the top of the list, (which appears to be totally random), Station number 415429, which is Luling, Texas.

Taking last year as an example, we can see that ten of the twelve months are tagged as “E”, i.e. estimated. It is understandable that a station might be a month, or even two, late in reporting, but it is not conceivable that readings from last year are late. (The other two months, Jan/Feb, are marked “a”, indicating missing days).

But, the mystery thickens. Each state produces a monthly and annual State Climatological Report, which among other things includes a list of monthly mean temperatures by station. If we look at the 2013 annual report for Texas, we can see these monthly temperatures for Luling.

Where an “M” appears after the temperature, this indicates some days are missing, i.e. Jan, Feb, Oct and Nov. (Detailed daily data shows just one missing day’s minimum temperature for each of these months).

Yet, according to the USHCN dataset, all ten months from March to December are “Estimated”. Why, when there is full data available?

But it gets worse. The table below compares the actual station data with what USHCN describe as “the bias-adjusted temperature”. The results are shocking.

In other words, the adjustments have added an astonishing 1.35C to the annual temperature for 2013. Note also that I have included the same figures for 1934, which show that the adjustment has reduced temperatures that year by 0.91C. So, the net effect of the adjustments between 1934 and 2013 has been to add 2.26C of warming.

Note as well, that the largest adjustments are for the estimated months of March – December. This is something that Steve Goddard has been emphasising.

It is plain that these adjustments made are not justifiable in any way. It is also clear that the number of “Estimated” measurements made are not justified either, as the real data is there, present and correct.

Watts appears in the comments, stating that he has contacted John Nielsen-Gammon (Texas State Climatologist) about this issue. Nick Stokes also appears in the comments, and one commenter finds a similar problem for another Texas station.

Homewood’s post sheds light on Goddard’s original claim regarding the data dropout (not just stations that are no longer reporting, but reporting stations that are ‘estimated’). I infer from this that there seems to be a real problem with the USHCN data set, or at least with some of the stations. Maybe it is a tempest in a teacup, but it looks like something that requires NOAA’s attention. As far as I can tell, NOAA has not responded to Goddard’s allegations. Now, with Homewood’s explanation/clarification, NOAA really needs to respond.

Sociology of the technical skeptical blogosphere

Apart from the astonishing scientific and political implications of what could be a major bug in the USHCN dataset, there are some interesting insights and lessons from this regarding the technical skeptical blogosphere.

Who do I include in the technical skeptical blogosphere?  Tamino, Moyhu, Blackboard, Watts, Goddard, ClimateAudit, Jeff Id, Roman M.  There are others, but the main discriminating factor is that they do data analysis, and audit the data analysis of others.  Are all of these ‘skeptics’ in the political sense?  No – Tamino and Moyhu definitely run warm, with Blackboard and a few others running lukewarm. Of these, Goddard is the most skeptical of AGW. There is most definitely no tribalism among this group.

In responding to Goddard’s post, Zeke, Nick Stokes (Moyhu) and Watts may have missed the real story. They focused on their previous criticism of Goddard and missed his main point. Further, I think there was an element of ‘boy who cried wolf’ – Goddard has been wrong before, and the comments at Goddard’s blog can be pretty crackpotty. However, the main point is that this group is rapidly self-correcting – the self-correcting function in the skeptical technical blogosphere seems to be more effective (and certainly faster) than for establishment climate science.

There’s another issue here, and that is one of communication. Why was Goddard’s original post unconvincing to this group, whereas Homewood’s post seems to be convincing? Apart from the ‘crying wolf’ issue, Goddard focused on the message that the real warming was much less than portrayed by the NOAA data set (which caught the attention of the mainstream media), whereas Homewood more carefully documented the actual problem with the data set.

I’ve been in email communication with Watts through much of Friday, and he has been pursuing the issue directly with NCDC, along with Zeke and with help from Nielsen-Gammon; NCDC is reportedly taking it seriously. Not only does Watts plan to issue a statement on how he missed Goddard’s original issue, he says that additional problems have been discovered and that NOAA/NCDC will be issuing some sort of statement, possibly also a correction, next week. (Watts has approved me making this statement.)

This incident is another one that challenges traditional notions of expertise. From a recent speech by President Obama:

“I mean, I’m not a scientist either, but I’ve got this guy, John Holdren, he’s a scientist,” Obama added to laughter. “I’ve got a bunch of scientists at NASA and I’ve got a bunch of scientists at EPA.”

Who all rely on the data prepared by his bunch of scientists at NOAA.

How to analyze the imperfect and heterogeneous surface temperature data is not straightforward – there are numerous ways to skin this cat, and the cat still seems to have some skin left.  I like the Berkeley Earth methods, but I am not convinced that their confidence interval/uncertainty estimates are adequate.

Stay tuned, I think this one bears watching.

Update:  Watts has a new post Scientific method is at work on the USHCN temperature data set

 

 

 

 

588 responses to “Skeptical of skeptics: is Steve Goddard right?”

  1. The scientific method at work! There is hope ….

  2. The data thins and the plot thickens ;OP

  3. Pingback: An example of conservative propaganda. But this time with rebuttal from the right | Fabius Maximus

  4. Hopefully, skeptical questioning of data, no matter the source, will win out over AGW groupthink.

  5. Jeffrey Eric Grant

    I think it best to have independent groups looking at the same (unaltered) data. I also would like to see reasons published for each and every change made to the original (raw) data. Does this exist? If not, how are we to determine if the adjustments are directly influencing the conclusions?

    And then, of course, is the attribution. That is another story for another time.

    • @ Jeffery Eric Grant

      “I think it best to have independant groups looking at the same (unaltered) data.”

      Good idea. Only problem is, as Goddard and others point out, where are you going to get the UNALTERED data? Or data that is not simply estimated? Or kriged? Or filled using some other technique?

      Where is the world data set that consists of thermometer readings vs date and time? No estimates, kriging, fills, or adjustments; just thermometer data. And how much of the world do they cover and how long are the records? And what calibration history is available for each instrument used in the data set?

      And does the whole shebang justify headlines such as the recent one, provided by FOMD, announcing that May 2014 SHATTERED the previous record for the Temperature of the Earth (TOE) – by 0.02 degrees? Does ANYONE actually believe that the worldwide data acquisition system in place over the last century produced a data set with enough precision to justify monthly TOE comparisons with hundredths of a degree resolution?

      Climate science is a chimera.

      • From the UAH data:
        5/14: 0.33
        5/10: 0.46
        5/98: 0.56

        Where’s the beef, CAGWers?

      • Steven Mosher

        From the berkeleyearth.org data page you can get all the raw data.

        The two most used sources are daily raw data from
        GHCN-Daily and GSOD.

        Then you can use that data to Estimate the
        Global average.

        Be prepared to defend your method

        Goddard’s method is the worst.

      • SM calls it an estimate, I call it a calculation. Maybe we could agree that it’s a calculated approximation?

      • Steven Mosher

        It’s a calculation to give you an estimate.

        If you take 40000 raw records and want to create a global average you MUST calculate.

        The question is what calculations give you the best estimate

        A simple Goddard-style average will NOT give you the best estimate because of sampling inhomogeneity.

        A simple average is the worst method.

        This isn’t a skeptic versus warmist issue. It’s simple math.

      • jim2: Call it a tabulation calculation approximation

      • “Only problem is, as Goddard and others point out, where are you going to get the UNALTERED data? Or data that is not simply estimated? Or krigged? Or filled using some other technique?”

        No the problem is climate deniers.

        Explain to me how you and Goddard aren’t aware of the fact the raw data is available?

        How come I know, and Mosher knows, but neither of you know. Explain that.

      • David Springer

        @mosher

        cannot make a silk purse from a sow’s ear

      • Steven Mosher

        true david,

        but there is a difference between the best purse you can make with a sow’s ear and the one that Goddard makes.

        Go ahead. I dare you. Defend averaging absolutes as the better method.
        Make some synthetic data. Do a methodological study.

        show us your chops
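
For what it is worth, the simplest possible version of the synthetic-data check Mosher asks for above (invented numbers, and not Goddard’s, Zeke’s, or BEST’s code) looks like this: when stations cluster in the warm part of a domain, a plain average of station readings is biased, while averaging equal-area bands first recovers the true field mean.

```python
# Sketch of a synthetic-data test of simple averaging under uneven sampling.
# Stations are clustered in the warm half of the domain, so a plain average
# of station values is biased, while a gridded (band-averaged) mean is not.
# Invented numbers; not anyone's actual code.
import numpy as np

rng = np.random.default_rng(2)

# True field: temperature varies linearly with "latitude" from 25 C to 5 C,
# so the true area-mean is 15 C.
true_mean = 15.0

# 90% of stations sit in the warm half of the domain, 10% in the cool half.
lat_warm = rng.uniform(0.0, 0.5, 900)
lat_cool = rng.uniform(0.5, 1.0, 100)
lats = np.concatenate([lat_warm, lat_cool])
obs = 25 - 20 * lats + rng.normal(0, 0.5, lats.size)   # station readings

# Method 1: simple average of all station readings.
simple = obs.mean()

# Method 2: average stations within each of 10 equal-area latitude bands,
# then average the band means, so dense clusters don't dominate.
bands = np.digitize(lats, np.linspace(0, 1, 11)) - 1
gridded = np.array([obs[bands == b].mean() for b in range(10)]).mean()

print(f"true field mean : {true_mean:.2f} C")
print(f"simple average  : {simple:.2f} C  (biased warm by station clustering)")
print(f"gridded average : {gridded:.2f} C")
```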

      • Mosher, whether you average absolutes or anomalies you end up with the same issues; you just don’t see them with anomalies. Since you assume you don’t have to correct for altitude with anomalies, the “average” includes higher variance due to the lower temperature/density anomalies, producing a nice tidy number that diverges from the energy reality the “average” is supposed to represent. I believe that is why some believe that global mean surface temperature and/or anomaly is pretty useless by itself.

  6. But aren’t we at the point where data, fixed or not, is no longer relevant? The zeitgeist seems to be ‘Climate Change is Real’ and the momentum is strongly in that direction. Now with insurance companies on board, AGW is a done deal. The Emperor may be clothed or naked. Who cares? There is too much money to be made in these comet pills.

    When people believe that one sure sign of the reality of global warming is extreme cold weather, well . . . .

  7. The hope in the AGW camp that temperatures will rise is palpable. It diminishes the credibility of the collected and revised meteorological data.

  8. Seems Mesoman is on to the real issue.

    A faulty cable causing readings to be low. It has been repaired, move along folks, nothing to see here.

    Sometimes it’s not corruption after all.

    http://moyhu.blogspot.com/2014/06/ushcn-tempest-over-luling-texas-theres.html

    • A systematic failure mode in thermometer telemetry cables? A growth industry which makes for hot news. A growing trend … :-)

      • Weedwhackers and Troybuilts causing a confirmed bias to temperature measurements in the cool direction.

        Perhaps some more pvc pipe is in order, we must protect the cables for data integrity’s sake.

        Raw data is the best but I like my steak temperature adjusted.

  9. Until Mosh turns up it seems appropriate to post this double header, which was a comment I originally made to Edim some months ago. I will comment on it and the Goddard article separately.

    ——— ———–

    Sorry for this long response. I have at various times asked Mosh why historic temperatures are frequently cooled. There is no better example than with GISS which, between being outlined at the Congress hearing in 1988 and today, has been cooled. The first part of this is my various links related to this. The second part is Mosh’s response as to why temperatures are retrospectively cooled. I don’t want to misrepresent Mosh, so I am not sure he was directly responding to the Hansen data but more to the general question.

    ——- ——-

    http://image.guardian.co.uk/sys-files/Environment/documents/2008/06/23/ClimateChangeHearing1988.pdf

    see figure 1 for global 5 year mean

    here is latest giss

    temperatures seem to have warmed in later years and cooled in 1940’s

    http://data.giss.nasa.gov/gistemp/graphs_v3/

    hansen lebedeff 1987

    http://pubs.giss.nasa.gov/docs/1987/1987_Hansen_Lebedeff.pdf

    RESPONSE

    Steven Mosher | September 27, 2013 at 11:18 pm |
    Sure tony.

    First, it’s hard to reconstruct piece by piece all the changes that
    VARIOUS people made that result in the changes you see.

    But let me have a whack.

    First, understand that the GISS answers are the result of
    Data input and Algorithm.

    1. Data input.
    There are two principal causes. First is the change in the core dataset. The moves through various versions of USHCN will result in changes because the processing of that data changed. Essentially the big adjustments for TOBS and other bits in the US.
    By looking at datasets outside USHCN we can see that these adjustments are justified. In fact the adjustments are calibrated by looking at hourly stations close to the USHCN stations.
    Next, the GISTEMP algorithm will change the estimates of the past
    as new data for the present comes in. This has to do with the RSM (reference station method). This seems bizarre to most folks, but once you walk through the math you’ll see how new data about, say, 1995 changes what you think about 1945. There are also added stations, so that plays a role as well.

    2. Algorithm side of things. You have to walk back through all the papers to get an idea of the changes. But they do impact the answer.

    The fundamental confusion people have is that they think that global indices are averages. And so if Hansen averaged 1945 in 1987, then why does his average of 1945 change in 2012? Makes no sense, right?
    Well, it does make sense when you understand that

    1. These algorithms do not calculate averages. They estimate fields.
    2. If you change the data (add more, adjust it, etc.), your estimate of the past will change.
    3. If you improve the algorithm, your estimate of the past will change. It SHOULD change.

    I’ll illustrate this with an example from our work.

    To estimate a field we have the climate field and a correlation field.
    When we go back in time, say before 1850, we make an assumption.
    The correlation structure of the past will be like the structure of the present. A good skeptic might object.. how do you know?
    Well, the answer is.. we don’t. That’s why it has to be assumed.
    The structure could be different. I imagine somebody could say
    “use this structure I made up.” Well, you could do that, you could calculate that. You could make a different assumption.. not sure how you would justify it. Therefore, if we get new data which changes our understanding of today, that will cascade and reform what we thought the past was.. principally because of the uniformity assumption.

    What is kewl is that there are a bunch of data recovery projects going on.. With our method we dont need long records. So,
    I have predictions for locations in 1790. That prediction was made using a climate field and correlation field. There are no observations at that location. When the recovery data gets posted then I can check the prediction.

    http://judithcurry.com/2013/09/27/95/#comment-388617

    —— ——– ——

    tonyb

    • Bob Ludwick

      @ tonyb

      So If understand Mosh correctly, our global temperature history, which we use to justify trillion dollar political decisions, is actually the output of an algorithm rather than the output of an ensemble of thermometers? And that the algorithm that processes new instrument readings to produce the current Temperature of the Earth also retroactively adjusts historical temperature records? And the algorithm can produce precision temperature data 200+ years old for locations at which there were NO direct observations.

      As Mosh would say: kewl!

      Modern science knows no limitations.

      • Steven Mosher

        % The data for this station is presented below in several columns and in
        % several forms. The temperature values are reported as “raw”,
        % “adjusted”, and “regional expectation”.
        %
        % The “raw” values reflect the observations as originally ingested by
        % the Berkeley Earth system from one or more originating archive(s).
        % These “raw” values may reflect the merger of more than one temperature
        % time series if multiple archives reported values for this location.
        % Alongside the raw data we have also provided a flag indicating which
        % values failed initial quality control checks. A further column indicates
        % dates at which the raw data may be subject to continuity “breaks”
        % due to documented station moves (denoted “1”), prolonged measurement
        % gaps (denoted “2”), documented time of observation changes (denoted “3”)
        % and other empirically determined inhomogeneities (denoted “4”).
        %
        % In many cases, raw temperature data contains a number of artifacts,
        % caused by issues such as typographical errors, instrumentation changes,
        % station moves, and urban or agricultural development near the station.
        % The Berkeley Earth analysis process attempts to identify and estimate
        % the impact of various kinds of data quality problems by comparing each
        % time series to neighboring series. At the end of the analysis process,
        % the “adjusted” data is created as an estimate of what the weather at
        % this location might have looked like after removing apparent biases.
        % This “adjusted” data will generally be free from quality control
        % issues and be regionally homogeneous. Some users may find this
        % “adjusted” data that attempts to remove apparent biases more
        % suitable for their needs, while other users may prefer to work
        % with raw values.
        %
        % Lastly, we have provided a “regional expectation” time series, based
        % on the Berkeley Earth expected temperatures in the neighborhood of the
        % station. This incorporates information from as many weather stations as
        % are available for the local region surrounding this location. Note
        % that the regional expectation may be systematically a bit warmer or
        % colder than the weather stations by a few degrees due to differences
        % in mean elevation and other local characteristics.
        %
        % For each temperature time series, we have also included an “anomaly”
        % time series that removes both the seasonality and the long-term mean.
        % These anomalies may provide an easier way of seeing changes through
        % time.
        %
        % Reported temperatures are in Celsius and reflect monthly averages. As
        % these files are intended to be summaries for convenience, additional
        % information, including more detailed flagging and metadata, may be
        % available in our whole data set files.
        %

      • BL – using an algorithm is unavoidable unless you want only a tabulation of the temperature records. That’s kind of hard to make any sense of.

        Instead, you could use an algorithm called “The Daily Average” and use that output to create a chart.

      • Bob Ludwick

        @ Steven Mosher

        Thanks Mosh

        And from the described procedure we arrive at a multi-century time history of data points, temperature of the earth for year x vs year x, with a precision of hundredths of a degree, from which an anthropogenic CO2 signal can be extracted with enough certainty to conclude that anthropogenic CO2 will prove catastrophic if it is not curtailed via massively taxing and regulating ‘carbon signatures’?

        I am not a scientist and it may in fact be true that if tortured as you described the data will indeed reveal the truth, however reluctantly. As for me, having some experience in measuring and maintaining temperature in a heavily insulated heat chamber controlled by a PID controller and realizing that hundredths of a degree accuracy under even those conditions is imaginary, I would not be confident in using it in deciding anything more critical than ‘Do I want fries with that?’.

        By the way, I am confident that the data was indeed processed as described and would not attempt to duplicate your efforts, even if I had the expertise.

        As an ‘outside observer’ it appears to me to be an excellent example of how Rube Goldberg would demonstrate ‘GIGO’.

      • Steven Mosher

        Bob

        “And from the described procedure we arrive at a multi-century time history of data points, temperature of the earth for year x vs year x, with a precision of hundredths of a degree,”

        the precision is NOT to hundredths. You, like many others, do not understand what the Average represents.

        Let me give you a simple example.

        i have a scale.

        I have a rock

        the scale reports to the closest pound.

        I measure the rock 10 times:

        1,2,1,2,2,2,1,2,1,2

        I now estimate the weight given all the information:
        the average is 1.6.

        does this mean I have measured to 1/10th? No.

        what’s it mean?

        does it mean I know the weight to within 1/10th?

        No.

        It means my best estimate is 1.6. That is, I predict that IF you measured it with a more precise scale, 1.6 would be closer to the truth than 1 or 2.
        1.6 is a prediction that minimizes the error.

        We can test this. given the data and what you know about the scale, bet me. do you have a better estimate of the true weight and how did you compute it?

        If I weighed it 100 times and came up with 1.55, then 1.55 would be the best estimate.

        we can actually test whether you have a better estimate of temperature.
        It’s easy.
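
To put numbers on the rock-and-scale example above, here is a quick simulation (my own sketch of the same idea; the 0.3 lb of random measurement noise is an assumption added so the whole-pound readings dither between 1 and 2):

```python
# Sketch: repeated readings from a scale that reports only whole pounds can
# still pin down the weight, provided the error around the true value is
# random. The 0.3 lb noise level is an assumption made for illustration.
import numpy as np

rng = np.random.default_rng(3)
true_weight = 1.6   # pounds; unknown to the person doing the weighing

def readings(n):
    # Each reading is the true weight plus random error, rounded to a pound.
    return np.round(true_weight + rng.normal(0, 0.3, n))

for n in (10, 100, 10000):
    print(f"{n:>5} readings -> average {readings(n).mean():.2f}")
# The average settles near 1.6 (within a few hundredths), even though no
# single reading is finer than a pound: averaging beats the resolution of
# the instrument only because the errors are random and roughly unbiased.
```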

      • Bob Ludwick

        @ Steven Mosher

        “the precision is NOT to hundredths. You like many others do not understand what the Average represents.”

        Of course it isn’t. And while I don’t understand what the Average TOE represents, I DO understand that it is the output of a process, described by you, that would embarrass Rube Goldberg. And I am NOT challenging your position that it represents the ‘best estimate’ of the TOE. Emphasis on the ‘estimate’. The whole procedure that you described appears to me to be, for all practical purposes, the ‘Climate Science’ version of Isaac Asimov’s 1955 story ‘Franchise’.

        Yet that doesn’t stop the headlines from breathlessly declaring that ‘May 2014 shatters the record for the warmest May in history!’ (quoting, inexactly, a recent example provided by FOMD). Evidence of the shattered record: a difference, after the data has run the gauntlet described by you above, of somewhere around 0.02 degrees.

        I wouldn’t care about the obvious silliness of it all except for the fact that such ‘data’ is regularly cited as evidence of imminent catastrophe, proof that ACO2 is the culprit, and justification for essentially wiping out our energy and transportation infrastructure by forcing reductions in ACO2 of 90+ %. Now THAT I care about.

      • Mosh’s example (June 28, 2014 at 5:52 pm) of measurements using a rock and a scale makes two critical assumptions that I doubt he can show hold for temperatures: 1) the error is random; 2) the measurement is unbiased.
        For example, if the scale measures accurately to the nearest tenth but reports the weight truncated to the whole unit (e.g., 1.7 reported as 1, as in the practice of reading a measurement to the nearest value under the mark), no amount of averaging would improve the estimate for a rock weighing 1.7, and the estimate would never be unbiased.
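
The same kind of simulation makes this counterpoint: give the scale a systematic bias (truncating down to the whole pound instead of rounding) and no amount of averaging recovers the true weight.

```python
# Sketch: a biased instrument defeats averaging. Same setup as above, but the
# scale truncates downward to the whole pound instead of rounding.
import numpy as np

rng = np.random.default_rng(4)
true_weight = 1.7   # pounds

truncated = np.floor(true_weight + rng.normal(0, 0.3, 100_000))
print(f"average of 100,000 truncated readings: {truncated.mean():.2f}")
# Converges to about 1.15, not 1.7: the systematic part of the error survives
# any amount of averaging, which is the commenter's point about unbiased
# measurement being a necessary assumption.
```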

  10. And Steve Goddard is famous in my book for getting the triple point of water wrong, and more famously for refusing to acknowledge that he was indeed wrong.

    Sorry about that

    • Oh Bob, give your cult like rants a rest……..

      • A tribe by any other name would smell a cult.
        ===========

      • Teddi,
        If your cult wants to worship Steve Goddard, go right ahead.
        He gets things wrong a lot.
        Point being, it’s not cooling, no matter what Kim believes.

    • Which error is more egregious: not understanding triple points on phase diagrams, or claiming pHs >7, heck >8, are acidic? Obviously it must be not understanding a phase diagram, because climate scientists, experts in their field, get through peer review with the latter.

      • Or not progressing past the junior high definition of acidity?

        Somewhere along the line I learned that water was both an acid and a base.

        So then how come all the peer reviewed articles call it acidification rather than neutralization?

    • So no doubt you disapprove of Mann using a proxy upside down and then refusing to admit it, even after Kaufmann admitted he had done the same stupid thing and the original author confirmed what the right way up was. And that’s only one of many basic errors from mainstream climate scientists. Double standards!

      • Well I might have an opinion either way if it made a difference to the results.

        Newsflash: the orientation of the proxy didn’t have an effect on the results.

        Got that?

        Some of the proxies didn’t make it through the Mannomatic reconstruction algorerythim.

  11. “… the main point is that this group is rapidly self-correcting – the self-correcting function in the skeptical technical blogosphere seems to be more effective (and certainly faster) than for establishment climate science.”
    ___________________________________________________

    Would that the same self-correcting impulse existed within the climate clerisy. There would be no need of external auditors or even external skeptics. It is the absence of quality control, to say nothing of skepticism, among climate scientists that makes the “technical blogosphere” both inevitable and necessary.

  12. (I drafted what follows before the appearance of this morning’s post.  Perhaps the issues with surface temperature measurements bear on why climate model predictions seem to better track surface temperature measurements than they do temperature readings from satellites?)

    ” … there really isn’t reliable evidence of a nonzero trend since 1997, in a purely statistical sense.”

    This is from the blog at http://tamino.wordpress.com/2014/02/23/uncertain-t/

    After reading this, I looked again at the graph Roy Spencer posted, showing a comparison of model predictions and temperature anomaly measurements from both surface stations and from satellites:  http://www.drroyspencer.com/2014/02/95-of-climate-models-agree-the-observations-must-be-wrong/

    It occurred to me that a prediction of ZERO temperature change over the 1983-2013 interval might yield about the same average prediction error as would the prediction of the average of the climate models (black line.) I used the Spencer graph to create the different prediction error series (1984-2013): (1) Satellite observations (blue line) vs zero change; (2) Surface observations (green line) vs zero change; (3) Model predictions vs satellite observations; and (4) Model predictions vs surface observations.

    Mean prediction errors (1984-2014) are estimated as (1) 0.118; (2) 0.201; (3) 0.178; and (4) 0.095.

    Both parametric and nonparametric tests show the average prediction error in (1) is smaller than the average prediction error in (3), while the average prediction error in (2) is greater than the average prediction error in (4). All tests are significant at p<0.01.

    This exercise suggests that a prediction of "no change in temperature" offers a slight improvement over the average climate model predictions for satellite data, while the average model has an advantage over the "no change" prediction for temperatures measured at the surface.
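
A sketch of the comparison described above (placeholder numbers, not the values digitized from Spencer’s graph): compute the mean absolute prediction error of a “no change” forecast and of the model-mean forecast against the same observed series, then compare.

```python
# Sketch of comparing prediction errors: "no change" forecast vs model mean.
# The arrays here are placeholders, not the digitized values from the graph.
import numpy as np

obs = np.array([0.10, 0.05, 0.20, 0.15, 0.25, 0.18])         # hypothetical
model_mean = np.array([0.15, 0.20, 0.28, 0.33, 0.40, 0.45])  # hypothetical

no_change = np.zeros_like(obs)   # predict zero anomaly change throughout

mae_no_change = np.abs(obs - no_change).mean()
mae_model = np.abs(obs - model_mean).mean()

print(f"mean abs error, no-change forecast : {mae_no_change:.3f}")
print(f"mean abs error, model-mean forecast: {mae_model:.3f}")
# Whichever error is smaller "wins"; a paired test on the two error series
# (e.g. scipy.stats.wilcoxon) can check whether the difference is significant.
```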

    • The logical conclusion from your assertion that “zero would be just as bad” (to paraphrase) is that the range of predictions is far too small.

      This is yet further proof that natural variation is far far bigger than has been allowed in the models.

      I would suggest this is the primary argument of skeptics.

      We do not predict zero warming. What we predict is that natural variation is significant (or I would suggest dominant).

      As such the whole approach of climate researchers is wrong. There is no evidence in the climate record that would force any skeptic scientist to reject the hypothesis that the global temperature change is entirely natural variation. Therefore the only evidence we have of the effect of CO2 is the laboratory measurements of the CO2 greenhouse properties:

      http://scottishsceptic.wordpress.com/2014/06/25/my-best-estimate-less-than-1c-warming-if-co2-level-is-doubled/

      • I believe you may have misinterpreted my intent. It was NOT to push back at a straw man claiming no warming occurred 1983-2013. Rather, it was to satisfy my curiosity about whether climate models have performed any better in predicting temperature anomalies than would some arbitrary statement such as “no-change has occurred.”

      • Bill: “it was to satisfy my curiosity”. OK, I understand your point.

        I used pretty much the same analysis when the Met Office used to do yearly predictions of global temperature. I recall their average warming was 0.05C and their average error was 0.06C!!!

        I did check whether their estimate was better than a “same as last year” and unfortunately, it was about 0.05C better.

        Unfortunately, this can only be anecdotal, because, AND VERY RELEVANT HERE, when I went back to use my original analysis and started checking it, I couldn’t find my own data, and found that the Met Office data had changed so far beyond recognition that it no longer corresponded to my original analysis.

        It appeared to me they had completely changed the data – I could find no way to get my original data (which was done direct from the HADCRUT data) to match.

        I still cannot understand how nine years of data in the very near past could have changed so much. If a company ran their accounts like that, the Tax people would be down on them like a tonne of bricks and the directors would be in prison.

      • Leave no trails, fingerprints, footprints, smoking guns or
        similar information that may incriminate. Tsk!

    • Spencer’s graph has some problems. The actual HADCRUT4 trend from 1983-2013 would be near 0.5 C. He started his curve from the 1983 El Nino peak, which shifts his observation line downwards. Here is a HADCRUT4 plot with the actual 1983-current trend. The models are much closer to this trend (0.16 C per decade) than zero would be.

      http://www.woodfortrees.org/plot/hadcrut4gl/from:1970/mean:12/plot/hadcrut4gl/from:1983/trend

  13. I’ve now had time for a much fuller analysis.

    Certainly, the case of Luling seems to have been an outlier, and the explanation about faulty equipment rings true.

    However, I have now done a similar analysis across the whole of Kansas, using January 2013 data.

    Estimated data tots up to 8 out of 29 USHCN sites, a ratio of 28%.

    On all but one site, USHCN have adjusted up actual temperatures, by an average of 0.46C. This is in addition to the usual cooling the past by about half a degree.

    My understanding is that TOBS and other adjustments were always applied to past temperatures, while present ones were left alone.

    http://notalotofpeopleknowthat.wordpress.com/2014/06/28/ushcn-adjustments-in-kansas/

  14. “A good skeptic might object.. how do you know?
    well, the answer is.. we dont. thats why it has to be assumed.”

    Have a nice day.

  15. USHCN data changes every day. They change data without error flags.

    An example of one days changes just for Jan 1998.

    http://sunshinehours.wordpress.com/2014/06/22/ushcn-2-5-omg-the-old-data-changes-every-day/

  16. When I started the petition asking for an investigation of the UEA, it was largely because I had grown very suspicious of the behaviour of the HADCRUT dataset. What I mean by that, is that it appeared to be manipulated consistently in a particular way.

    One of the most worrying aspects, was the way any cooling figures were delayed, whilst warming ones were hurried out. Indeed, I used to quite enjoy it when they were late, because I knew there would be something juicy in the figures.

    The big one was February – probably 2007 – when, after a series of “warmest ever” articles in all the papers, it was the “coolest in 14 years”.

    However, for the two weeks before that figure came out, there was an absolute deluge of global warming articles. ABSOLUTE DELUGE.

    When the figure came out, I even tried producing a press release – but the press were sick to death of climate by then — and then I realised why there had been the storm.

    And, over the next few months, that “14-year coolest” gradually disappeared.

    What was worse, was when I started finding that changes going back many years were mysteriously appearing in the dataset. At first I just accepted them. But eventually I realised that this was very characteristic of data fiddling. Unfortunately, I hadn’t kept earlier data – so I couldn’t prove something was going on.

    Let me put it this way. I used to work in a factory where people were constantly trying to pull the wool over your eyes. So, you learn to have a “sixth sense” for fiddled data. By the time of Climategate I was convinced something was going on – I didn’t know exactly what, but I knew that somehow the data was being “massaged”.

    Or as I now call it “upjusted”.

    After Climategate I stopped bothering the with figures. I knew the people couldn’t be trusted, so it was pretty pointless working in that area.

    However seeing how many stories keep coming out about tampering, I am if anything more convinced that the data is being fiddled.

    I’m now at a stage where the only figure I will trust is one where all the data and workings are open to view and where I can work through the procedure myself.

  17. The data is ridiculously bad, the people in charge of this very bad data cover up that it is so bad and in fact they invent methods to make the bad data worse.

    I cannot think of one area in science, medicine or engineering where such data and data manipulation practices would even be allowed let alone condoned or bragged about.

    We should thank our lucky stars the NASA/NOAA people do not work for Boeing or Airbus and were designing aircraft. They would be falling out of the sky like autumn leaves.

    Only in climatology, it seems, is this level of incompetence and cover-up considered “science”.

    • The irony in all this, is that the standards of engineering are far far higher than academia – yet our madmen in government call skeptic engineers “deniers” for pointing out that what the academics do wouldn’t be tolerated outside academia.

    • Funny you should say that, have they been involved in the X-51A Waverider debacle?

      • Debacle? You do know what the “X” means?

      • Yes I do; how many other X aircraft had 2 flight failures in a row? I also know that the earlier X series had failures, especially the X-15 crash that killed Major M. J. Adams in 1967.
        But you would have thought, with all the simulation and pre-testing that goes on today, that you wouldn’t get 2 in a row.
        It was good to see that the X-51 finally proved the worth of the work done on the X-43.
        It will be even better to see something based on it in flight, if we are ever allowed to, that is.

  18. When you simply bend over backwards determined not to see something you don’t want to see, a certain orifice starts to come into sight.

    Pointman

  19. I think it is entirely up to the climate community to make sure that adjustments to the temperature data are appropriate. I wonder why they don’t seem to care. Any paper that uses temperature data that is adjusted in the future should no longer be cited if those adjustments make the paper’s conclusions irrelevant. Any paper that then cites the data in that paper and that attempts to build on the conclusions reached should also no longer be cited. There would be no quicker way to make sure the adjustments are supported than to tell a bunch of scientists the adjustments make their previous work worthless.

    • That actually sounds like a really good reason to make sure that any adjustments enforce the current climate concept. The climate community has a clear dog in the race of making sure the slope on the temp graphs remains clearly positive.

      • As a random example, not knowing if adjustments would matter or not: “Annual and seasonal air temperature trend patterns of climate change and urbanization effects in relation to air pollutants in Turkey” was published in 1997. It cites papers from the 1980s. It is in turn cited by later papers, one as recently as this year. If the earliest ones are now wrong due to adjustments and it matters to the conclusions of the latest ones, they are all wrong.

  20. I’ve just made a comment that is directly relevant but it is in moderation as it contains lots of links and seems to have reproduced itself. If anyone notices, perhaps they could delete the repeat.

    Would you bet your house on the accuracy of a temperature reading prior to the use of properly sited digital stations? No. Whilst many are individually good, many more have a string of associated problems. Even the good ones have probably been substantially adjusted.

    I wrote about some of the myriad problems with taking accurate temperatures here.

    http://wattsupwiththat.com/2011/05/23/little-ice-age-thermometers-%E2%80%93-history-and-reliability-2/

    The further back in time, the more potential for problems there are. Thermometer accuracy, accuracy of readings, calibration, time of day, recording a true max and min, use of appropriate screens – there are many and varied ways of messing up a temperature. If you really want to try to get to the REAL temperature of a historic record then you need to spend millions of euros and several years examining 7 historic European temperature records, as Camuffo did.

    The result is a 700-page book which I have had to borrow three times in order to read it properly.

    http://www.isac.cnr.it/~microcl/climatologia/improve.php

    Do all historic temperatures get such five-star analysis? No, of course not. We should treat them all with caution and remember Lamb’s words about them: that ‘we can understand the tendency but not the precision.’ Some will be wildly wrong and misleading, some will be good enough. Do we know which is which? I doubt it.

    I have no doubt that temperatures have ranged up and down over the centuries, as there is other evidence to support this. Do we know the global temperatures to tenths of a degree back hundreds of years? Of course not. Do we know a few regional examples to an acceptable degree of accuracy? Yes, probably.

    Have temperatures been amended from the raw data? Yes. Has it been done as part of some deliberate tampering with some of the record, rather than as a scientific adjustment for what are considered valid reasons? I remain open to the possibility but am not a conspiracy theorist.

    Someone like Mosh – whom I trust – needs to keep explaining to me why the past records are adjusted. With this in mind, it needs clarification as to why the readings from the famous 1988 Hansen hearing differ in part from the ones GISS then produced (see previous post that was in moderation). I am sure there must be a valid reason but as yet no one has told me what it was.

    tonyb

    • They enjoy the secret game they play.

    • Steven Mosher

      Tony, since we use raw daily data for the vast, vast majority of our data and since the raw isn’t changed, I can only surmise. Plus there are various things people are talking about when they refer to changing the past.

      1. Changing an ACTUAL station record
      2. Recomputing the global average and coming up with a cooler past

      those are TWO different issues

      • Mosh

        Yes, these are two separate issues but somewhat interrelated. My prime interest is in 2) but obviously if 1) has occurred that would affect 2) if 1) was a historic record.

        tonyb

      • Steven Mosher

        yes tony they are related.

        But for the most part I find the discussion uninteresting. These guys use adjusted data. As their data input changes (more data or less) and as their adjustment code changes, you are bound to see changes in the adjusted data. That may drive a change in the global average.

        I dont like the explicit adjustment approach.

        I prefer to take the raw data and calculate an estimate of what we expected to see given the data

  21. “…the self-correcting function in the skeptical technical blogosphere seems to be more effective (and certainly faster) than for establishment climate science.”

    This should be cause for sober self-reflection on the part of the establishment “experts.” Also the more rabid citizen warmists among us.

    Of course it won’t, which should be further cause for sober self-reflection on the part of the establishment “experts.” Also the more rabid citizen warmists among us.

    Of course it won’t, which should be…

    • The skeptical bench strength and farm team is weak, meaning meager opportunity for inertia in assertions made by skeptics. Hence a quick recovery

    • This is not just a phenomenon restricted to climate, or even wider science.

      The internet has fundamentally changed the power structures in society away from the “establishment” to the “peer-to-peer” communication networks (of which this is a good example – me speaking to thee).

      When printing was developed it fundamentally changed the nature of society – because whereas formerly the Catholic church was the prime authority, suddenly every Tom, Dick and Sally could read the Bible for themselves, and that became an alternative and often conflicting authority.

      Now the internet means that you and I can go and find the data ourselves, can read the work of both academics and critics online, and now, rather like the protestants, we are not as convinced of the omnipotence of the church of Science.

      Likewise, the press are also losing their place as the de facto source of news and the de facto source of public opinion. Now we have alternative sources of news and views.

      These are all fundamentally challenging the power of the “establishment” whether in science, or indeed history (http://mons-graupius.co.uk) or even in politics where across Europe the “non-establishment” parties made massive progress. (And the Arab spring may well be the same social revolution spread by social media).

      So, rather than “sober reflection”, I suggest it is time the science establishment recognised that its historical position as omnipotent judge of scientific “truth” is now at an end.

      http://scottishsceptic.wordpress.com/2013/12/28/the-citizen-scientist-a-paradigm-shift-in-science/

  22. Peter Webster

    Perhaps I am naive, but I have long believed in the sanctity of meteorological data. As a junior meteorologist in Australia, we were trained to be “observers” and followed the tried and true WMO methods of recording the state of the atmosphere. Thus, I have long believed in station data (as distinct from some form of reanalysis) as being a faithful rendition of the truth. If somehow these records have been rendered statistically then this needs to be laid out in the open. There is too much at stake for the data record to become questionable. NOAA, whose credibility is low to begin with, needs to get on top of this issue through an open and public examination. There is just too much at stake.
    PW

    • I didn’t know the credibility of NOAA was low.

      I at least give them credit for knocking down Trenberth’s unfounded speculation that the 2010 Russian drought and Pakistan flood were indicators of some kind of manmade-warming-induced weather weirding rather than a natural event from jetstream blocking with no connection to CO2. Getting things wrong doesn’t seem to be as much of a sin for Trenberth as it is for Goddard somehow!

  23. ‘Remember the Saved Space’

  24. Judith, the problem is very real. Independent of Goddard, the past has been cooled and in some cases the present also warmed (the opposite of what should be done, and the opposite of what the NASA GISS site claims it does, using Tokyo as the example).
    I documented this for specific places (Reykjavik, Iceland; Sulina, Romania; Darwin, Australia), for US states (California and Maine), for entire countries (US, Australia, New Zealand), and for NCDC (NOAA), GISS (NASA), and HadCRUT4. GHCN v2 is worse than v1. HadCRUT4.2 is worse than 4.1.
    A lot of the warming (best guess is up to half) has been manufactured through improper homogenization, per the Steirou estimate (ref below).
    One example of one of the problems can be seen on the BEST site at station 166900 – not some poorly sited USHCN station, rather the Amundsen research base at the south pole, where 26 lows were ‘corrected up to regional climatology’ (which could only mean the coastal Antarctic research stations or a model), creating a slight warming trend at the south pole when the actual data shows none – as computed by BEST and posted as part of the station record.
    Homewood posted on Luling, Texas. Steirou and Koutsoyiannis presented an analysis of a sample of 163 GHCN stations at the European Geosciences Union 2012 General Assembly, showing a systemic warming bias in GHCN. The presentation is available online at itia.ntua.gr/1212. Rewarding read.

    That said, Goddard was wrong in the way he computed the consequences of infilling. But he is right about the big picture. All anyone has to do is get an older ‘official record’ and compare it to a newer ‘official record’, or get raw data and compare it to the latest homogenized data.

    • Oops. NASA GISS claimed adjustment to compensate for UHI over time. They say you warm the past rather than cool the present (which would lead to discord with present ‘Actuals’).

    • Steven Mosher

      “One example of one of the problems can be seen on the BEST site at station 166900 – not some poorly sited USHCN station, rather the Amundsen research base at the south pole, where 26 lows were ‘corrected up to regional climatology’ (which could only mean the coastal Antarctic research stations or a model), creating a slight warming trend at the south pole when the actual data shows none – as computed by BEST and posted as part of the station record.”

      The lows are not Corrected UP to the regional climatology.

      There are two data sets. You are free to use either.
      You can use the raw data
      You can use the EXPECTED data.

      The regional expectation is the best estimate given the data.
      Of necessity all local stations will deviate from the expectation.
      Given 40,000 stations you will, and you must, find any number
      of odd cases. Why? Because the expectation is an optimal surface
      fit to the data and the fit is not perfect for a variety of reasons.

      The ‘adjusted’ series shows you what the geostatistical model predicts for this station. In the case of Amundsen, there is a pattern of residuals that suggests one of two things

      A) A local climate issue related to inversion layers
      B) a poor model fit due to the closest station being far away.

      For example, in the US we have a super high density. At the south pole not so many. That means the drift model for the south pole is going to have worse residuals.

      If the problem is due to A), then adjusting the geostatistical model to account for inversion layers is an approach. Robert Way and I have been tinkering with various approaches. In the end the GLOBAL answer doesn’t change; the LOCAL DETAIL does.
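
As a generic illustration of what a “regional expectation” and its residuals look like, here is an inverse-distance-weighted stand-in with invented numbers; BEST’s actual method is kriging-based and more involved.

```python
# Generic sketch of a "regional expectation": estimate what a target station
# "should" read from its neighbours (inverse-distance weighting here, as a
# stand-in for BEST's kriging), then inspect the residuals.
# All station values below are invented.
import numpy as np

rng = np.random.default_rng(5)
months = 120

# Neighbour anomaly series (deg C) and their distances (km) to the target.
neighbours = rng.normal(0, 0.8, (4, months)) + np.linspace(0, 0.3, months)
distances = np.array([120.0, 250.0, 400.0, 900.0])

# Target station: same regional signal, plus its own local quirks.
target = neighbours.mean(axis=0) + rng.normal(0, 0.4, months)

weights = 1.0 / distances
weights /= weights.sum()
regional_expectation = weights @ neighbours   # weighted neighbour average

residuals = target - regional_expectation
print("mean residual:", round(residuals.mean(), 3))
print("residual std :", round(residuals.std(), 3))
# A run of persistently one-sided residuals at a station is what would get
# flagged as a local issue (or a poor model fit where neighbours are sparse).
```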

      • Mosher, this is the BEST data for Luling:

        http://berkeleyearth.lbl.gov/stations/26477

        You have 6 moves post-1949, and a flat temperature record between 1939 and 2014 is adjusted to become almost a degree of warming.

      • Steven Mosher

        The station moves mean that IT IS NOT THE SAME STATION.

        if I take a station at 0 feet ASL and move it to 1000 feet ASL I am
        measuring something different.

        rather than ADJUST the station for changes in location, we split the station record into 6 good parts

        A good station is one that doesn’t move. If I move a station from point A to point B it is NO LONGER GOOD. Instead, we split it into 2 stations,
        and all calculations treat it as two stations, both of which are GOOD with respect to a homogeneity-of-location quality test.

        NEXT. the series are NOT ADJUSTED. we dont apply a unique algorithm to each station and bump it up or down. Instead we calculate WHAT WE EXPECT the station would have reported IF it gave measures consistent with all its neighbors.

        When you adjust stations, say for instrument changes, you DISCRETELY add or subtract a quantity to the record and create an adjusted series.

        Add up a bunch of adjustments and you have a problem: what’s the right error propagation?

        Instead, we create the expectation that sums all deviations from the minimal surface.

        Read the following VERY CAREFULLY. VERY CAREFULLY

        You have 3 choices of data from us:
        1. RAW
        2. Expected (we call it “adjusted”; note the scare quotes)
        3. Regional expectation.

        Depending on what you want to do, you pick the data you want.

        % The data for this station is presented below in several columns and in
        % several forms. The temperature values are reported as “raw”,
        % “adjusted”, and “regional expectation”.
        %
        % The “raw” values reflect the observations as originally ingested by
        % the Berkeley Earth system from one or more originating archive(s).
        % These “raw” values may reflect the merger of more than one temperature
        % time series if multiple archives reported values for this location.
        % Alongside the raw data we have also provided a flag indicating which
        % values failed initial quality control checks. A further column
        % dates at which the raw data may be subject to continuity “breaks”
        % due to documented station moves (denoted “1”), prolonged measurement
        % gaps (denoted “2”), documented time of observation changes (denoted “3”)
        % and other empirically determined inhomogeneities (denoted “4”).
        %
        % In many cases, raw temperature data contains a number of artifacts,
        % caused by issues such as typographical errors, instrumentation changes,
        % station moves, and urban or agricultural development near the station.
        % The Berkeley Earth analysis process attempts to identify and estimate
        % the impact of various kinds of data quality problems by comparing each
        % time series to neighboring series. At the end of the analysis process,
        % the “adjusted” data is created as an estimate of what the weather at
        % this location might have looked like after removing apparent biases.
        % This “adjusted” data will generally to be free from quality control
        % issues and be regionally homogeneous. Some users may find this
        % “adjusted” data that attempts to remove apparent biases more
        % suitable for their needs, while other users may prefer to work
        % with raw values.
        %
        % Lastly, we have provided a “regional expectation” time series, based
        % on the Berkeley Earth expected temperatures in the neighborhood of the
        % station. This incorporates information from as many weather stations as
        % are available for the local region surrounding this location. Note
        % that the regional expectation may be a systematically a bit warmer or
        % colder than the weather stations by a few degrees due to differences
        % in mean elevation and other local characteristics.
        %
        % For each temperature time series, we have also included an “anomaly”
        % time series that removes both the seasonality and the long-term mean.
        % These anomalies may provide an easier way of seeing changes through
        % time.
        %
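
        As a rough sketch of the record-splitting (“scalpel”) idea described here, the snippet below splits one station’s series at documented break dates and treats each fragment as its own station. It is an illustration only, not the Berkeley Earth code; the station name, break dates, and values are invented.

        import pandas as pd

        def split_at_breaks(station_df, break_dates):
            """Split one station's monthly series into separate records at each
            documented break (move, instrument change, time-of-observation change).
            Each fragment is then treated as an independent station downstream."""
            df = station_df.sort_values("date").reset_index(drop=True)
            edges = [pd.Timestamp.min] + sorted(pd.to_datetime(break_dates)) + [pd.Timestamp.max]
            fragments = []
            for i in range(len(edges) - 1):
                mask = (df["date"] >= edges[i]) & (df["date"] < edges[i + 1])
                frag = df[mask].copy()
                if not frag.empty:
                    frag["station_id"] = f"{frag['station_id'].iloc[0]}_{i}"
                    fragments.append(frag)
            return fragments

        # Invented example: one named station with two documented moves becomes
        # three shorter records, none of which is ever "adjusted".
        station = pd.DataFrame({
            "station_id": "LULING_TX",
            "date": pd.date_range("1940-01-01", "2013-12-01", freq="MS"),
            "tavg_c": 19.5,  # placeholder values
        })
        parts = split_at_breaks(station, ["1962-06-01", "1995-03-01"])
        print([p["station_id"].iloc[0] for p in parts], [len(p) for p in parts])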

      • SM, what is the “empirical break”?

      • The more I look at the Luling data, the less I’m buying these methods of “adjustment.”

      • Adjusting a station only because it doesn’t seem to match neighboring stations isn’t a good reason. You would have to have some good reason to suspect the data from the station isn’t right other than that.

        This “adjustment” based on other stations seems to be the commonality in the warming of the temperature record – not necessarily the warming of the Earth.

      • OK, based on mesoman’s comment, it seems this station DID need adjustment. I retreat from my earlier position. But still skeptical of course.

      • Steven Mosher

        jim

        First, there are NO ADJUSTMENTS.

        the “adjusted” data represents what we EXPECT if
        A) the station had not moved, had not changed TOB, had not had instrument changes.
        B) the station is like its neighbors.

        Imagine 10 stations within 100 km.
        9 never move or have instrument changes.
        1 has 6 moves and an instrument change.

        First we split that one station into 6 different stations, because WHILE IT HAD THE SAME NAME, ITS LOCATION CHANGED. It’s not one station.

        Next we fit a surface to all 10. Looking at all the data, IF we had to predict what a station reported given ALL the information, what would we predict?

        That’s the “adjusted” data, but the term is not a PRECISE DEFINITION of what happens mathematically.

        Again, suppose you had 9 stations that reported 12C for 10 years straight, and one station that reported 12C for 5 years, but at year 5 it moved. After it moved it recorded 11.5C.

        An adjustment approach would go in and say:
        “hey, the altitude of the station went up by 300 meters, I’m going to ADD an adjustment of 0.6C for lapse rate.” 11.5 gets adjusted to 12.1.

        We don’t do that. We say: given all the data, we expected this station to report 12C. We fit a surface to all the data and minimize the error.

        the difference between the raw and the expected is due to ALL errors.

        Now, an empirical break happens when all the stations show a flat trend and one spikes up to warming or cooling.

        We DON’T adjust this out. We simply split the record and say
        “it looks like there is a structural break in the time series here”.
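
        To make the contrast concrete, here is a toy numerical sketch of the two approaches using the 10-station example above. It illustrates the logic only and is not BEST’s algorithm; the numbers are the ones from the example.

        import numpy as np

        # Toy setup from the example: 9 stable stations read 12.0 C; a tenth reads
        # 12.0 C for five years, then moves uphill and reads 11.5 C afterwards.
        years = np.arange(2000, 2010)
        stable = np.full((9, years.size), 12.0)
        moved = np.where(years < 2005, 12.0, 11.5)

        # Approach A: explicit adjustment. Add a fixed lapse-rate offset (0.6 C in
        # the example) to every post-move value and carry that quantity forever.
        explicitly_adjusted = np.where(years < 2005, moved, moved + 0.6)

        # Approach B: regional expectation. Compute a (here trivially flat) regional
        # value from all stations each year and report that as the expected series;
        # the raw series is kept alongside it, nothing is overwritten.
        regional_expectation = np.vstack([stable, moved]).mean(axis=0)

        print("explicitly adjusted:  ", explicitly_adjusted)
        print("regional expectation: ", np.round(regional_expectation, 2))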

      • Steve, read this question very, very carefully before answering:
        The BEST data assumes that the station in question has undergone SIX moves since 1949. Now can you tell us how many physical changes this station has had that constitute moves since 1949?

      • SM – If you prefer to call it “expected” instead of “adjusted”, that’s fine with me. In the case of the Luling station, the numbers attributed to it are not the numbers it produced; as I said, call that whatever you like.

        In the Luling case, the data cable apparently was damaged. So, I admit your method caught the fact that the Luling station was faulty. I see that as a good thing, and it gives me more confidence in the method.

        However, knowing there is a problem with the station, whether that is determined algorithmically or by boots on the ground, IMO it should just be dropped and not used. The same goes for any station that’s been discontinued. Attributing data where none exists adds no additional information. So why bother at all?

      • Steven Mosher

        jim

        why use it at all?

        There are two approaches.

        Approach 1. determine APRIORI what constitutes a good station.
        Approach 2 use all the data.

        People say “well, just use all the good stations.” Problem? This is a choice.
        It is a choice that is SUBJECT TO ERROR.
        What error?
        CLASSIFICATION ERROR.

        1. How do you know what counts as a good station?
        2. Do you have tests that confirm the characteristics you use are IN FACT
        salient?
        3. How good is your metadata?

        So, approach 1 has several untested assumptions that lead to a non-estimable error.

        Approach 2 says use all the data and minimize the error.
        In the end you have the ability to test your assumptions
        How? take subsets of ALL data.

        People who suggest that we should only use good stations are bad skeptics. Why? Because they never question their classification criteria or their classification error. Note Steve McIntyre is included in this class of bad skeptics, so even smart people miss that choosing only good stations is NOT an error-free process, since it presupposes valid criteria and valid metadata.

      • But Steve, your own code has flagged part of the station’s data as invalid. A technician has confirmed it has a bad cable. Yet you insist on using it? I don’t believe I’m a bad skeptic because I would throw out bad data. You are just replacing the bad data with data gathered elsewhere. It makes no sense.

        In science, if you know you screwed up a measurement, you don’t just shrug your shoulders and try to “fix” it. You don’t use it.

      • Steven Mosher | June 28, 2014 at 9:09 pm |
        There are two approaches.
        Approach 1. determine APRIORI what constitutes a good station.
        Approach 2 use all the data.

        Approach 2 is better. There can be an attempt to understand the trouble data or get rid of it. What data is perfect? How much do we get rid of and when do we stop? Did we get rid of the UAH data? It was fixed.

        I think some are getting what you’re saying.

      • Steven Mosher

        no jim there is no data added from another place.

        There is an ESTIMATE.

        that estimate says “if this station behaved as we expect, it would have recorded X, rather than Y”

        that expectation is based on

        Latitude
        Altitude
        Season
        past weather
        surrounding weather.

        it is nothing more than that.

        Given the model T = C + W + e

        the temperature at that station should have been X, but Y was recorded.

      • OK, I misunderstood. So you are using data from that station, the history of it, along with some physical parameters – lat, lon, altitude, past weather, and surrounding weather. Wasn’t that surrounding weather derived from measurements of temperature, humidity, etc of the surrounding area?

        I suspect it was, otherwise you would have a totally non-physical quantity representing “surrounding weather.”

        Even so, there is no need to “estimate” what the reading might have been. If there is bad data or no data there, there is no reason to attempt to estimate it.

      • Steven Mosher

        “Even so, there is no need to “estimate” what the reading might have been. If there is bad data or no data there, there is no reason to attempt to estimate it.”

        OF COURSE there is a reason to estimate it.

        How else do you do out-of-sample testing?

        do you think before you write

      • So, Steve, you seem to be saying you use the “estimated” data in some manner to do out of sample testing? It doesn’t make sense.

        Maybe you could elaborate?

        How do you do out of sample testing? You hold back some stations with data that appears to be good, and note I understand picking good stations will be problematic (but so are your estimates), and use those as the out of sample test set.

      • Steven Mosher

        Its simple jim

        You build your expected field with a subsample, say 1,000 stations.
        500? 5,000? Whatever.

        That field is a prediction. The prediction says

        At location x,y,z at time t, we predict the temperature will be 12.34 C

        This field is continuous so I can predict the temperature everywhere.

        Then I take the 35000 stations I have held out and I test how well my prediction works.

        So you ask, why estimate? Well, because science is about making predictions and then testing them.

        as in duh.

        For example, we have 40000 stations. we use them to predict temperature everywhere.

        Now, somebody finds 2000 More stations. stations in places where we only have predictions. Now we get to test the prediction

        That would be science. why do science? why make estimates?
        I dunno, it was sunday and we were bored.
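
        A minimal sketch of that hold-out test, with a simple inverse-distance-weighted field standing in for the real fitted field (the station coordinates and temperatures below are synthetic, for illustration only):

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic network: temperature falls with latitude, plus noise.
        n = 2000
        lat = rng.uniform(25, 50, n)
        lon = rng.uniform(-125, -65, n)
        temp = 30.0 - 0.5 * lat + rng.normal(0, 0.5, n)

        # Build the field from a subsample, hold the rest out for testing.
        fit_idx = rng.choice(n, size=500, replace=False)
        test_idx = np.setdiff1d(np.arange(n), fit_idx)

        def predict(qlat, qlon, lat_fit, lon_fit, temp_fit, power=2.0):
            """Inverse-distance-weighted prediction: a stand-in for the fitted field."""
            d2 = (qlat - lat_fit) ** 2 + (qlon - lon_fit) ** 2
            w = 1.0 / np.maximum(d2, 1e-6) ** (power / 2)
            return np.sum(w * temp_fit) / np.sum(w)

        pred = np.array([predict(lat[i], lon[i], lat[fit_idx], lon[fit_idx], temp[fit_idx])
                         for i in test_idx])
        rmse = np.sqrt(np.mean((pred - temp[test_idx]) ** 2))
        print(f"out-of-sample RMSE on {test_idx.size} held-out stations: {rmse:.2f} C")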

      • jim2, the surface temperature stations can be independently sub-sampled to give estimates for regional temperatures. The closeness of these independent estimates gives a measure of the uncertainty.

      • You’ve dodged the point Steven. You don’t need the estimated data to do out of sample testing.

      • Steven. You are slippery. We were discussing the estimated values that take the place of bad data. I do realize that your method produces a calculated value everywhere. But, again, I was referring to the known, bad data.

        You don’t have to replace the bad data with estimated data. And you can’t use the estimated data as test data, because the estimate comes from your calculation. It would be the same number because it originates from the same calculation.

        To verify your calculation, you have to compare it to good station data.

        I think you are purposely obfuscating and being obtuse, that’s what I think.

    • Steven Mosher

      Read the readme. The Berkeley Earth station readme is quoted in full above; its closing section adds:

      % Reported temperatures are in Celsius and reflect monthly averages. As
      % these files are intended to be summaries for convenience, additional
      % information, including more detailed flagging and metadata, may be
      % available in our whole data set files.
      %

  25. You’ve been in touch with Watts; have you contacted Goddard/Heller? It would only seem fair to do so.

    http://stevengoddard.wordpress.com/2014/06/28/my-rebuttal-to-politifact/

  26. This is Jan 1895 to 2013 TMAX graphed (no gridding … but gridding doesn’t change much)

    The trend is -0.1C/decade raw.

    The trend goes to 0.2C with TOBS

    The trend goes to 0.5 with the rest of the adjustments.

  27. When I was in the service, I spent a few years at the USAF Automated Weather Network based at Carswell AFB, Texas (it has since moved). Few people realize it, but the USAF has had a sophisticated process for capturing weather information on a global scale for many, many decades. This data is then relayed and stored either on the AWN mainframes or sent to the USAF Global Weather Center in Nebraska. The kind of weather reports the AWN captures is of all types (METAR data, US CONUS data, ship and buoy data, pibal and other balloon data, etc.). I’ve always wondered why research scientists don’t tap this historical data. Why wait for bureaucracies to report their climo data when the USAF has it at their fingertips?

    Another bone I have to pick with NOAA and other agencies is their reliance on Max/Min temps and not hourly temps. They are missing out on the entire diurnal period. There would be no need for TOB adjustments, as the entire 24-hour period would be used. The hourly data would be much more accurate and lead to a cleaner statistical average than Min/Max. Personally, I would be much more interested in averaging the longer series of data (24 reports per 24 hours rather than just a less accurate Min/Max).
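
    As a small illustration of the hourly-versus-Min/Max point, the sketch below compares a true 24-hour mean with (Tmin+Tmax)/2 on a synthetic diurnal cycle; the numbers are made up and only show why the two averages can differ:

    import numpy as np

    hours = np.arange(24)
    # Synthetic, slightly asymmetric diurnal cycle in Celsius.
    hourly = (20.0 + 8.0 * np.sin((hours - 9) / 24 * 2 * np.pi)
              + 1.5 * np.sin((hours - 3) / 12 * 2 * np.pi))

    full_mean = hourly.mean()                          # mean of all 24 readings
    minmax_mean = (hourly.min() + hourly.max()) / 2.0  # what Tmin/Tmax gives you

    print(f"24-hour mean:  {full_mean:.2f} C")
    print(f"(Tmin+Tmax)/2: {minmax_mean:.2f} C")
    print(f"difference:    {minmax_mean - full_mean:+.2f} C")
    # With hourly data there is also no time-of-observation ambiguity, which is
    # the thing the TOB adjustment exists to repair in Min/Max records.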

    • The USN and RN have a LOT of data on polar ice thicknesses from the 60’s onward.

    • Steven Mosher

      trend is the same if you use hourly

    • I’ve wondered the same thing in the past. I worked on an Arctic project in the 1990s and we scrounged for data to understand the ice and climate behavior, focusing on the sector from Murmansk’s longitude to the Bering Strait. I’m not a climatologist, so I can’t recall how we managed to obtain it, but we had detailed ice concentration maps I was told had been prepared using USA military satellite shots. The data showed a clear tendency toward improving ice conditions. But I never heard anybody mention the USA military, or the Russian military, as data sources (we also got some Russian data, but their policy has been to scramble the geographic coordinates of their products, so what we got was fairly useless).

  28. I have been considering myself a rather low element on the ivory tower wall: a retired high school teacher.
    I have been following the long-term posts from Steve G., and analyzed the methods, and at first realized, well, he is extrapolating too much from a non-specific method.
    But lately, I have been simply wondering what is wrong with his basic methods; what am I missing?
    In this article, just one Texas site was indicated, but he has been all over the lower 48, and has suggested similar problems worldwide.

    The thing is he, and also Watts, have done a lot of leg work (for lack of a better term) for which the establishment has gotten lazy and defensive.

    I have been a long-term coach, with two basic rules. One is that you are good only when you are working to get better, which means at the end of the day one has to reevaluate everything.
    A large part of the climate community has become too satisfied and full of itself.
    Watts, Zeke et al. in this case were simply lazy. And I am going to throw in a little barb: I have seen too many college/university instructors who, if they had been that lazy in high school, would have been run out of the classroom.
    (definitely not Judy C.)

    • Watts and Zeke were misled on this for reasons that Watts will write about. But they have done a quick turn around. Watts and Zeke don’t get paid for this stuff. Lazy (or worse) in this instance is reserved for NOAA/NCDC, IMO (with a multimillion $ budget).

      • Zeke Hausfather

        To my knowledge I wasn’t misled on anything, apart from a missing flag in the USHCN data file. I can’t speak for Anthony.

      • “But they have done a quick turn around.”

        Really?

        I asked Zeke the same questions as Goddard. Got no answers. Zeke, Mosher, Stokes etc etc etc etc are just spreading confusion around.

        The curse of the temperatures is that they are numbers. Everyone who has a spreadsheet or a calculator starts messing with them and passing themselves off as knowledgeable experts. A lot of them should not be allowed near raw data of any kind, let alone temperature data on which, unfortunately, a lot of important things depend.

        If your data handling and analytic philosophy is broken or non-existent, it does not matter that you are good at math. You should not do science.

      • Steven Mosher

        “I asked Zeke the same questions as Goddard. Got no answers. Zeke, Mosher, Stokes etc etc etc etc are just spreading confusion around.”

        really what confusion am I spreading?

        when every blasted idiot assumed that we used USHCN adjusted, where was your ass to correct them? huh?

        which genius in any thread on the entire fricking web said “wait, Berkeley uses raw data”? where?

        spreading confusion. ya, you’re spreading confusion about who is spreading confusion

    • Isn’t the real problem that the only group seriously auditing the climate is woefully under-resourced?

      We’ve got an industry with 10s of thousands of people, probably something like $1billion in funding for research, publicity, etc.

      And on the other side, we’ve got some guy with a PC.

      If a few guys with a PC can find problems which they appear to be incapable of explaining, then what on earth would a full time team of professional auditors find?

      The real truth is that what skeptics can find with our limited resources is probably the tip of the iceberg.

      Either we need proper resourcing so we can do the job more thoroughly, or we need another group of professional auditors set up.

  29. Judith,

    If you’re interested in this issue, why not go to the source?

    Why be “in email contact” constantly with Watts?

    This is Tony Heller’s (aka Steven Goddard’s) issue. He has the data, he has the analysis.

    Contact him.

    No need for second-hand sources.

    Thanks.

    Kent

    • The reason is this. Many do not consider Goddard to be a trusted source, and NOAA obviously did not respond to all the press surrounding Goddard’s claim. Zeke, Watts, Homewood, and Nielsen-Gammon are scrutinizing and clarifying the situation. And it now looks like NOAA is paying attention.

    • Kent – I got your book. I stopped at the chapters that purported to be a reprint of some letters from Communist Russia. It’s been a while and I don’t remember the exact chapter, but I couldn’t find any reference to back up the authenticity of the letters.

  30. Is Steve Goddard right? …. Suppose it doesn’t matter

    See http://en.m.wikipedia.org/wiki/Escalation_of_commitment

    Any upward revision (or measurement) of recent temperature data is akin to boosting “sunk costs”. You need to keep supporting and maintaining that upward warming trend.

    The hockey stick curve is a fine example. People are awed by the exponential increase in global temperature. Surely it’s a terrible anomaly for the underlying physics as well. What physical process could bring about such a rapid, sustained and accelerating rise in global temperature? It’s even anomalous relative to increasing CO2 concentrations, no?

  31. “Who do I include in the technical skeptical blogosphere?”
    “Tamino” a skeptical blogger? In no shape or form is Grant Foster skeptical about any claims from the “Team” side of things.

    • Sorry, he went after Hansen. He’s nasty. I’ll give anybody that. Sometimes extremely nasty.

      • Tamino makes really basic errors too. One I pointed out to him (out of several I noticed) was using a warming trend change-point starting at 1975 when on another thread he had utterly insisted to me that the 1945 to 1975 period had been artificially cooled by manmade aerosols. An honest man cannot maintain both positions simultaneously.

  32. A fan of *MORE* discourse

    The redoubtable Sou from Bundangawoolarangeera (mistress-editor of the celebrated climate weblog HotWhopper) weighs in with Sou’s encounter with Steve Goddard’s gish-gallop. Sou’s weblog is recommended to STEM students (especially) as a compendium of denialist examples of how *NOT* to do climate-science.

    Judith Curry asks, “Why was Goddard’s original post unconvincing?”

    Judith Curry asks, Sou from Bundangawoolarangeera answers!

    As for problems with temperature data, the consensus assessment — as affirmed by the Berkeley Earth Project, for example — is that (1) the foundations of climate-change theory arise in thermodynamics and radiative transport dynamics, whose robust predictions are affirmed by (2) large-scale datasets and large-scale global circulation models.

    The Berkeley Earth group concluded that the warming trend is real, that over the past 50 years (between the decades of the 1950s and 2000s) the land surface warmed.

    The Berkeley Earth results mirror those obtained from earlier studies carried out by the U.S. National Oceanic and Atmospheric Administration (NOAA), the Hadley Centre, NASA’s Goddard Institute for Space Studies (GISS) Surface Temperature Analysis, and the Climatic Research Unit (CRU) at the University of East Anglia.

    The Berkeley Earth study also found that the urban heat island effect and poor station quality did not bias the results obtained from these earlier studies.

    Conclusion: Sou from Bundangawoolarangeera’s answer to Judith Curry’s question in regard to Steve Goddard’s dubious credibility is substantially supported by the Berkeley Earth Project analysis.

    *EVERYONE* appreciates *THAT* — young STEM students and young voters especially — eh Climate Etc readers?


    • Now if Berkeley Earth says they don’t use the tampered USHCN data, then your point would make some sense.

      • Steven Mosher

        The algorithm is set up to use raw daily FIRST, then raw monthly if no daily is available. For USHCN, last I looked we have raw daily and raw monthly. In the one case Anthony gave me to look at I confirmed we used raw data. That’s one case of course, but given the construction of the algorithm ( ALWAYS USE RAW DAILY FIRST), I would say what I have been saying for the past 2 years or so.

        We use raw data. not adjusted data. ( there are a couple corner cases, where we might.. and hopefully they will be eliminated )

      • … and then you adjust the raw daily.

      • Steven Mosher

        Sunshine, read the frickin readme (it is quoted in full above).

      • Steven Mosher

        sunshine, you STILL CAN’T READ.

        Again, read the readme CAREFULLY (quoted in full above).


      • Steven Mosher

        Luling is not ONE STATION.

        when a station moves it is NO LONGER MEASURING THE SAME THING.
        so the station is split.

        The “adjustment” is our estimate of what WOULD HAVE BEEN RECORDED had the station remained in one spot.

        people who think Luling is one station need to understand this.
        If you move a station from the city to an airport it’s a different station,
        EVEN THOUGH THEY DON’T CHANGE THE NAME.

        When you move it from a rooftop to the ground, IT’S A DIFFERENT STATION.

        You guys are claiming that it’s one station. It’s not.

      • Steven Mosher | June 28, 2014 at 4:30 pm |
        “The mistake is thinking that Luling is one station.
        its not.
        Its one NAME and at least 6 different locations”

        BEST thinks there have been a lot of station moves. But I don’t think it is true. It’s this place. tchannon has lots of detail. NOAA metadata here. It looks like they may be just conscientious about updating the accuracy of their coordinates.

      • Mosher, after adjusting all the raw data, what percentage of the stations show an upward trend (or a lower downward trend) versus a downward trend?

        It should be 50/50, right?

      • sunshinehours1 | June 28, 2014 at 8:20 pm | “Mosher, after adjusting all the raw data, what percentage of the stations show an upward trend (or a lower downward trend) versus a downward trend? It should be 50/50, right?”

        Give up, Sunshine [why does that sound so good?? only joking].

        Mosh says there are 40,000 stations, but if each station has 6 changes a year that might be 110 x 6 x 40,000 stations, close to 2.5 million new stations.
        Still, the more the better.

        On a more serious note, I have pointed this 50/50 point out to him repeatedly over his mates Cowtan and Way’s incredible kriging feats, which always show upward movement in any corrected temperature and fill in perfectly for any assessment of the spots on their kriged maths that they test. Heck, it even works back in time perfectly when applied elsewhere. When you have 100% correctness for temperature estimations and only ever upward movement in the filling-in, you may have a product that sells, but you sure do not have science. Go back and look at it logically, Steve.
        Use your Malarkey indicator.

      • Steven Mosher

        Nick, the NCDC metadata isn’t the only source.
        Since the algorithm doesn’t care if you split where there is no discontinuity (a false move), the choice is to consider all sources of metadata.

        NCDC metadata in the past has been a wreck.

      • Steven Mosher

        sunshine

        THERE IS NO ADJUSTING.

        read my lips.

        you create an EXPECTED READING. thats a prediction.

        And no, you do not expect the difference between the raw and expected to be a 50/50 split.

        why?

        Because all the inhomogeneities introduce false cooling.
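
        One way to examine the 50/50 intuition is a simple tally like the sketch below. It is illustrative only; raw_trends and expected_trends are assumed arrays of per-station trends, however they were computed.

        import numpy as np

        def tally_trend_shifts(raw_trends, expected_trends):
            """Count stations whose expected-series trend sits above or below the
            raw trend. A 50/50 split is only expected if the inhomogeneities are
            unbiased; effects like time-of-observation changes are systematic."""
            diff = np.asarray(expected_trends) - np.asarray(raw_trends)
            return {
                "warmer_than_raw": int(np.sum(diff > 0)),
                "cooler_than_raw": int(np.sum(diff < 0)),
                "unchanged": int(np.sum(diff == 0)),
            }

        # Invented inputs: per-station trends in C/decade.
        raw_trends = np.array([0.10, -0.05, 0.20, 0.00, 0.15])
        expected_trends = np.array([0.12, 0.01, 0.20, 0.05, 0.14])
        print(tally_trend_shifts(raw_trends, expected_trends))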

      • “you create an EXPECTED READING”

        lol

        Andrew

    • Fan

      I am very excited. After a tough day at the office yesterday arguing with Willis (not all of us agree with him you know) I decided to read Sou’s blog for some light relief and lo and behold she has done a demolition job on me! I am so proud to be honoured in this fashion.

      However, it’s difficult to know where to begin with refuting her material, as so much of what she said was plain wrong and amply illustrates her lack of knowledge of climate. When I do get around to replying to her I will be sure to post my response directly to you as well. Not that she wouldn’t allow my post on her blog of course, as I am sure she is a fair-minded person.

      Based on what she regularly writes I am not sure I hold out any great hope that her expose of Goddard will be that accurate but on your recommendation I will go and have a look.

      tonyb

    • Fan

      I have read Sou’s blog. You seem to link to a year-old article, and the one relating to Steve Goddard is from March 2014. It doesn’t seem to discuss the latest developments. I have obviously missed the correct place to go at Sou’s celebrated blog – could you link to it please? Thanks

      tonyb

    • A fan of *MORE* discourse

      Answers to your questions reside in moderation tonyb!

      Whether they will ever appear, not even FOMD knows.

      However, a Google search for “Global Surface Temperature and Homogenisation” (2014) will find what is (to FOMD’s mind) Sou from Bundangawoolarangeera’s exceedingly constructive response to the inchoate “uneasiness”, in regard to data integrity, that Steve Goddard, Anthony Watts, and Judith Curry have been expressing.

      Sou’s analysis makes special reference to the recently-launched International Surface Temperature Initiative (ISTI), as referenced in the article “Concepts for benchmarking of homogenisation algorithm performance on the global scale” (2014), an initiative that is highly commended too by FOMD.

      Thanks for asking, tonyb!


      • Fan

        Thanks for the links. I knew about the principles of the ‘concepts’ document some time ago as I happened to meet the first named scientist on the paper -Kate Willett-at the Met Office a few months ago. I rate her highly. She was the phd student of Phil Jones and has gone on to great things.

        I am not claiming to have had ANY impact into the paper of course. :)

        Whatever we can do to improve the temperature record is to be welcomed. However, as I said earlier I wouldn’t bet the house on its accuracy prior to properly sited digital stations.

        The idea that we have any sort of GLOBAL land temperature accurate to tenths of a degree dating to 1860 or 1880 is a fallacy. Combining it with an alleged global ocean temperature to 1860 or so turns it into a fantasy

        tonyb

    • “the foundations of climate-change theory arise in thermodynamics”

      Except when they say that the deep ocean beyond 700m down is where the supposed ‘missing heat’ went, at which point they have thrown thermodynamics out of the window.

  33. Steven Mosher

    I will Note this.

    Anthony mailed me one of the USHCN stations with estimated data
    and surmised that BEST would have used this estimated data.

    Well, No.

    1. The estimated data is in USHCN adjusted.
    2. We don’t use adjusted data when raw data is available.

    The major sources are GHCN daily RAW and GSOD daily. Other raw monthly sources get used when there is no daily. If there is no raw daily and no raw monthly, then and only then would we use adjusted data. And in those handful of cases we are drawing from HadCRUT… and looking to toss that data entirely.

    to repeat. there are around 14 sources of data.

    In the first step “duplicate” stations are identified. at the end of this there are about 40000 unique stations

    Then for every station the data is collated.

    1. we use raw daily for the station If there is raw daily data
    2. if there is no raw daily data, we use raw monthly data.
    3. if there is no raw daily and no raw monthly, then we would use “adjusted” monthly.

    (Hopefully R. Rohde and I will be publishing a data paper in the near future,
    complete with stats on every source etc. It’s a big undertaking.)

    For USHCN there is raw daily and raw monthly.

    It’s in the code, guys.
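
    The source-priority rule described above could be sketched roughly as follows (a paraphrase of the stated logic, not the actual BEST code; the source names below are invented for illustration):

    def select_station_source(available):
        """Pick the preferred series for one station.

        `available` maps source names to series (or None if absent). The
        preference order follows the comment above: raw daily, then raw
        monthly, and only then adjusted monthly.
        """
        for source in ("raw_daily", "raw_monthly", "adjusted_monthly"):
            series = available.get(source)
            if series:
                return source, series
        return None, None

    # Invented USHCN-like station: raw daily exists, so it wins and the
    # adjusted (estimated/"zombie") monthly values are never touched.
    station = {"raw_daily": [12.1, 12.3, 11.9],
               "raw_monthly": [12.1],
               "adjusted_monthly": [12.4]}
    print(select_station_source(station))  # ('raw_daily', [12.1, 12.3, 11.9])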

    • I wish I had the money to properly check what you are saying. Because from where I sit, I can’t trust any of the data sets until I see some group who haven’t made money from promoting the global warming scare producing them.

      And basically, that means taking it out of academia and taking it away from people like the UK Met Office who are no longer credible in this area.

      And yes! I’ve got no idea who we’ve got left.

      The alternative is to employ people specifically to do auditing of the data and methodology.

      What is not acceptable is the current nonsense.

      • Why do scientists get to make the big bets anyway?

        “In addition Fermi personally offered to take wagers among the top physicists and military present on whether the atmosphere would ignite, and if so whether it would destroy just the state, or incinerate the entire planet. This last result had been previously calculated to be almost impossible, although for a while it had caused some of the scientists some anxiety.”

      • ScottishSceptic

        I spend some time at the Met Office carrying out research and have met a number of their scientists. I would say there is a lot more scepticism there than I had previously realised, but the very top layer of management, e.g. Julia Slingo et al., are very much wedded to their cause and are unlikely to sanction any overt dissent. In that respect there is a political/activist problem rather than one at the scientific level.
        tonyb

      • Climatereason. “A lot more skepticism at Met Office” (summary)

        When I went to see Judith at the Royal Society, the Met Office staff I met were largely in agreement with us skeptics about their own models. They largely agreed that there were no trends showing extremes etc.

        As a result, I assumed that it would not be long before they came out and told the public this.

        I think that was two years ago, and to be frank, when I saw the IPCC increasing their certainty of their headline figure I thought they were clinically insane or criminally fraudulent.

        The evidence shows me that the Met Office are being dishonest – certainly to the public, but perhaps even to themselves.

        They cannot be trusted on climate.

      • Steven Mosher

        [Quotes the Berkeley Earth station readme in full; see above.]

      • So were there any skeptics there who admitted HadCrut4 is a pile of crap next to Gistemp? Lol.

      • Don Monfort

        Frank Lansner has done some interesting unpaid work:

        http://wattsupwiththat.com/2014/01/06/the-original-temperatures-project/

      • Steven Mosher

        well, Anthony confirmed that we don’t use zombie data.
        Good enough for you?

      • Don Monfort

        Did Frank say that you used zombie data? My recollection is that he was pointing out that there is a lot of extant raw data that you didn’t use. He gave specifics. You dismissed Frank with a wave of your hand with this BS: “Its interesting to see Phil Jones approach ressurrected at WUWT.”

        You want to fault Frank for working hard to get data? It is either useful data or it isn’t. It has nothing to do with Jones. You failed to address Frank’s criticisms of BEST. I am sure you could do a lot better. Just say you don’t care.

      • Don

        Frank is carrying out a very useful project.

        I went to the Met Office on his behalf to try to gather some data for him. It was nowhere near as straightforward as I had hoped. I expect Frank will update his findings at some point, but it’s an uphill struggle to pull together a comprehensive database with the limited resources sceptics have.

        Tonyb

      • Steven Mosher

        Don

        its simple.

        1. Frank has never provided me with a source of his new data.
        2. Last time I looked at one of his posts he TOTALLY BOTCHED
        the download of data. There was no “thank you, Steve.”
        3. Many people email me data or data sources: from a dude who
        sent me his grandmother’s diary to a researcher who uncovered
        a long forgotten record.

        The procedure is simple.

        1. EMAIL ME THE FRICKING LINK TO THE DATA
        2. OR, have the owner of the data SUBMIT IT TO ISTI

        that way the data can be preserved and maintained

        But don’t expect me to read a post by a known data klutz
        (sunshine hours did similar shit) and collect data from a blog post.

        Nobody remembers how we ridiculed Hansen for using some data that wasn’t in a proper archive… why? Because we were ridiculing somebody they hated.

      • Don Monfort

        Steven,

        Frank Lansner wrote:

        “For all countries analysed so far, the BEST national data is nearly identical with the coastal trends and the Ocean Air Affected (“OAA”) locations. The data from the Ocean Air Shelter (“OAS”) stations appears to be completely ignored by the BEST project country after country after country. Just as we saw for HISTALP.”

        “BEST adjustments leads to the ignoring of the cold trended stations, the stations from valleys (OAS areas). So is it true when BEST claim not to use adjusted data?”

        Do you have anything to say about that other than making the ludicrous claim that Frank is using Phil Jones’ methods, that someone sent you his grandma’s diary, that sunshinehours did whatever, that Frank made a mistake downloading something once, that nobody remembers how you ridiculed Hansen and other irrelevancies?

        BEST either ignored the cool trended stations not affected by ocean air and favored the warm trended coastal and ocean air affected stations, or not. Do you need the data that Frank obtained from public sources to know what BEST did?

        Frank is trying to liberate data, not hide it. You are supposed to be the Pacman of data. The more the better. Right? You should be encouraging Frank. Have you asked Frank for his data?

        I have asked you about this, because it seems to me that Frank may be onto something significant. I respect your knowledge and honest opinions. I don’t see why you have an attitude on this one. As I was reading Frank’s post, I was anticipating a reasoned, factual rebuttal from Mosher. “Poor Frank,” I said to myself.

        I have been disappointed. I’ll drop it now.

      • It would be interesting to see Lansner’s own assessment of surface-land global or US warming since 1950. So far this is missing. Are we debating hundredths of a degree here? Without knowing that, there is no way to judge whether this is just petty nitpicking or if it affects the larger debate.

      • Don Monfort

        Frank is doing it on his own, jimmy. It might take him a while to get around to the U.S. But why don’t you read up on the considerable work that he has already done? And reserve further comment, until you have a clue.

      • Jimd

        Don is right, it is well worth spending ten minutes reading franks work.

        As I can testify, trying to unearth the information he wanted was more difficult than I expected. I spent three hours at the Met Office looking for the British information he wanted and eventually spoke to the librarian to try to discover where the missing information might be located.

        If this difficulty is replicated, it could be years before Frank gets round to the US. He deserves some credit and funding for what he is doing.

        Tonyb

      • It is hard to tell what Lansner thinks he is doing. His so-called ocean-sheltered stations are not distinguishable from continental interiors, where the warming is currently fastest, but he says it is cooling at the ones he looked at in Europe. Also, could he be making the same mistake as Goddard with the pre-war raw data that he is emphasizing so much when comparing the 30s to now? His conclusions kind of look like Goddard’s for the US.

      • As far as I can tell, reading between the words, Lansner has gone in with the thesis that all the warming is from the oceans, so that stations that are “sheltered” in some sense should not be warming. However he seems only interested in coastal sheltered stations to prove his thesis, not continental interiors where the warming is lately much faster than the oceans, and I don’t know why he ignored them, except that these may disprove his thesis.

      • Jimd

        It was an interesting exercise, as I located the reports made at the time from a station, the report once it had made it into the monthly regional records, again when it went into the Met Office year book, and again when there was an analysis of the decade. The figures continually changed, which made it difficult to know what the real temperature was.

        I don’t think there is any conspiracy going on, but undoubtedly there are lots of different versions of the same record. They varied by several degrees. Perhaps you can clarify why?

        Tonyb

      • It is possible for individual stations to change perhaps sometimes, but making groups of stations change in concert is much harder, which is why temperature records like CRUTEM4 tend to average over multiple stations in a 5-degree square to get regional values, because that removes the noise of local station variations.
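
        As a rough sketch of that gridding idea, the snippet below averages station anomalies within 5-degree boxes; it is illustrative only and omits the anomaly baselining and area weighting that CRUTEM4 actually uses. The station values are invented.

        import numpy as np

        def grid_average(lats, lons, anomalies, cell=5.0):
            """Average station anomalies within each cell x cell degree box, so
            individual station noise is damped before regional values are formed."""
            boxes = {}
            for lat, lon, a in zip(lats, lons, anomalies):
                key = (float(np.floor(lat / cell) * cell),
                       float(np.floor(lon / cell) * cell))
                boxes.setdefault(key, []).append(a)
            return {key: float(np.mean(vals)) for key, vals in boxes.items()}

        # Invented stations: three fall in one 5-degree box, one in another.
        lats = [51.2, 52.8, 53.9, 40.1]
        lons = [-1.5, -3.2, -0.9, -105.0]
        anoms = [0.4, 0.7, 0.1, -0.2]
        print(grid_average(lats, lons, anoms))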

      • Jimd

        I wish I had thought of it then, but having gone to the effort of locating the original records I should have written down the different versions of the daily temperatures for a specific month and year; then I could have produced a variety of graphs for the same period, each of which would have looked quite different. This is the Met Office, don’t forget; it’s not some badly kept record in a third world country.

        I was looking at pre-1980 figures, which is when most stations seem to have changed to automatic and the nature of the data changes.

        Tonyb

      • Don Monfort

        It’s not that hard to tell what Frank is doing, jimmy. Maybe your problem is that you are reading between the words.

        “For each country analysed I have made comparison between national temperature trends as published by the “BEST” project and then the OAA and OAS temperature trends from original data. I want to know if BEST data use both the warm trended OAA data and the more cold trended OAS data. In addition, I have made comparisons of ECA&D data versus original for many countries and also HISTALP data versus original.
        More info can be found on:
        http://hidethedecline.eu/pages/posts/original-temperatures-introduction-267.php

        The detailed explanation of what Frank is doing is all there, jimmy. If you have read it and you still don’t get it, you should probably give up. Of course, your lack of understanding wouldn’t preclude you from borrowing the old ‘someone sent me his grandma’s diary’ criticism.

      • Don Monfort

        Here is Frank talking about what he found in the Alps data, jimmy. Is that far enough inland for you?:

        “Fig6
        However, the valley stations in best possible shelter against ocean air (OAS) have all been adjusted by ZAMG to show warm temperature trends.

        From Original data we can see, that the cold trended stations (OAS) are in fact in a comfortable majority in the Alpine area and I believe ZAMG should explain themselves.
        More examples of HISTALP/ZAMG adjustments from many countries:

        http://hidethedecline.eu/pages/posts/original-temperatures-histalp-264.php

        More on original Alpine temperature data:
        http://hidethedecline.eu/pages/posts/original-temperatures-the-alps-273.php

      • What is the significance he sees for his ocean shelter effect? I find that a strange idea from the outset. No one else talks about it. What about continental interiors? Are they “sheltered” too, or why doesn’t he count them?

      • Don Monfort

        You should move on, jimmy.

  34. ==> “There is most definitely no tribalism among this group.”

    Heh. Nice how Judith gets to determine where there is and isn’t tribalism.

    Just curious, Judith – are lukewarmers as a group immune from tribalism? If not, could you point to some tribalism that you’ve seen from lukewarmers? But if you can’t find any tribalism among lukewarmers, what do you think makes them a breed apart – humans who aren’t subject to identity-related biases? How is it that, as a group, they are not subject to the kinds of biases one finds among (all?) other groups?

    Don’t forget your recent comments about SKS – related to their lack of “skepticism” if they can’t provide examples of criticizing the likes of Mann.

    • Joshua, you don’t get it. I would hardly put Zeke and Watts in the same box in terms of agreeing on very much, but they are working together to clarify this particular issue. There is no ‘group’ of lukewarmers that these individuals would self-identify with.

      • ==> ” there is no ‘group’ of lukewarmers that these individuals would self identify with

        Of course Zeke and Watts aren’t in the same group. That is unrelated to my point – which is that I doubt that you could identify anything you’d consider tribalism among lukewarmers – which not coincidentally is the group to which you belong (with the understanding that all the labels used in the climate wars are manipulated conveniently depending on whose ox is being gored or whose bias is being confirmed).

        Go to Lucia’s you will find a tribe. A tribe of lukewarmers, who partake in identity-aggressive and identity-protective behaviors.

      • I don’t go to Lucia’s very often; so I mustn’t be a lukewarmer, by your logic. One of the key tenets of tribalism is that an individual has to agree that they are a member of a tribe. I am not a member of your lukewarmer tribe.

      • ==> “I don’t go to Lucia’s very often; so I mustn’t be a lukewarmer, by your logic”

        ???

        I’m not suggesting that lukewarmers only exist at Lucia’s.

        ==> “One of the key tenets of tribalism is that an individual has to agree that they are member of a tribe.”

        “Realists” (for the most part) don’t think that they are members of a tribe. They think that they are practicing valid science. They think that they are defending science against a tribe of “skeptics.”

        “Skeptics” (for the most part) don’t think that they are members of a tribe. They think that they are practicing valid science and defending valid science against a tribe of “realists.”

        Can you point to ANY examples of what you’d consider tribalism from ANY lukewarmer?

        ==> :” I am not a member of your lukewarmer tribe.”

        Seems to me that one of the main tenets of the climate wars is that “tribes” are defined by others. Look at the ridiculous arguments about whether or not Muller is a member of the “skeptic” tribe. That argument takes place, as we have seen on quite a few Climate Etc. threads, irrespective of how he defines himself. People are absolutely convinced of his tribal orientation without his self-identification being considered in the least.

        Your whole construct of tribalism is subjectively defined. You define all these terms in ways that confirm your biases, and argue by assertion accordingly. You’re not alone in that – but it isn’t a scientific approach to the discussion.

      • Joshua

        As Marx-the funny one- almost said ‘I wouldn’t belong to a tribe that would have me as a member.’

        Sceptics are often highly individualistic with differing beliefs on cause and effect of co2 and its impact on our ever changing climate. It is at once their strength and their weakness. I profoundly disagree with many of my fellow sceptics. For example I have had a long-running argument with Willis and continually say that Monckton, Heartland and the GWPF do not represent me or my views, which is not to say that I NEVER agree with them or ALWAYS disagree with climate alarmists.

        Many like me could only loosely be described as belonging to a tribe, inasmuch as we are outside the big climate tent of consensus looking in at the inhabitants curiously and, in my case, perfectly happy to talk to them.

        tonyb

      • tony –

        ==> “Sceptics are often highly individualistic with differing beliefs on cause and effect of co2 and its impact on our ever changing climate. It is at once their strength and their weakness.”

        I see no particular reason to believe that “skeptics,” as a group, are quantifiably more individualistic than anyone else. Show me some evidence to that effect. Argument by assertion doesn’t impress me.

      • Joshua

        Comfortably pitched on the flat is a large well appointed climate tent in which are housed numerous people with broadly similar views that agree with the consensus.

        On the slope opposite are hundreds of mostly one-person raggedy tents, with a few larger ones dotted amongst them, that house the sceptics. I said that sceptics were more individualistic in our climate views. I didn’t say we were more individualistic in other ways.

        Tonyb

      • Tony, you might carve out an exception for the US Senate, which seems to have no room for a third tribe, let alone a raggedy tent.

    • I wonder how Big Oil is going to figure out who to pay?

    • I would also note that all this to-do about tribalism is misplaced. I’m sure tribalism is a product of evolution, and therefore has a survival advantage.

      • Agree that it is misplaced but disagree that it is a product of evolution. Tribalism is an interesting subject. It is a very effective form of government that is independent of the state; the state has no control over the tribes. The tribal leader assumes and maintains the position of leader from the tribe and must be diligent to the needs of the tribe. There is much more, some of it relevant to the middle east and possibly the security of the world ( the Anbar Awakening was all about understanding tribal dynamics).

      • One could say the tribe, in the primitive sense which you are using, IS a form of government.

    • Steven Mosher

      Judith isn’t a lukewarmer. She doesn’t fit the definition. See our tribal laws

    • tony –

      ==> “I said that sceptics were more individualistic in our climate views . ”

      We can see social/ideological group associations correlated with both broad orientations towards climate views (as well as with lukewarmism). There are exceptions, of course, but the larger pattern is very strong. On the whole, “skeptics” are just as strongly associated with ideology and social orientation as are “realists” – which I would say is what is most directly related to the question of tribalism.

      I’d say that if, perhaps it is true that “skeptics” are more individualistic in climate views, that diversity is subsumed by a greater group orientation. But even there, I suspect that you are projecting from an anecdotal feeling about a tiny subset of “skeptics” (those who participate actively on a few climate blogs) to the larger group (people who identify with “skepticism” related to climate change).

      Sure, it may be that the reasons for doubting the impact (or magnitude of the impact) of ACO2 on the climate are more numerous than the single view that ACO2 is extremely likely to have caused > 50% of recent warming, but that then becomes a circular observation of diversity. It tells us nothing related to tribalism. I’d guess that we can see at a site like this one just as much group identity-aggression and group identity-defense (the behaviors associated with tribalism) as we’d find at SKS. Thread after thread at this site is full of invective directed at the “other” (“realists”). Just look at the tribalism evidence on basically any thread that drifts towards the political side of the debate. The group orientation emerges from perhaps a more diverse orientation towards climate change when you restrict the topic. The climate war is a proxy battle.

      Maybe I’m wrong about that, but the matter could be resolved with evidence. Argument by assertion doesn’t cut it. And it isn’t very skeptical.

    • Ah, yes, the next project for BEST will be to accurately classify tribes. There will be estimates. There will be geographic infilling. It will be of intense interest to people such as Joshua, the denizens of SkS, etc.

      In blogs where people write in shorthand, some political error will always be found. Hence those such as Joshua will always be able to find a reason to point a finger at those such as Judith. But both Joshua’s questions and the answers are only of interest to those such as Joshua.

      I’m probably the one who most frequently identifies as a lukewarmer in the English language climate blogosphere, precisely because I prefer a shorthand label to a lengthy explanation. I could construct an estimate of the Lukewarmer tribe, of course. I could use arbitrary classifications of each blogger/frequent commenter’s position on various issues and come up with a dataset. I could even proffer it with margins of error. But as it would only be used as a weapon by the alarmist brigade, I fail to see the point.

      Joshua’s post here (as with so many of his others) is part of his own ongoing effort to ‘prove’ that opposition to the consensus is fraught with examples of human frailty, of which tribalism I suppose is one.

      Opposition to the consensus does exhibit signs of tribalism, Joshua. But most of those signs are revealed to the Climate Elect such as yourself from fevered over-examination of casual phraseology in the comments sections of blogs such as this. Not all, just most.

      And I will give you the common language explanation for these tribal tendencies. It is a direct result of sustained attacks on their intelligence, good faith, scientific output and political affiliation.

      People such as yourself constructed a wall that became an enclosure and shoved your opponents into that kraal. You counseled against debate, censored writings that appeared outside the enclosure and created the tribal insult of ‘denier’ to refer to us in all debate and conversation.

      It should not surprise you if the inmates talk amongst themselves as members of a class or even tribe. It is the result of your hard work.

      • Proud to be in the same tribe as the man who wrote this.

      • Kraal’s good but I like beyond the pale and beating drums.
        =================

      • Richard: +1

      • > It is the result of your hard work.

        Joshua made Denizens do it.
        Another lukewarming guilt trip

      • Naw, you and Joshua are just useful idiots. But speaking of guilt trips.
        ==========

      • Ah, yes, the lukewarmers persuaded people to verbally assault skeptics for 10 years. I confess. It was me and those like me.

      • Tom –

        ==> “Joshua’s post here (as with so many of his others) is part of his own ongoing effort to ‘prove’ that opposition to the consensus is fraught with examples of human frailty, of which tribalism I suppose is one.”

        Prove? No, I have no need to “prove” it. It stands to reason. My attempts, feeble as they are, are directed at asking “skeptics” to accept the obvious and to be accountable for their “frailties” – tribalism among them.

        ==> “But most of those signs are revealed to the Climate Elect such as yourself”

        Heh. “Climate Elect.” Nice example of what I’m talking about. What is the “climate Elect,” and what makes me part of it? Something that I believe? What do I believe, Tom? Do you know? You often attack me in these threads – all kinds of personal insults you throw my way. Do you even know what I believe, Tom? If so, do tell. And again, do tell how my beliefs (or my actions) make me part of some “Climate Elect.”

        This should be good. I’ve asked you similar questions before, and you haven’t answered. I firmly expect that you’ll do so this time. Given the high drama of your comment – you seem quite motivated, as it were.

        ==> “: It is a direct result of sustained attacks on their intelligence, good faith, scientific output and political affiliation. ”

        That’s very dramatic, Tom. Dramatic indeed. So do tell, is your argument that two wrongs make a right? Is your argument that “They did it first?” Is your argument that you should label me and insult me because of some slights in insults directed your way by someone else? Is your argument that your actions should be based on guilt-by-association? Is your argument that justifying your behaviors, or those of some other “skeptic” or “lukewarmer” is being accountable? Do tell.

        ==> “People such as yourself constructed…

        Ah. There we go again. “People such as yourself…”

        Reminds me of this:

        http://www.urbandictionary.com/define.php?term=You+People

        ==> “You counseled against debate, censored writings that appeared outside the enclosure and created the tribal insult of ‘denier’ to refer to us in all debate and conversation…

        Really? Did I do that? Or is that a truly magnificent example of tribalism on your part? I think it is the latter. Show some evidence otherwise. Show some evidence that I have ever done as you described…

        I love me some unintentional irony, Tom. And you don’t disappoint.

  35. Amazing – sometimes there are faults in equipment.

  36. I agree with Anthony’s take on the creation of a temperature series from the weather stations. Drop the ones that have issues, find ones that have the VERY BEST records and use them to create the average.

    Of course, we might not have enough of those globally to get a good average global temperature beyond a certain point back in time, but still, that would actually be the VERY BEST global measurement we can get.

    All these Herculean computing tricks apparently can and do lead to spurious output.
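
    A minimal sketch of what such a “best stations only” average might look like (the station names, flags and numbers below are invented for illustration, not any agency’s actual network): screen out stations that fail a siting check or are badly incomplete, then average the survivors as anomalies so that dropping a warm or cold station does not shift the result.

    ```python
    import numpy as np

    # Hypothetical inputs: a few stations, each with a siting flag and a short
    # series of annual mean temperatures (NaN marks missing years).
    stations = {
        "A": {"well_sited": True,  "temps": np.array([14.1, 14.3, np.nan, 14.5])},
        "B": {"well_sited": False, "temps": np.array([15.0, 15.2, 15.1, 15.4])},
        "C": {"well_sited": True,  "temps": np.array([12.9, 13.0, 13.1, 13.2])},
    }

    def vetted_average(stations, min_completeness=0.75):
        """Average only stations that pass the siting check and are nearly complete."""
        kept = []
        for rec in stations.values():
            temps = rec["temps"]
            completeness = np.isfinite(temps).mean()
            if rec["well_sited"] and completeness >= min_completeness:
                # Use each station's anomaly from its own mean so that dropping
                # a warm or cold station does not shift the network average.
                kept.append(temps - np.nanmean(temps))
        return np.nanmean(np.vstack(kept), axis=0)

    print(vetted_average(stations))
    ```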

    • Steven Mosher

      The problem is creating an a priori set of rules for a good site.

    • Granted, that. But with 50 stations, examination becomes a manageable task.

    • The problem is largely that those people who have “charge” of the global temperature speak and act like greenpeace activists.

      In a survey done by none other than Lewandowsky, the data (but not the conclusion) show that those who believe in global warming almost double their estimate of the future trend when told it is global warming, whilst skeptics did not change their prediction irrespective of what they were told the graph showed.

      THIS IS THE PROBLEM!

      Global warming believers are highly gullible and change their perception of the data depending what they believe it should show.

      In contrast, skeptics are highly immune to altering their perception of the data just because they are told it shows something.

      Based on this survey by one of the most arch alarmists, who would you suggest should compile the data? The skeptic engineers who just want good data they can trust, or the alarmist academics who can’t even admit that skeptics are more trustworthy when interpreting data?

  37. Reblogged this on ScottishSceptic and commented:
    I reblogged Steve Goddard’s post when I saw it (with the proviso I had not checked it). Judith now has a “there’s no smoke … without rubbing two skeptics together” type article.

  38. When the “adjustments” can account for a high percentage of the “global warming” one has to be very suspect. Steve Goddard may be wrong on some details, but he has done us all a service by forcing government scientists out of the woodwork. I only hope the debate is open, and the public informed.

  39. George Turner

    I find the automatic adjustments to decades-old data more problematic. A few weeks ago WUWT had a post arguing that the flaw was in correcting discontinuities to retain a climatic trend when the discontinuities are caused by correcting a station that’s drifted warm due to the slow accumulation of site problems, ranging from trees growing up and blocking wind, to fading paint on the Stevenson screen, to occasional “improvements” to the station like adding decorative rocks around it, to the usual UHI. This tends to make all the station data into periodic saw-tooth waveforms, and if you remove all the down tics because they don’t match neighboring asynchronous upward-sloping trends, you’ve turned all the sawtooth waveforms into giant triangles. Since you can’t make a large adjustment to the present, their routines automatically readjust the past.

    But the data isn’t just a graph, it represents real-world macroscopic measurements, and the implication of the adjustment procedure is that temperature data can ripple backwards through time and change the climate of the past over entire regions. That violates the big rule in physics which says macroscopic changes in the present can’t cause macroscopic changes in the past.

    So in the comments, Zeke Hausfather from Berkeley Earth, who is on the new homogenization benchmarking working group, said the benchmarks they develop will have to be able to test homogenization algorithms with sawtooth waveforms.
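
    A toy illustration of the sawtooth concern (not any agency’s actual algorithm; the drift rate, reset interval and break threshold below are invented): if a station drifts warm and is periodically reset, and a naive break-adjustment shifts all earlier data to remove only the downward resets, a spurious warming trend appears even though the true climate is flat.

    ```python
    import numpy as np

    years = np.arange(1950, 2011)
    true_climate = np.zeros_like(years, dtype=float)   # assume no real trend

    # Hypothetical station: siting problems add ~0.03 C/yr of warm drift,
    # and every 15 years maintenance resets the station back to truth.
    drift = 0.03 * (np.arange(len(years)) % 15)
    raw = true_climate + drift

    # Naive "homogenization": treat each downward reset as a break and shift
    # all earlier data so the series is continuous at the break.
    adjusted = raw.copy()
    for i in range(1, len(adjusted)):
        step = adjusted[i] - adjusted[i - 1]
        if step < -0.2:                      # a suspiciously large drop
            adjusted[:i] += step             # "adjust the past" to match the present

    trend = np.polyfit(years, adjusted, 1)[0] * 10
    print(f"Spurious trend after adjustment: {trend:.2f} C/decade")
    ```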

    • I said a long time ago that the real problem is that these academics try to measure global temperature on the cheap.

      The result is we get appallingly bad data with a host of problems because they just don’t get the idea that the job of measurement starts with getting rid of all the site problems.

      YOU CANNOT AND SHOULD NOT modify the temperature to accommodate poor sites. Instead you should ensure the sites are good so that one does not have to change the data coming out.

      If that costs money – then that is what it takes.

      The answer is not to do it on the cheap.

      And when I say “money” I fully expect the bill to be in the $billions.

      I’m not talking about sending Phil Jones on a course to learn Excel.

    • George Turner

      Well, the problem is that no matter how many billions you spend, you can’t spend it back in the 1920’s to “improve” old raw data. The paper records are what they are, and yet there are frequent adjustments, backed up with all sorts of mathematical justifications, that require us to posit that the original observer was afflicted with double vision, was legally blind, or didn’t know how to read a thermometer. Yet that was the only person who was there to actually record the data, so it’s not like there’s a better witness we’ve called to the stand to testify about the temperature on the evening of April 28, 1927.

      I don’t know of anywhere else in physics where you can just go back and adjust all the data that was carefully collected, nor do I see many asking why, if the temperature was X, everyone in the region kept writing down X-2.5.

      So now we have a case where the consensus is in pretty good agreement on the surface temperature, with a few outliers who happen to be all the people who actually measured the period temperatures.

      It might be similar to the case of the Millikan oil drop experiment that measured the charge of an electron, which he got wrong by about one percent because he had an incorrect value for the air’s viscosity. As Feynman noted, subsequent experimenters only slowly shifted the number to the correct value, possibly because they were afraid to buck the consensus, or possibly because they really doubted their procedures that produced a number different from the accepted (and incorrect) value, but wouldn’t re-examine the procedures that produced incorrect results closer to the consensus.

  40. The COOP network was established in 1891 mostly for agricultural purposes. Yes, it has undergone changes in instrumentation, data collection procedures, observation times, station movement, etc. but it is one of the few long-term terrestrial national networks we can use to assess climate.

    It is managed by the NWS, not NCDC, and it is chronically underfunded. It was never designed, over 100 years ago, to detect climate change.

    If we keep complaining about the way the data are handled, then congress will be pleased to take away all the funding and we can use climate generators to create the climate we want to verify any model we create.

    If some of those who spend hours and hours on the Climate Etc. blog would write to their representatives about data network funding problems, that would be time well spent.

    • Philbert, it is a bit rich of you to talk of “under-funding” when a lot of skeptics like me work for free (I’ve not been paid for six years).

      And what has been the result? It is that we’ve been abused, insulted, called deniers and subjected to every other kind of attack under the sun.

      I would personally support funding – but only if we are not funding another group of academics who spend their time writing to the press spreading climate scares or insulting people like me.

    • It is managed by the NWS, not NCDC, and it is chronically underfunded.

      That is because they spend too much of their budget on trying to prove Alarmist Climate Change that no actual data supports. They must buy huge computers and hire lots of people to generate scary output to hide the data that is well inside the bounds of the past ten thousand years.

  41. I would also like to add that if Heller is blocked from commenting on your comments about him, you should give him the platform here to respond to them.

  42. “Steve Goddard” has a history of getting things badly wrong, like this 2008 article in the Register, where he had to retract:

    http://www.theregister.co.uk/2008/08/15/goddard_arctic_ice_mystery/

    • But the question is also: does he have a history of getting things right where others have not dared to comment?

    • Yes David, Goddard really annoys people. And the people he annoys are using every trick in the book to smear him.

      But at the core on this issue he is right.

      But he has fewer retractions than the multi-billion-dollar IPCC has.

      • Because “Steve Goddard” never smears people, does he?

        He’s annoying only because he repeatedly makes big claims that are wrong. And not just wrong, but badly wrong, embarrassingly wrong. But he’s useful to people who don’t care about the science, as long as he gives them an answer they want.

      • Thanks for making my point David. The kooks are coming out of the woodwork to try and exact revenge on real/imagined slights.

        Try to pretend to be interested in the science for once.

        “He’s annoying only because he repeatedly makes big claims that are wrong. And not just wrong, but badly wrong, embarrassingly wrong. But he’s useful to people who don’t care about the science, as long as he gives them an answer they want.”

        The British Met Office scientists published predictions in the Journal Science, back in 2007.

        http://www.sciencemag.org/content/317/5839/796

        “…predict further warming during the coming decade, with the year 2014 predicted to be 0.30° ± 0.21°C [5 to 95% confidence interval (CI)] warmer than the observed value for 2004. Furthermore, at least half of the years after 2009 are predicted to be warmer than 1998, the warmest year currently on record.“ –

        So why is Goddard ‘embarrassingly wrong. But he’s useful to people who don’t care about the science’, but not Doug M. Smith, Stephen Cusack, Andrew W. Colman, Chris K. Folland, Glen R. Harris and James M. Murphy?

        Moreover, taxpayers don’t pay Goddard to provide scientifically backed analysis, but they pay for the Met Office.

      • Goddard is a smart-ass but I often find him funny even if I disagree with him. He does seem to be a bit bull-headed sometimes but I think he does make valuable contributions.

      • @DM: The British Met Office scientists published predictions in the Journal Science, back in 2007.

        http://www.sciencemag.org/content/317/5839/796

        “…predict further warming during the coming decade, with the year 2014 predicted to be 0.30° ± 0.21°C [5 to 95% confidence interval (CI)] warmer than the observed value for 2004.

        Not following, Doc. You didn’t say whether the subsequent data after 2007 proved the Met Office right or wrong. Which is it, and by how much?

        For definiteness let’s go with HadCRUT4 in making that call.

    • catweazle666

      But unlike your side, at least he had the good grace to retract and apologise, whereas all you lot do is reach for your (publicly funded) lawyers.

      The day a Warmist scientist comes even close to that, we’ll be making progress.

      I’m not holding my breath.

      • My publicly funded lawyers? Who??

      • catweazle666

        “My publicly funded lawyers? Who??”

        I was thinking more of the likes of the Hokey Team, Mann in particular.

        But the fact is, I’ve never seen a single Warmist admit to error or retract a claim, no matter how egregious and discredited. Just look at how doggedly the infamous “Hockey Stick” is defended, for example.

  43. We, on the Skeptical side, are skeptical of the Consensus Side and we are Skeptical of each other. Of course Skeptics disagree with each other. We are, correctly, even skeptical of ourselves.

  44. catweazle666

    and the comments at Goddard’s blog can be pretty crackpotty.

    Yes, Judith, so they can!

    There’s a lot of it about, isn’t there?

    • The hater nikFromNYC spent a lot of time at blogs trying to claim Goddard was a kook for saying the CIA drugged and brainwashed people.

      MKultra was real.

      “The published evidence indicates that Project MKULTRA involved the use of many methodologies to manipulate individual mental states and alter brain functions, including the surreptitious administration of drugs and other chemicals, sensory deprivation, isolation, and verbal and sexual abuse.

      Project MKULTRA was first brought to wide public attention in 1975 by the U.S. Congress, through investigations by the Church Committee, and by a presidential commission known as the Rockefeller Commission. Investigative efforts were hampered by the fact that CIA Director Richard Helms ordered all MKULTRA files destroyed in 1973; the Church Committee and Rockefeller Commission investigations relied on the sworn testimony of direct participants and on the relatively small number of documents that survived Helms’ destruction order.”

      https://www.princeton.edu/~achaney/tmve/wiki100k/docs/Project_MKULTRA.html

      • And Rockefeller money was funding the scientist/university that was working on this.

        How convenient that the MKULTRA scheme was investigated by a Rockefeller…

      • Perception is reality and it’s not hate behind my warning the skeptical crowd from a big city perspective that such conspiracy theories are exactly what skeptics need to strongly shun as a practical strategy now that only the left wing of politics remains unconverted. Singing to the choir is useless when the loudest choir members are trying to excuse and thus promote conspiracy theories about school shootings justified by an old Cold War era project that briefly tested psychedelic drugs for spy versus spy projects. It’s bloody obvious to *normal* people who swing elections that skeptics have little interest in joining civil society, so the Gore smear machine carries on quite successfully. Goddard claims school shootings are a CIA plot and you cheerlead him on?! It’s madness and an utter PR disaster. We already have proof of a scam but instead of focusing on it, fanaticism reigns. That proof is the bladeless input data of the latest hockey stick sensation:

        Your attitude in the mix means we lose. You attack those of us who live closest to dupes, who are strongly trying to fill you in on what is needed now to help turn the tide with the only remaining demographic that isn’t yet *already* convinced: left leaning urban professionals and left leaning scientists. It’s worthless to even try when skeptics fail to shun a conspiracy theory site chock full of unmoderated crackpots. So far Goddard plays the role of Al Gore’s negative stereotype of a skeptic only too perfectly.

        MKULTRA was a slapstick example of bungling overreach since it not only turned potential enemy soldiers into enhanced killing machines but helped usher in the rebellious psychedelic movement.

        When I correctly pointed out the late-data-reporting artifact of Goddard’s two-year-old adjustment hockey stick, he egged on a crew in labeling me crazy. His overzealous promotion of zombie stations as purposeful fraud has already resulted in a serious skeptic-bashing news cycle. His future output should be tempered, not further inflamed. His fanaticism isolates him from technical feedback for too long at a time.

  45. Ben Vorlich

    How much of the reluctance of Watts, Zeke, Nick Stokes and the rest to consider in detail what Steven Goddard had been saying for months was NIH (Not Invented Here), as opposed to the fact he’d been “wrong” before?

    Just to pre-empt your “Watts etc. are better than that”: we all fall prey to NIH and to protecting our corner from someone who has found something in an area we regard as our bailiwick.

    • That is not fair to any of the parties; you may not agree with their positions, but all three are honorable men. Estimating a change in the Earth’s average temperature is non-trivial and full of pitfalls. Anyone who has looked at the problem can spot quite a few of the obvious pitfalls, and it so happens that Goddard has fallen into a few.
      To generate a ‘global’ ‘average’ ‘temperature’ based on a set of records that are less than ideal means making choices; many of these choices demand judgement, and judgements introduce an element of bias. You can go down the decision tree and make the judgement call for warmer, warmer, warmer or go the other way.
      One thing is for sure: you can examine the absolute, raw, daily max/min temperatures at any locale and there is no statistical difference between the past and the present.
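
      For what it’s worth, a claim like that is easy to check mechanically. A minimal sketch (a synthetic series stands in for a real station file, and a two-sample KS test is just one possible choice of test):

      ```python
      import numpy as np
      from scipy.stats import ks_2samp

      # Hypothetical daily Tmax record for one station (placeholder for real data
      # read from a file): a seasonal cycle plus noise over 60 years.
      rng = np.random.default_rng(1)
      days = np.arange(365 * 60)
      tmax = 25 + 8 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 3, days.size)

      # Compare the distribution of the early half against the late half.
      early, late = tmax[: tmax.size // 2], tmax[tmax.size // 2 :]
      stat, p = ks_2samp(early, late)
      print(f"KS statistic {stat:.3f}, p-value {p:.2f}")
      # A large p-value would be consistent with "no statistical difference";
      # the test should of course be run on actual station data, not this toy.
      ```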

      • Well said. Goddard’s method was wrong for infilling. He should stand down on that point.
        But his simple “this is what they said then, this is what they say now (about the same data)” is irrefutable. All one needs to use is the Wayback Machine and a graphical overlay. No arguments about TOBS, gridding, infilling, station paint… Any jury would convict any of the major climate agencies of ‘perjury’.
        And that has been documented over time for NCDC, NASA, Hadcrut, Aus BOM, and on and on. Multiple times, multiple places, multiple ways.

      • Bob Ludwick

        @ DocMartyn

        “To generate a ‘global’ ‘average’ ‘temperature’ based on a set of records that are less than ideal means making choices, many of these choices demand judgement and judgements introduce an element of bias.”

        It also guarantees that you will arrive at a number, defined as the Temperature of the Earth for Year X, that is meaningless for any real-world purpose other than justifying political action. (See “…judgements introduce an element of bias” above.)

      • @BL: It also guarantees that you will arrive at a number, defined as the Temperature of the Earth for Year X, that is meaningless for any real world purpose other than justifying political action.

        Quite right. 20 °C hotter next year is meaningless for any real world purpose, other than the conveniences of not having to dress as warmly in winter and boiled eggs in 2.5 minutes instead of 3.

  46. A fan of *MORE* discourse

    Please allow me to thank Steven Mosher, for concretely allaying Judith Curry’s concerns in regard to the Berkeley Science team’s analysis methods.

    Good on `yah, Steven Mosher!

    Please allow me also to commend to Climate Etc readers yet another outstanding temperature-related post by Sou from Bundangawoolarangeera.

    This one is titled Global Surface Temperature and Homogenisation, in which Sou draws our attention to a sustained multinational effort to address precisely the data integrity issues that concern Steve Goddard, Anthony Watts, and Judith Curry:

    Concepts for benchmarking
    of homogenisation algorithm performance
    on the global scale

    The International Surface Temperature Initiative (ISTI) is striving towards substantively improving our ability to robustly understand historical land surface air temperature change at all scales. A key recently completed first step has been collating all available records into a comprehensive open access, traceable and version-controlled databank.

    The crucial next step is to maximise the value of the collated data through a robust international framework of benchmarking and assessment for product intercomparison and uncertainty estimation.

    The value of this framework is critically dependent upon the number of groups taking part and so we strongly advocate involvement in the benchmarking exercise from as many data analyst groups as possible to make the best use of this substantial effort.

    Good on `yah, Sou from Bundangawoolarangeera, and Steven Mosher, and the entire Berkeley Science team, and now too the International Surface Temperature Initiative … for all of you working so hard — and so effectively — to concretely allay the inchoate “uneasiness” that Steve Goddard, Anthony Watts, and Judith Curry have been expressing.

    *EVERYONE* appreciates — young scientists especially! — the immense value of this work in affirming the confluent observational and theoretical integrity of the scientific community’s consensus understanding of climate change.

    \scriptstyle\rule[2.25ex]{0.01pt}{0.01pt}\,\boldsymbol{\overset{\scriptstyle\circ\wedge\circ}{\smile}\,\heartsuit\,{\displaystyle\text{\bfseries!!!}}\,\heartsuit\,\overset{\scriptstyle\circ\wedge\circ}{\smile}}\ \rule[-0.25ex]{0.01pt}{0.01pt}

  47. I expect Obama’s climate scientists to be every bit as honest and competent as his IRS, his DOJ, his EPA, his Commerce Dept, his State Dept, and his Veterans Administration.

    [Your 'shovel-ready' job should be created any day now. So long as his stuff is what you are ready to shovel. Ask the doctor you got to keep.]

    If honest climate scientists (assuming there are any) object to being lumped in with the liars and frauds who dominate the Obama Left, they need to stop acting like them and start cleaning out the stables.

  48. Bob Ludwick

    Well, the good news for the consensus is that the above proves once and for all that ‘Global Warming’ is indeed anthropogenic.

    • “Well, the good news for the consensus is that the above proves once and for all that ‘Global Warming’ is indeed anthropogenic.”

      Nice

    • Well, the good news for the consensus is that the above proves once and for all that ‘Global Warming’ is indeed anthropogenic.

      Skeptics aren’t human? Interesting.

  49. All I really want is a data set which I can use to draw a graph without putting a huge “this isn’t data I trust” kind of comment all over it.

    I want a dataset that doesn’t change every time I view it – and usually upjusted.

    I want a dataset done by people who demand quality and don’t try to do things on the cheap by fudging the data.

    I want a dataset, that has controlled revisions.

    I want a dataset which is audited by people who are known to be ruthlessly critical.

    In short, I want a dataset such that I know, if I and a team of 100 Goddards spent all our time looking for problems, we wouldn’t find any.

  50. This was not the hottest May ever! What a bunch of numb nuts.

    From the UAH data:
    5/14: 0.33
    5/10: 0.46
    5/98: 0.56

  51. ‘right answer, wrong method equals bad science’

    ~Wegman

    Looks like Mann and Heller should commiserate over a beer.

  52. “Who all rely on the data prepared by his bunch of scientists at NOAA.”

    And in the same way, the 97% of climate scientists who say the world is warming get their data prepared from NOAA, GISS, or HADCRUT.

    “I have seen this happen before, of course. We should have been warned by the CFC/ozone affair because the corruption of science in that was so bad that something like 80% of the measurements being made during that time were either faked, or incompetently done. ”

    James Lovelock

    http://www.theguardian.com/environment/blog/2010/mar/29/james-lovelock

  53. Temperature is such a simple finite thing. It is amazing how complex people can make it.

    • I had an infamous encounter with a “parrot incubator”. As you suggest, temperature is simple – it’s just finding a way to get an average value for a real life space that is so difficult.

      PS, what is the “average” value of an incubator with holes through which air must go, in which there is a chick, heaters, and heat loss?

      And what do you do when the chick eats the temperature sensor?

  54. It is probably little consolation to the temperature-record skeptics, or even irrelevant to them, that independent UAH satellite data and HADCRUT4 parallel each other since 1983, both with trends near 0.16 C per decade.

    http://www.woodfortrees.org/plot/hadcrut4gl/from:1983/trend/offset:-0.3/plot/uah/from:1983/trend
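
    For anyone who wants to reproduce such a trend figure rather than rely on the plot, an ordinary least-squares fit over the monthly anomalies is all that is involved. A minimal sketch, with a synthetic series standing in for the downloaded HadCRUT4 or UAH data:

    ```python
    import numpy as np

    def decadal_trend(anomalies, months_per_year=12):
        """Ordinary least-squares trend of a monthly anomaly series, in C/decade."""
        t = np.arange(len(anomalies)) / months_per_year   # time in years
        slope = np.polyfit(t, anomalies, 1)[0]             # C per year
        return slope * 10

    # Hypothetical stand-in for a real downloaded series (monthly anomalies
    # from 1983 onward); actual data would be loaded from a file instead.
    rng = np.random.default_rng(0)
    months = 372                                   # ~31 years of monthly values
    fake = 0.016 * np.arange(months) / 12 + rng.normal(0, 0.1, months)

    print(f"{decadal_trend(fake):.2f} C/decade")   # ~0.16 for this synthetic series
    ```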

  55. It suddenly struck me… most CO2-based global warming activists, due to their investment in their position, actually WANT the world to be warming – badly enough to beat their heads bloody against a wall in an effort to prove it so.

    From a humanitarian perspective, what is up with that??

  56. Heller/Goddard’s comments are still blocked.

  57. A fan of *MORE* discourse

    A note of consensus: appreciation, respect, and thanks are extended to Judith Curry …

    Judith Curry’s commitment “I have tried to make this [Climate Etc] a safe place for debate by a broad spectrum of people.”

    Yes.

    The norms and discourse that Climate Etc supports — and Judith Curry’s personal example sustains — are themselves a significant contribution to 21st century climate-change research.

    Thank you — from *EVERYONE* — for a forum well-run, Judith Curry!

    \scriptstyle\rule[2.25ex]{0.01pt}{0.01pt}\,\boldsymbol{\overset{\scriptstyle\circ\wedge\circ}{\smile}\,\heartsuit\,{\displaystyle\text{\bfseries!!!}}\,\heartsuit\,\overset{\scriptstyle\circ\wedge\circ}{\smile}}\ \rule[-0.25ex]{0.01pt}{0.01pt}

  58. Suggest Heller re-log into his WordPress account; I’ve seen those kinds of messages pop up once in a while when I posted on WUWT. Thought I was banned, wasn’t ‘banned’ after all, just some quirk with WordPress ‘security’ and cross or inter-Wordpress posting … (yes, I have a couple obscure WP blogs)

    Not everything is a conspiracy, and not everything happens as a result of ‘ill will’.


  59. The irony in all this is that the temperature data is of diminishing importance and increasing liability for the AGW community. The forecasts are tied to a monotonically increasing global temperature forced by accumulating CO2 emissions. Departures from the linear upward trend (both negative and positive) discredit the GCMs and the GHG “law” which underlies them. They indicate that the models are poor predictors.

    The greater irony is that the dominant threat to the (C)AGW hypothesis is revealed by the hockey stick argument along with the melting glaciers and polar ice.

    The hockey stick and ice melts are strong evidence for feeding confirmation bias. “Look how bad it is! … Far worse than our models predicted.” To be sure, glaciers and polar seas can melt quickly … but even fast melt for big glaciers and ice caps is on the order of hundreds to thousands of years. With CO2 rising substantially only in the past 50 years, there is a problem of time scales. The tipping-point argument ignores the preparation which has gone into the priming. Even a butterfly wing beat is sufficient to upset this apple cart. Neat trick to blame the butterfly for the catastrophe.

    One signal concerning the earth’s climate is becoming clear and reliable. The ocean’s charging and discharging of heat is enormous and on a timescale of a century or greater. The paper which claims “Krakatoa lives” (ref on req) demonstrates this clearly via an ensemble of OGCMs.

    The karma argument – that heat which gets hidden now gets released later – is all very well… So easy to forget that the cycle is 100+ years (could be 500+ years too… what is the upper bound?).

    100+ years of hidden, unknown natural variability is going to overshadow and blow the GCM predictions to pieces, no matter what they might be or however well the GCMs prove to perform. By the time what goes around comes around, we will all be dead and it will most likely be beside the point regardless. An overly poor prediction delivered too late to be meaningful. That’s the hard reality here.

  60. I don’t agree with this post. I’ll focus on the central point of disagreement:

    Further, I think there was an element of ‘boy who cried wolf’ – Goddard has been wrong before, and the comments at Goddard’s blog can be pretty crackpotty. However, the main point is that this group is rapidly self-correcting – the self-correcting function in the skeptical technical blogosphere seems to be more effective (and certainly faster) than for establishment climate science.

    It’d be a big point in favor of the skeptical blogosphere if this were true. Rapid correction of mistakes is a great thing. It’s also non-existent. The skeptical blogosphere does not quickly correct mistakes skeptics make. Steven Goddard has made tons of stupid arguments in the past. Some have even made it into the media. There’s been little to no pressure to correct those. If anything, the pressure has been toward getting people to look the other way.

    Goddard has been making the same stupid arguments for years. If this self-correction were quick, it would have happened prior to the media calling Goddard out. The fact people respond when the media practically forces them to hardly deserves much credit. You can’t even say they deserve credit for not stonewalling like people on the other “side” do in response to criticisms. Even the people criticizing Goddard mostly do so while saying things like:

    In responding to Goddard’s post, Zeke, Nick Stokes (Moyhu) and Watts may have missed the real story. They focused on their previous criticism of Goddard and missed his main point.

    Or other things which downplay the stupidity of the arguments that were widely promoted. It’s not stonewalling, but it’s just as unhelpful. All it is is another tactic to avoid calling out problems in a direct manner, and it sabotages discussions every bit as much as anything done by the other “side.”

    In my experience, skeptics as a whole aren’t self-correcting. They are every bit as guilty of willful blindness as anybody else. They just like to claim otherwise. There are a handful of exceptions, but by and large, their reaction to any criticism depends entirely upon who and what is being criticized.

    • In my experience, skeptics as a whole aren’t self-correcting. They are every bit as guilty of willful blindness as anybody else. They just like to claim otherwise.

      Agreed. Nevertheless skeptics have thin arguments and a small constituency of expert players. This small mass has low inertia. As such it corrects more rapidly and easily than a body with large expert inertia.

      • Raving, that might be true, but if so, some other factor is counterbalancing it. If I had to guess, I’d say skeptics have so little pressure to change it doesn’t matter if they’d change more easily. They don’t have the incentive to.

        That’s just a guess though. All I know for sure is skeptics aren’t skeptical. A person who criticizes mainstream views with arguments that are wrong, stupid and dishonest won’t be corrected and/or scoffed at (unless they’re of a certain type, primarily skydragon arguments). They’ll either be ignored and allowed to continue without rebuttal, or they’ll be praised and heralded as a hero. You can provide clear-cut documentation proving the person is wrong and even dishonest, and pretty much nobody will care. I’ve gone through the process multiple times. Even the few people who will speak up when the problem is obvious have to be goaded into it.

        The worst part is skeptics aren’t just unskeptical. It’s telling to compare the reactions I’ve gotten when criticizing Richard Tol. Richard Tol criticized a skeptical paper (by Ludecke et al) here, and I pointed out he was saying incredibly stupid things. Skeptics cheered. Later, Richard Tol criticized Cook et al, a mainstream paper, and I pointed out he was saying incredibly stupid things. Skeptics jeered.

        In other words, skeptics are as abusive toward people who are actually skeptical as anyone else is. There are a small number of exceptions, but for the most part, skeptics act as tribally as warmists.

      • Brandon, I am making a distinction between people who make technical posts in the blogosphere and people who merely comment. Yes, there is a lot of tribal jeering and cheering in the comments, but the people doing the technical work are much more objective for the most part. Not every stupid thing that gets posted in a technical analysis on a blog is worth commenting on or debunking. If it makes it into the MSM, then people should definitely take a closer look.

      • Self-correcting would be if Goddard admitted he was wrong. This almost never happens when a skeptic is caught with a Pants on Fire judgement. Instead it is more of a “look squirrel” response, as with this Texas station with an instrument problem, completely different from the original story. This is the more typical pattern.

      • The problems in TX are widespread, stay tuned.

      • Steven Mosher

        The mistake is thinking that Luling is one station.
        It’s not.

        It’s one NAME and at least 6 different locations.

        Why don’t people get these basics?

      • Judith, I get that, but the distinction doesn’t help your case. The only distinction between the two groups you describe is the overtness of their tribalism. The jeering and cheering you describe is a reflection of how the bloggers you praise behave. Commenters take their cues from bloggers.

        On the issue of what should be addressed, I agree not every random thing said on these blogs needs to be addressed. That’s irrelevant though. None of the examples I have in mind fit what you describe. For example, this isn’t the first time Steven Goddard has made it into the media. The first time I examined anything Goddard said, it was regarding work of his that had been promoted on live television. The primary difference is the MSM didn’t respond to it.

        In other examples, the bias was demonstrated on blogs where the issues were brought up. You can say you don’t need to respond/debunk every stupid thing people say, but people can’t say that while promoting those stupid things.

        About the only example that might fit is how people responded to me accusing Richard Tol of abusing the IPCC process to completely rewrite one section (and drastically edit a second) of the report to change its conclusions to fit his views, giving focus almost entirely to his own work, all done outside the normal IPCC process. I’ll admit there was no MSM coverage of that or blog discussions about it, but I think the issue was clearly important enough to merit at least some attention.

        Instead, the most anyone did is one blogger made a casual post referring to my claims. Most shrugged their shoulders. A couple flat-out said they wouldn’t cover it. One went so far as to ask me to stop talking about it. That would never have happened if it had been Michael Mann, Keith Briffa or any of a hundred other people. But Richard Tol? It was made clear. Hands off him.

        It doesn’t matter if a case is indisputable. It doesn’t matter if a case clearly proves a point people have been making for years (the IPCC process is susceptible to corruption), or that it’s the only proof offered for the newest IPCC Report. People like the guy who did it, and they like the things he says, so they won’t speak up.

      • Don Monfort

        True, but trivial: skeptics act as tribally as warmists.

      • bob droege | June 28, 2014 at 11:55 am |
        Weedwhackers and Troybuilts causing a confirmed bias to temperature measurements in the cool direction.

        Perhaps some more pvc pipe is in order, we must protect the cables for data integrity’s sake.
        -—

        Am ignorant about the situation but isn’t this a systematic sort of failure which pertains further afield than just 6 stations in Texas? More upward readjustments seem imminent.

      • Brandon, Curry, Jim, … Agreed agreed agreed.

        Sabotaging my own argument: people will go further for the sake of pride than of money. It means that the tail distribution is almost everything. Not even scientists are good at swallowing their pride.

        Way too many unstated implicit assumptions in this discussion .. Example: to put it kindly, skeptics can be flyweight experts.

        I would counter with another undeclared/unsubstantiated assumption … “why is it left up to marginal flyweight skeptics to provide critique?” Subsequently ridiculing them for being flyweight experts is particularly unfair … etc., etc.

        I just suppose that climate change is a settled science and only crackpots would be so foolish as to push back against the status quo. Scientists can be as objective as they desire. They are going to get eaten alive by their colleagues for bucking the trend.

        Any estimate of the number of heavyweight climate change skeptics? Enough to count on the fingers of one hand, or even fewer?

      • Raving,

        Looks to me like the temperature stations can get damaged and produce unreliable readings. Detection and correction seem a significant issue.

        If you identify a cool bias in the instrument, it seems to me you have to correct for that. That seems to be the case here: the instrument in question was reading cool due to the damage.

        We will see what happens.

      • @SM: It [Luling] is one NAME and at least 6 different locations. why dont people get these basics

        Not following. Are you saying that data was being reported from all 6 locations simultaneously all under the same name, or merely that one station was moving around?

        There’s a big difference between concurrency and sequentiality.

      • @DM: True, but trivial:skeptics act as tribally as warmists.

        That would improve their influence with the swing voters. 50 raggedy tents each with a different reason why the warmist tribe is wrong are hard to focus on.

      • @raving: people will go further for the sake of pride than of money.

        Even if no one has ever heard of them?

        The question of fame vs. money is answered unanimously at

        https://answers.yahoo.com/question/index?qid=20130201031837AA1fPWf

        All six respondents preferred rich over famous.

    • Brandon said;

      ‘In my experience, skeptics as a whole aren’t self-correcting. They are every bit as guilty of willful blindness as anybody else. They just like to claim otherwise. There are a handful of exceptions, but by far and large, their reaction to any criticism depends entirely upon who and what is being criticized.’

      Agreed, well said. The difference of course is that sceptics’ bad science, when it occurs, isn’t affecting Govt policy.
      tonyb

      • I don’t know Tony. Judith’s post was mid-morning and Brandon has corrected it by mid-afternoon. That is pretty quick.

      • climatereason, I mostly agree about that. Not entirely though. While there’s no direct effect, bad skeptical arguments do make their way into the public view where they influence people, including policy makers. It’s not as direct or as significant, but it is real.

      • mwgrant, I’m flattered!

        But it’s only chance I even commented. In part, I’m tired of this global warming debate. I feel like I should create a pseudonym who would be a “warmist.” He would post all the criticisms I have about people on the “skeptic” side, and he’d be embraced and celebrated by “warmists.” Brandon Shollenberger would then be free to post criticisms only of “warmists,” and he’d get tons of praise for it from the “skeptics.” I bet it’d work.

        The other part is I didn’t even intend to get on blogs today. It’s my birthday, and I meant to take this weekend off of frustrating things.

      • Happy Birthday, Brandon. Enjoy it.

        Note: your idea might be quite sane. I also understand being tired of the debate – it really is kinda like WW I … everybody is in the trenches and the interesting science is in no-man’s land. Again, Happy B-day.

      • Thanks! I’m afraid my commenting will probably be limited for the rest of today and tomorrow. I’m having a movie night tonight which starts in ~15 minutes, and tomorrow I’m running a dart tournament for a large part of the day. Most of my activity will be from my phone.

        As for my idea, I pretty much gave up on it when I realized there are too many neutral, technical issues I’d like to examine. Neither name could do that effectively because both would be associated with bias. I’d have to create a third name to handle it, and three identities would be too silly. It would be funny to run a debate between all three though. I can imagine the neutral name raising an issue while the other two duke it out. It’d be like my internal conversations.

        I think I may have just ruled out the idea I’m sane :P

      • Many happy returns of the sanity.
        ================

      • Seconded and thirded.

      • Happy birthday, Brandon!

        I meant to take this weekend off of frustrating things.

        Hi, my name is Vaughan, and I comment at CE as an escape from non-climate things.

  61. The problems with the existing temperature data sets are intractable. Not because of any adjustments or data fiddling, but because:
    i) Not enough global data coverage over a long enough time span
    ii) Problems with siting, calibration and maintenance of many surface stations (especially those in the developing world) that make a mockery of the claimed accuracy of the data that does exist.

    It follows that the biggest problems with the data occur long before Mosher or Goddard or anyone else gets their hands on it. It simply isn’t good enough to accurately understand our climate or predict what it will do next. I’m not saying that there isn’t worthwhile climate science that needs doing, just that trying to do the whole thing in one go is beyond us at present.

    Having said this, Goddard is a crank who damages the anti-Alarmist cause with every post he makes. But at least a large number of sceptics call him out on this. How many believers in damaging climate change publicly attack Mann, Gleick, Lewandowsky, etc? Come on, say your piece.

    • But he is RIGHT, so you are WRONG; why can’t you admit it? Anthony Watts has.

      • Goddard is not ‘RIGHT’. In his manic attacks on the US surface stations record he happened to stumble across a genuine error. Go and read the details of the thread at WUWT. He’s still a crank who posts up stuff that deserves to stay in the darkest recesses of the internet.

      • Steven Mosher

        Goddard used the wrong tool (absolute averaging) to find a problem.
        His method exaggerates the problem, and he attributed bad motives.

        So, thanks for being an asshole.

        guys mail me problem cases all the time. The problems get fixed with a nice thank you.

      • Name calling is not a good substitute for science.

      • So Mr Mosher you are now resorting to calling me an asshole.

        That shows just how upset you are by this whole thing.

        Well just wait until BEST gets torn apart as well, because it is coming.
        You keep bragging about how good it is, well the Summaries are absolute CRAP with exactly the same adjustment induced upward trends that bear no relationship to the actual data.
        Your time will come.

  62. I have asked this question on another blog, but I didn’t get an answer.

    My question is about the Max/Min and the TOBS adjustments.

    What is the need for the Max/Min determination? If I take all the daily readings and create a mean average, what effect does the high and low have on the average?

    I live in Oklahoma and I look at the Stillwater daily readings from the two stations on the USCRN. They calculate an average mean temperature along with the Max/Min readings. Sometimes I average the Max/Min just to see if it is different from the mean, and it is usually very close and quite often the same.

    So my question is: what is the importance and necessity of the Max/Min temperatures to the temperatures of the climate?

    • To compare new electronic equipment readings with the old Max Min Mercury Thermometer Readings.
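
      A small illustration of why the two measures usually agree and when they don’t (the diurnal cycle below is synthetic): for a smooth, symmetric daily cycle, (Tmax+Tmin)/2 equals the true daily mean, but a brief afternoon spike moves the max/min mean more than the all-hours mean, which is one reason the max/min convention, and the time the register is reset (TOBS), need care.

      ```python
      import numpy as np

      # Hypothetical hourly temperatures for one day: a smooth diurnal cycle
      # peaking mid-afternoon.
      hours = np.arange(24)
      hourly = 20 + 8 * np.sin(2 * np.pi * (hours - 9) / 24)

      true_mean = hourly.mean()                          # mean of all 24 readings
      maxmin_mean = (hourly.max() + hourly.min()) / 2    # what a max/min thermometer gives
      print(f"symmetric day: all-hours mean {true_mean:.2f}, (Tmax+Tmin)/2 {maxmin_mean:.2f}")

      # Add a brief afternoon spike: the max moves a lot, the all-hours mean barely.
      skewed = hourly + 1.5 * np.exp(-((hours - 15) ** 2) / 8)
      print(f"skewed day:    all-hours mean {skewed.mean():.2f}, "
            f"(Tmax+Tmin)/2 {(skewed.max() + skewed.min()) / 2:.2f}")
      ```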

  63. “Now, with Homewood’s explanation/clarification, NOAA really needs to respond.”

    I did a post here on Luling, which clearly shows there was an inhomogeneity there. A commenter, mesoman, has actually worked on that site. He explained that there was a faulty cable which caused low readings to be transmitted. The fault was fixed on Jan 18, 2014.

    So what do we have here? The NOAA software correctly detected that there was a problem, and quarantined the data, replacing it with data from neighbours. Exactly what it should do. And a cacophony from skeptics, hollering that data had been “altered”.

    • Nick, the problems in TX are widespread, according to John Nielsen-Gammon. This issue is still playing itself out . . .

      • There may be widespread cable maintenance problems, as mesoman describes here. That isn’t a climate science issue. All scientists can do is to analyse the data and try to detect and deal with problems when they arise. And that is exactly what happened successfully here.

      • I assume the reference to John N-G is to the email quoted by WUWT. John did not say there were widespread problems. He listed 13 stations that, like Luling, have had some data ruled inadmissible in recent times. On my count, Texas has 188 stations in USHCN.

      • This is not one station. 5-10% of U.S. stations, apparently, and then there is the large number of ‘zombie stations’

      • 188??

        For May 2014:

        raw 35
        tob 35
        FLs.52i 49

        “Now it’s well past your bedtime in Australia. Maybe that is why you aren’t thinking clearly.”
        When it’s afternoon in California, the sun is over the Pacific somewhere. It’s 8am here.

        Zombie stations are an arithmetical device required to keep the absolute temperature system of USHCN working. I don’t think that is a good system; anomalies are better, but USHCN does make it work. With absolutes, if stations leave an average, their differing climatologies affect the results. That was behind Goddard’s famous spike. If you keep the stations in the network, you avoid that. The set of climatologies in the average is constant, and the anomaly component, via infilling, is averaged just as you would if you dropped the station.
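        A toy illustration of that climatology point (two invented stations and made-up numbers, not USHCN data): give a warm site and a cold site the same underlying anomaly, drop the cold one mid-record, and the average of absolutes jumps while the average of anomalies does not.

        ```python
        # Why station dropout distorts an absolute-temperature average but not an anomaly average.
        import numpy as np

        years = np.arange(2000, 2010)
        anomaly = 0.02 * (years - 2000)        # both sites warm by 0.02 C/yr
        warm_site = 20.0 + anomaly             # low-elevation, warm climatology
        cold_site = 0.0 + anomaly              # high-elevation, cold climatology

        reporting = years < 2005               # the cold site stops reporting in 2005

        abs_avg = np.where(reporting, (warm_site + cold_site) / 2, warm_site)
        anom_avg = np.where(reporting, (anomaly + anomaly) / 2, anomaly)   # dropout has no effect

        print(np.round(abs_avg, 2))   # ~10 C, then a spurious ~10 C jump at the dropout
        print(np.round(anom_avg, 2))  # smooth 0.00 ... 0.18, no spike
        ```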

      • Oops, sorry. The first para was from my last comment at WUWT. Cut and paste slip.

      • That was from Tmax.

      • Steven Mosher

        Judith

        it is 5-10% of USHCN, or 60-120 stations.

        there are 20000+ stations in the US.

        In fact you can totally pitch ALL of USHCN out the window if you like.

        answer doesn't change.

        for the life of me I don't know why NCDC persists with the USHCN collection.

        GHCN Daily is all you need.

      • Easier to make malleable men of the machines, moshe.
        ===============

      • Steven Mosher

        kim you have no idea how maddening this is.

        Perhaps Zeke and I should do a series of posts called “step by step”,
        where we go through every blasted step in mind-numbing detail.

        In the end you can bet that people will shout, “look, a squirrel”,

        because nobody wants to understand. I used to think people did actually care about using the best methods to find out the limits of our knowledge.

        naive.

        the sad truth is something quite different.

        it gives citizen science a bad name, and depresses the hell out of me.

        I'm going to chop wood and carry water now.

      • nobody wants to understand

        Not quite true but dialogue has been coarsened to the extent that it often looks that way. Have to clean that up too. Hard road.

      • A C Osborn | June 28, 2014 at 6:38 pm |
        “Nick, as I have said on WUWT, San Antonia one of the near Stations was also using estimated data for that period, did you check them all to see if they were real or estimated?”

        FILNET in USHCN is the last major step. None of the earlier steps use infilled data, nor does FILNET itself.

        But as to checking, you can do that yourself here. Just choose a month and ask it to show stations. It is using GHCN unadjusted, which for the US is USHCN, and it shows only stations reporting (and what they said).

      • Nick and Steven,

        Some appreciate what you do.

        thanks

      • Steven Mosher

        Nick is a pure joy to work with. Humble. Fast. And wicked smart. Really wicked smart.

        We disagree elsewhere about shit that doesn’t matter.

      • Nick Stokes | June 28, 2014 at 5:21 pm |
        “On my count, Texas has 188 stations in USHCN.”

        I counted the wrong list. Texas has 188 Coop stations, but only 49 are in USHCN.

    • So Nick, the blue station is ‘clearly’ an outlier, but the two red stations to the northeast aren’t.

    • The detailed steps would make a good CE post :)

      • Steven Mosher

        I will see.

        But there will be ground rules

        One thing zeke and I have talked about is showing incrementals

        1. using raw only with no slicing and no QA
        2. what happens when you QA
        3. what happens when you slice
        a) changing slice parameters.
        4. The whole enchilada.

        It will be hella boring, which means comments will go off the rails into
        Muller is not a skeptic, CO2 is not a GHG, blah blah blah.

        people won't like my rules

      • I like this idea, and people can then look at the impact of these changes on individual stations and regions to see whether it makes sense

      • Mosh

        Your 12.13.

        Sounds good. How far back could you go, and how much reliance could you place on the raw data?

        Camuffo took 7 million euros to spend 2 years looking at 7 historic European stations. How does 1 million euros for each station you examine sound? Just send the bill to Judith.

        tonyb

      • Steven Mosher

        Look at it this way. It is the prototype core for a valuable addition to BEST QA documentation. Such calculation packages, routine in environmental projects, can actually be quite useful. No doubt it would be tedious at times, and most likely unrewarding, but that is the nature of those things.

        In any case thank you for your sustained effort.

      • While I see no reason to supply “estimated” values for a given station that has some bad data, your algorithm could be valuable for pinpointing stations that merit a visit from a technician.

        I was shocked to find that technical difficulties are not made part of the station record. I mean knowing if a station is actually working properly or not seems to be critical information.

        Finally, I suppose there could be a dominant failure mode such that, when the temperature is corrected or those bad readings are dropped, the temperature trend goes up. But that needs to be demonstrated.

        Otherwise, someone has to explain why processing the data makes the temperature trend go up.

      • Steven Mosher

        Judith I will see if I can get Zeke and Robert to agree.

        Presently, Robert (with Zeke and me assisting) is compiling a first-of-its-kind database and is 100% committed to getting a paper done. I’m almost done with the lit review so time will free up, and I’m working on the data paper along with my other projects of collecting out-of-sample test data and more satellite work. Zeke has another paper (maybe two) in the works.

        Obviously some of the work overlaps for me especially with respect to the data paper and I do have a new volunteer who is helping.

        Assuming I can get agreement and some sort of schedule (compute time is a killer with only one box), it would probably be at least 10 posts,
        maybe more.

      • Thanks Steve. Another key issue is local/regional differences between NOAA’s adjustment and Berkeley Earth adjusted. I’ve looked at the example D’Aleo provided of Maine, with NOAA raw and NOAA adj. Zeke then sent me the link to the Berkeley version of same. Pretty major differences. I try to follow the details of the adjustment processes, and they seem reasonable; then when I step back and look at the differences between raw and adj, and NOAA vs Berkeley, I wonder if all this makes sense, when the adjustments are the size of the signal we are looking to detect.

    • “because nobody wants to understand”

      Mosher is the same guy who says science has always been messy and problematic and driven by flawed human behavior.

      Now he’s whining about it. He should take his own medicine like a man.

      Hard.

      Andrew

      • Steven Mosher

        You don’t want to learn.
        Yes, it's messy; all the more reason for you not to drop random turds.

  64. You have totally exaggerated what Dr. Curry said. For what purpose? A single exaggeration/lie from a person is worthy of total distrust of that person!

  65. While I sometimes get the impression that climate science suffers from some kind of compulsive thinking that where data are missing, there must be ways to model them, I see the need and justification for adjusting and homogenising data. However, isn’t it fair to ask, as people on the “warm” side often do when confronted with skeptical arguments: Can it be backed up with peer reviewed articles?

    So if temperature records have been adjusted, estimated and homogenised – where is it documented and when was it peer reviewed?

    • Steven Mosher

      See the papers. Read the code

      • Mr Mosher: In my work, which isn’t related to climatology, we have to summarize physical properties by estimating the “weighted averages” of data distributed over a geographic area (this is done to define the value of a fossil fuel deposit). Because the data coverage is uneven we use kriging to interpolate…

        I was wondering if the result (in this case let’s say it’s the lower 48 average temp anomaly) changes appreciably if you drop all the “questionable” stations and contour the data after infilling with a kriging technique the community finds acceptable and bias free. I take it the key is the temperature anomaly change per decade, and I have a hunch the trend won’t change much… unless there has been significant meddling with the underlying data set. Did you guys run initial trials using a coarser grid? Was there any difference at all?
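        To make the question concrete, here is a rough sketch of that kind of spatial infill, using inverse-distance weighting as a crude stand-in for kriging (proper kriging needs a fitted variogram; the coordinates and anomalies below are invented, and this is not how NOAA or BEST actually do it):

        ```python
        # Inverse-distance-weighted estimate of an anomaly at a dropped station's location.
        import numpy as np

        # (lat, lon, anomaly in deg C) for a few hypothetical neighbouring stations
        stations = np.array([
            (29.7, -97.7, 0.40),
            (29.4, -98.5, 0.55),
            (30.3, -97.8, 0.35),
            (29.9, -96.9, 0.50),
        ])

        def idw(lat, lon, data, power=2):
            """Weight each neighbour by 1/distance**power and average."""
            d = np.hypot(data[:, 0] - lat, data[:, 1] - lon)
            w = 1.0 / d**power
            return float(np.sum(w * data[:, 2]) / np.sum(w))

        print(round(idw(29.68, -97.65, stations), 3))   # infilled anomaly at the gap
        ```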

  66. The elephant in the room is the land/ocean divergence.

    http://www.woodfortrees.org/plot/uah-land/mean:12/plot/uah/mean:12/plot/hadcrut4gl/from:1979/mean:12/plot/crutem4vgl/from:1979/mean:12

    The reason for the divergence is changes in latent heat from the surface due to changing water availability – and correction for this is not remotely on the radar. It adds up to a record that is obsolete for climate purposes – and an argument that is pointless.

  67. I have a suspicion that temperature stations suffer from siting bias; they are placed where they are for human reasons and not randomly distributed.
    The reasons for their relocation are obviously complex, but I have a suspicion that a major reason for movement is the rising price of the land they sit on, due to encroachment of people. A rural station starts a long way away from people, then the people crowd it, warming it, and then it is relocated away from the UHI.
    It is not a problem unless you convert the saw-tooth pattern of readings into a straight line by removing the relocation drop.

  69. A fan of *MORE* discourse

    Question  Could any amount of curve-fitting and cycle-seeking predict the amazing Fujiwhara Effect that current dynamical models are predicting?

    Answer  No. Climate-science has to be much more that curve-fitting and cycle-seeking.

    Conclusion  Steve Goddard’s data-quibbles are peripheral to the thermodynamical, energy transport, and fluid mechanical foundations of 21st century climate-science.


  70. Pingback: The scientific method is at work on the USHCN temperature data set | Watts Up With That?

    • I had a quick look. First, it’s great that Anthony admits a mistake (I was going to say apologizes, but I would have to check). But I can’t see how this addresses this curve: http://stevengoddard.wordpress.com/2014/06/28/one-more-time/#comment-378857

      • Scottish Sceptic | June 28, 2014 at 5:14 pm

        You asked the right question:
        “Is there a simple one-to-one correspondence between each station in the raw data set and the final.”

        And he said yes, which is false. I could call it a lie, but I think he just doesn’t have a clue.

        Here’s why it’s a key issue.

    • Steven Mosher

      Unfortunately Anthony trusts the NCDC metadata when it comes to station moves.

      Nobody picked that up. Well, I just did.

      • OK Steven, when did the station move?
        You have 6 moves since 1949 in BEST. Have you got the documentation to compare with your movement estimates?

      • Steven Mosher

        Doc,

        the data chart shows when the station moved.
        There are 16 different sources that supply both metadata and temperature data.

        There are two approaches:

        1. Pick a source that you can't check and trust it.
        2. Look at all the information

        And recall, imputing a move when none is made is harmless
        Missing a move introduces a bias, if the move was significant.

        Just logic

    • OK, I’ve finally worked out that the graph Steve Goddard produced has problems, in that it is comparing real stations with the single final figure.

      I still think it is symptomatic of a problem (possibly related to Anthony’s post with station numbers). But I also doubt whether gridding is working. Longer comment on my blog: http://scottishsceptic.wordpress.com/2014/06/28/one-more-time/

    • I find that post baffling. Anthony Watts says:

      the best thing about all this hoopla over the USHCN data set is the Polifact story where we have all these experts lined up (including me as the token skeptic) that stated without a doubt that Goddard was wrong and rated the claim “pants of fire”.

      They’ll all be eating some crow, as will I, but now that I have Gavin for dinner company, I don’t really mind at all.

      That is based upon the fact that Goddard was right about a bug existing. However, the Politifact story in question had nothing to do with that bug. The Politifact story was about Goddard’s claim that the data had been severely adjusted to alter the results. There’s no indication that bug had anything to do with that claim.

      In effect, people said Goddard was wrong about issue A, and Watts is saying they need to eat crow because Goddard was right about issue B. It’s incoherent.

      (And if you look at my comments on the post, you’ll see it took me a while to figure out the trick in the post.)

    • Judy,
      Picking up from what Brandon just said, I read in the Politifact article that the main complaint Gavin had was time of day. Is that a factor in Goddard’s data or not? I didn’t see that mentioned by Watts.

  71. I remember when I first became interested in finding out about climate change. I looked at a lot of web sites and most often wondered whether or not it was true or partly true or flat out misleading. When I followed Drudge’s link and saw that oz picture I was right away dubious. I had read things there before and had no reason to believe or disbelieve, but I just didn’t really trust it. I waited for the response from the consensus; it didn’t take long. Now there is more to the story and it’s really getting muddled.

    Perceptions can often supersede reality, at least for a while anyway. Some may believe the original story. Some would have thought BS and been satisfied by pants on fire. How many now dwindle down to: ‘Oh guess what, there is something fishy here!’ Finally everyone retires to their side; everyone lines up where they were before. Can’t teach an old climate warrior new tricks – or can you?

  72. Pingback: Offering Critical Thinking To An Acquaintance on Global Warming | Religio-Political Talk (RPT)

  73. The ultimate folly is the bald presumption that the many faults of station records can be reliably identified and corrected by various ad hoc adjustments, homogenizations, and statistical massages. Meanwhile there’s hardly any serious validation of station records done by well-established signal analysis methods.

    What various manufacturers of climate time-series have done with the Luling TX data is incredible and is hardly confined to a recent year. It pervades the entire time-series, turning a negative century-long trend into a positive one. And this legerdemain is by no means an exception in the latter versions of the USHCN data base.

    • Steven Mosher

      “Meanwhile there’s hardly any serious validation of station records done by well-established signal analysis methods.”

      ah, somebody who didn't read the code.

    • Ah, yet another self-styled “expert” who has no concept of what constitutes serious validation by well-established signal analysis methods.

  74. So this whole thing went from Goddard’s accusation of messing with the whole US temperature record, to an individual Texas station with a big problem that was discarded anyway, to now the practice of infilling, which, if it needs correcting, could go either way and hardly matter regarding the effect on US warming. Each step has been more towards what we already have as the US temperature record. Just one big diversion that ends up where we started. Keeps the skeptics off the streets, I suppose.

  75. John S. | June 28, 2014 at 5:22 pm | Reply
    “The ultimate folly is the bald presumption that the many faults of station records can be reliably identified and corrected by various ad hoc adjustments, homogenizations, and statistical massages. “

    You don’t want to cite this case. In Luling the process did in fact identify what turned out to be a real electrical problem, and acted correctly to quarantine it.

    • I became engineering manager in a factory where the temperature control was so appalling that even the faults had faults – so, often, even though they were faulty, the many faults compensated each other out.

      That is what your network sounds like.

    • Anthony Watts

      Nick, when you quarantine a person, you remove them from the general population so that they don’t infect others. In this case the individual station data was pulled, and replaced with imposter data, data that is “infected” (infilled) from other stations nearby.

      Your quarantine analogy fails, just like so many of your “racehorse” machinations of defense of the indefensible.

      • So we just blow off the quality control then eh?

      • ” defense of the indefensible”
        It’s perfectly defensible. USHCN, with FILNET, has been a prominent product of one of the world’s major scientific bodies for 25 years. They have a large body of published literature explaining their methods. You have decided in recent days that you don’t like infilling – you can start to make a case, but it’s way early to describe this well-established process as indefensible.

        Whenever you create an average, if you remove a datapoint, that has the same effect as replacing it with the average of the rest. That is simple arithmetic. Infilling replaces it with the average of neighbours. That has to be better. And it certainly isn’t imposter data. It’s the data you were already dealing with.

      • NS – no one has justified why the bad data from a station needs to be replaced with anything at all. It simply isn’t necessary to replace it. Just drop the bad data. End of story. There is zero justification to do anything else.

      • Yes, the dude doesn’t like the concept of infilling. But it so happens that the idea behind infilling predicted the majority of well sites for oil extraction.

        Drill, baby, drill.

        Infill, baby, infill.

      • Thanks, Anthony.

        Steven Goddard aka Tony Heller has issued an open invitation to debate those who disagree with him.

        The critical comments here would be more credible if posted on that site:

        http://stevengoddard.wordpress.com/2014/06/29/the-scumbags-are-out-in-force/

      • The quarantine analogy works. If you leave bad stations in, they infect the average. I don’t think even most of the skeptics wanted the Luling data left in when they saw its problems.

      • jim2 | June 29, 2014 at 8:12 pm
        “NS – no one has justified why the bad data from a station needs to be replaced with anything at all. It simply isn’t necessary to replace it. Just drop the bad data.”

        You weren’t following what I said. Dropping has exactly the same effect as replacing with the average of remainder:
        Year’s monthly temps
        2 5 10 15 20 22 20 15 10 8 5 0. Average = sum/12 = 11
        Drop Feb, average is (2+10+…)/11=11.55
        Just dropping Feb raised the average. It’s the same as
        (2+11.55+10+…)/12=11.82 ie replacing Feb by 11.55

        But you can see 11.55 is a bad replacement. How about using average of neighbours. Replace Feb by (2+10)/2=6
        (2+6+10+…)/12=11.08
        Much closer.

        You can’t avoid choices. You just have to make the best choice that you can.

      • Oops, (2+11.55+10+…)/12=11.82 above
        should be (2+11.55+10+…)/12=11.55, ie replacing Feb by 11.55.
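        The same arithmetic in a few lines, for anyone who wants to rerun it (these are just the toy monthly numbers from the comment above, not any NOAA product):

        ```python
        # Dropping a value vs. infilling it: the monthly example re-run.
        months = [2, 5, 10, 15, 20, 22, 20, 15, 10, 8, 5, 0]   # Feb = 5 is the "bad" value

        full_avg = sum(months) / 12                               # 11.0, the true annual mean
        drop_feb = (sum(months) - 5) / 11                         # 11.55, just drop the bad value
        fill_with_rest = (sum(months) - 5 + drop_feb) / 12        # 11.55, same thing in disguise
        fill_with_neighbours = (sum(months) - 5 + (2 + 10) / 2) / 12  # 11.08, Jan/Mar mean infill

        print(round(full_avg, 2), round(drop_feb, 2),
              round(fill_with_rest, 2), round(fill_with_neighbours, 2))
        # Neighbour infill lands closest to the true 11.0.
        ```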

      • Whenever you create an average, if you remove a datapoint, that has the same effect as replacing it with the average of the rest. That is simple arithmetic. Infilling replaces it with the average of neighbours. That has to be better. And it certainly isn’t imposter data. It’s the data you were already dealing with.

        What I think Stokes is saying is that the better answer is to derive the average from a better source of nearby stations, not the worse source of all the stations.
        Infilling one day a year does not create a zombie. Nor does one day a month. One day a week it might start to have a few zombie traits, but that data is still valuable. Think useful to farmers and growing degree days, and weather forecasters using North Canada data.
        Is there any useful information from full zombie stations? I think there is, though perhaps they should be placed in a lesser tier.

      • omanuel | June 29, 2014 at 8:25 pm |
        “Steven Goddard aka Tony Heller has issued an open invitation to debate those who disagree with him.”

        I remember an earlier invitation
        “Again, I am happy to debate and humiliate anyone who disagrees with this basic mathematics.”

        I’ve tried. His motto seems to be “never explain, never apologize”. And his basic mathematics is hopeless.

      • NS – I get that. I’m saying just drop it, don’t replace it. So you might have a discontinuity, the BEST technique will just treat it as a separate series. There is no law that says a bad data point has to be replaced by any other number of any kink.

      • Hmmmm … slip of the finger, I think. One man’s kinky temperature series is another man’s sleazy one.

      • Ragnaar | June 29, 2014 at 8:47 pm |
        “Is there any useful information from full zombie stations?”

        Actually, no. The information is in the real datapoints. Infilling doesn’t add or subtract information. It can give you a more accurate average. The fraction of infilling is irrelevant; what counts is having enough real data in total.

        Basically, if you have an accurate weighting scheme set up, and data goes missing, you can use the same weighting scheme with appropriate infill. Otherwise, you have to revise the weighting scheme. In my annual example, with all data you can weight equally. Missing a known winter month, you should either estimate (as a winter month) or reweight.

      • jim2 | June 29, 2014 at 9:02 pm |
        “There is no law that says a bad data point has to be replaced by any other number of any kind.”

        Yes, there is. Law of basic arithmetic. Not replacing is exactly equivalent to replacing with average of remainder. It doesn’t matter what you call what you’ve done. And if that replacement is bad, then dropping is exactly equally bad. And if you know the annual average is a bad replacement for Feb, then bad is what you get.

      • If dropping it and replacing it with the average gets the same result, and I believe that’s true, then there is no reason to replace it. Sheesh!

        Maybe this will be more clear.
        1. Collect the raw temp data
        2. Go through all the processing steps to obtain the temperature field.
        3. Do the QA step.
        4. Drop the raw data points flagged as bad.
        5. Repeat step 2 with the QA’ed data set.

        I’m sure I’m missing something because what you and Mosher keep saying makes no sense. But hopefully, the steps above will communicate what I had in mind better.

      • jim2, so when they sum all the stations to get the US or state mean, you think it is OK to just drop some zombie (for example) Maine stations for future years, so less Maine stations figure into the average, making it warmer? I suspect that the purpose of keeping the zombies is to do the best to keep a uniform station distribution for state and national means from year to year, otherwise that can get very hard to interpret with stations just dropping out of the means. Another way is to go back and remove the zombie from the whole record, which changes past years too, but some don’t like that happening either, and a few might even freak out when they see past US means changing.

      • Jim D | June 30, 2014 at 12:00 am |
        …so when they sum all the stations to get the US or state mean, you think it is OK to just drop some zombie (for example) Maine stations for future years, so less Maine stations figure into the average, making it warmer?
        So is each station weighted differently by the NOAA so that a station in the sticks has more weight as it covers more area?

      • jim2 | June 29, 2014 at 9:52 pm |
        “If dropping it and replacing it with the average gets the same result, and I believe that’s true, then there is no reason to replace it.”

        I’ve expanded on the annual average simplified version here. I think it shows that dropping is far from cost free.

      • OK, Nick. I still don’t like using bad data, and maybe you don’t, I don’t know. I downloaded the code but don’t have Matlab and have never programmed it. I have Octave so I might get it to run, but no help file, so …

        How about this?
        Maybe this will be more clear.
        1. Collect the raw temp data
        2. Go through all the processing steps to obtain the temperature field.
        3. Do the QA step.
        4. Drop the raw data points flagged as bad.
        5. INFILL THE DROPPED BAD DATA TO YOUR HEART'S CONTENT.
        6. Repeat step 2 with the QA’ed data set.

        Is this in fact what happens?

    • Nick Stokes:

      I wrote: “What various manufacturers of climate time-series have done with the Luling TX data is incredible and is hardly confined to a recent year. It pervades the entire time-series, turning a negative century-long trend into a positive one. ”

      What part of the above statement did you miss?

  76. I’ve been reading up on anomalies again, and Steve Mosher’s comment. And I don’t understand why there is any need at all to adjust for station moves if the trend for that station is used rather than an absolute.
    Using the trend/differential or difference (whichever you prefer) totally removes any need to adjust new station data.

    If we are interested in temperature change – then all we need to know is the temperature change. The absolute temperature is totally meaningless. All we want is the average temperature change – so why not average the temperature change, instead of adjusting the temperature then averaging and then only at the end working out the trend?

    The only adjustment that would be needed is to homogenize coverage (but not just geographically but in other ways).

    • Suppose you had 10 stations and 2 of them moved and started reporting colder temperatures. Don’t you expect that to affect the trend if you just flat ignored the moves? They account for moves by treating these as new stations, and their trends before and after the move are separated as if they were different stations. Makes sense.

      • I was suggesting treating them as different stations when they moved. So why is there any adjustment if a station moves?

      • My understanding of BEST is that they don’t have to adjust because they insert break points when they detect a large quick change at a station.

      • Steven Mosher

        SS

        there is no adjustment if the station moves.
        The record is split.
        IF the move causes no change in the station, then nothing happens.
        IF the move caused a Divergence, then the two stations are treated as two different stations.

        it's simple. In fact, skeptics thought of it FIRST

        Why? because I beat the drum pretty heavily about the error of adjustment and skeptics suggested just splitting the station

        they spoke, I listened.

      • Is it easier to re-vivify an earthworm split in two pieces or a hundred?
        ================

      • I think splitting station data at discontinuities was a great advance. I don’t know who thought of it, but it makes sense.

    • Steven Mosher

      There are two approaches

      Example.

      Station Zebra, lat = 40, lon = -80, alt = 0m ASL

      At year 5 station Zebra is moved from 0m ASL to 1000m ASL

      Station Zebra, lat = 40.02, lon = -80, alt = 1000m ASL

      in the data the station will have the same name. but at year 5
      it moves up from 0m ASL to 1000m

      here is the temperature for 10 years

      6 6 6 6 6 0 0 0 0 0

      Note the drop. Now, what do you do?

      Approach 1. LAPSE RATE adjustment

      we know that temperature goes down as you go up in elevation.

      So, ONE approach is to adjust the station. The adjustment is made by
      a lapse rate adjustment: an AVERAGE lapse rate of say 6.6 C per km is applied

      6 6 6 6 6 6.6 6.6 6.6 6.6 6.6

      This is prone to error since lapse rate is season dependent and location dependent. But it is hoped that errors will be offsetting

      Second approach.

      Split the station. Why? Because it's two different stations that just happen to
      be NAMED THE SAME.

      it's that frickin simple

      if you don't adjust, you get a spurious cooling trend
      if you do adjust you introduce the ERROR OF ADJUSTMENT

      so split the station and you get neither error
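      Putting those Zebra numbers through the three possibilities makes the point numerically. This is only a toy sketch of the logic above (my own illustration, not BEST code):

      ```python
      # One fictional station that moves up 1000 m after year 5: naive, lapse-rate-adjusted, split.
      import numpy as np

      temps = np.array([6, 6, 6, 6, 6, 0, 0, 0, 0, 0], dtype=float)  # annual means
      years = np.arange(1, 11)
      move = 5            # index of the first post-move year
      lapse = 6.6         # assumed average lapse rate, deg C per km

      def trend(y, t):
          """Least-squares slope in deg C per year."""
          return np.polyfit(y, t, 1)[0]

      print("naive:", round(trend(years, temps), 2))         # spurious cooling (~ -0.9 C/yr)

      adjusted = temps.copy()
      adjusted[move:] += lapse                                # approach 1: lapse-rate adjustment
      print("adjusted:", round(trend(years, adjusted), 2))    # small spurious warming from the 0.6 mismatch

      seg1, seg2 = temps[:move], temps[move:]                 # approach 2: split the record
      print("segments:", round(trend(years[:move], seg1), 2),
            round(trend(years[move:], seg2), 2))              # both flat: no spurious trend
      ```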

      • I’m on board with splitting stations. Not on board with using data known to be bad. And still wondering why, after all the … errr … calculations, the net change in trend isn’t zero – instead the trend goes up. THAT does not make sense.

      • Zebra old 1900-1990 trend up
        Zebra new 1991-2014 trend flat
        Can we say anything about the trend for Zebra old combined with Zebra new?

      • Steven Mosher

        Ragnaar, you don't combine the two. PERIOD.

  77. I think some people are being a bit too obtuse, in this instance. (intentionally?)

    First of all, Tuling is an example! Forget the specifics of that individual case. Simply understand there is an endemic problem here. Some have stated that the people acted correctly in response to the station's problems.
    Did they? Do their actions allow for an accurate assessment of our temps? What of the others, and the zombie stations? Infilling? We absolutely know this isn't precise. How precise do we have to be? Well, considering that people are wetting themselves over a supposed 0.8 deg C, I would submit we need to be closer than that in estimating the temps going into our global data set.

    Further, knowing this is a problem in the US, which probably sinks more resources into this question than, say, China, any nation in Africa, or South America, the question must arise: how pervasive is this problem (station drop-off and zombie stations) globally?

    • Well said. And you need not have been rhetorical. The answer is in the GHCN station data raw v. “homogenized”, and it is not pretty. See Steriou’s 2012 EGU meeting presentation on same, using a 163 station global random sample.

    • Heh, we can apply the ‘Luling Test’ to see if any given thermometer is a machine or something more merely human.
      =============

    • er, H/t Suyts, through ‘Tuling’.
      ==========

  78. I’m just a simple engineer, but it strikes me that if you do not have the raw data, then you do not have a valid measurement. So don’t include “estimated”, or any other “golly gee, I think it ought to be this” excuses. Plugging-in “guesses” invariably leaves the door wide open for abuse, which seems to be exactly what has occurred.

    In the broader sense, the totality of vast amounts of raw data should suffice to establish broadly general trends, recognizing that precision may not be possible. You may just end up with inconsistencies that do not support much of anything. Some-days-you-eat-the-bear, other-days-he eats-you.

    PS I wish to lodge a complaint about WordPress. I strongly suspect they are engaged in stifling those who do not toe the Climate Establishment's line.

  79. One remaining question I have has to do with the fact that adjustments, expectations, or whatever seem to make the temperature trend more positive. It seems that some problems would make the trend lower, and some higher. It seems to me those should be somewhat randomly distributed and therefore when all is said and done, there would be no increase or decrease in the trend.

    Anyone got an explanation for that?

    • jim2,
      One particular factor that Gavin Schmidt pointed out in the politifact article was time of day. Before 1940 temperature was taken at sundown. After 1940 it was gradually changed to sunrise. In that case, anyway, adjustments would have to be up.
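      A rough simulation of why the observation time matters for max/min thermometers (an idealised sine-wave diurnal cycle and made-up reset hours, not the actual NOAA TOBS method): resetting near the afternoon peak lets one hot afternoon count toward two observation days, and resetting near dawn does the same for cold mornings.

      ```python
      # Mean of (max+min)/2 over 24-hour windows ending at different reset times.
      import numpy as np

      rng = np.random.default_rng(0)
      n_days = 1000
      daily_mean = 15 + rng.normal(0, 3, n_days)                   # day-to-day weather variation
      diurnal = 5 * np.sin(2 * np.pi * (np.arange(24) - 9) / 24)   # peak mid-afternoon
      temps = (daily_mean[:, None] + diurnal).ravel()              # one long hourly series

      def mean_with_reset(temps, reset_hour):
          t = temps[reset_hour:]
          n = len(t) // 24
          w = t[:n * 24].reshape(n, 24)                            # 24-hour observation windows
          return ((w.max(axis=1) + w.min(axis=1)) / 2).mean()

      print("evening reset:", round(mean_with_reset(temps, 17), 2))  # runs warm
      print("morning reset:", round(mean_with_reset(temps, 7), 2))   # runs cool
      # Switching a network from evening to morning observation therefore introduces
      # a spurious cooling step unless an adjustment is applied.
      ```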

      • ordvic, “After 1940 it was gradually changed to sunrise. In that case, anyway, adjustments would have to be up.”

        Doesn’t much matter since the thermometers were Max/Min LIG which recorded the actual max and min. Changing the time of day the numbers were recorded only has an impact when that time changes. BEST doesn’t need TOB adjustment because they treat each obvious break as a new station start.

      • capt,
        It’s all news to me. I only rehashed what Gavin Schmidt pointed to as a problem. If it was a strawman, I’d like to know.

      • ordvic, “If it was a strawman, I’d like to know.”

        Okay, it’s a strawman. Think about it for a second. The liquid-in-glass thermometers automatically recorded the actual max and min for the day. Since the data is averaged over each month, recording one day early or late per month is virtually nothing error-wise. BEST doesn’t use TOB adjustments because they don’t need to, since they treat every daily max/min just like the old LIG. Most of the real TOB correction is required for the newer digital instruments that record the actual time of day the max or min readings occurred. So TOBs is fudging the past to fix the present. It all works out the same, but TOBs is required due to poor procedural planning.

      • Obviously Gavin Schmidt would know he is misleading; I can only think he did it on purpose. I’m wondering why no one has called him out on it.

      • ordvic, “I’m wondering why no one has called him out on it.”

        Because he is one of the certified geniuses that have become comfortable with doing things the most screwed up way possible. Any real business would have streamlined and simplified the processes but old school climate science is dedicated to not stepping on the toes of the ones that came before. So TOBs “is” needed if you use the screwed up SOP but not if you do things somewhat normal.

      • Indeed, it makes me feel like a fool for even citing him. In the future I think I’ll refrain from his muses. As GW Bush says, “don’t get fooled again”. I’m starting to think that whole scene is just a circus act.

    • ordvic, I thought there was always an attempt, at least, at TOB correction.

      • Yeah, I don’t know whether or not Goddard adjusted. I only know that that is what Gavin Schmidt pointed out as being erroneous in that Tampa Bay Times article.

    • With all the adjustments shouldn’t there be a recognition of some small level of ambiguity, what electromagnetic engineers call “grass” or “noise”?

    • j2, recognize the great need for a rise. There is homage due to the fear, and to the madness.
      ===========

  80. Pingback: Say ……. “Steve Goddard Just Might Have A Point!!!!” ….. Says The Lukewarmers | suyts space

  81. Robert of Ottawa

    I’m sorry. I have followed these stories for some years. Although Goddard’s methods may not be perfect, he does bring to light the fact, as does Watts, that all adjustments, for whatever reason, do appear in the same direction.

    I do not call this a bug in the Warmista methodology, I call it a feature.

  82. I left a message at WUWT of relevance to SG’s comments, based on asking questions at the Blackboard
    June 26, 2014 at 4:46 pm

    ANTHONY “Goddard is right to point out that there is increasing data loss in USHCN and it is being increasingly infilled with data from surrounding stations. While this is not a new finding, it is important to keep tabs on. He’s brought it to the forefront again, and for that I thank him.”
    Zeke has had 3 posts up at Lucia’s since June 5th 2014, the first had 284 comments.
    I made several requests to Zeke re the USHCN figures with little response
    So to be clear I said
    there were “ 1218 real stations (USHCN) in the late 1980s
    There are now [???] original real stations left-my guess half 609
    There are [???] total real stations – my guess eyeballing 870
    There are 161 new real stations , all in airports or cities added to the graph
    There are 348 made up stations and 161 selected new stations.
    The number of the original 1218 has to be kept
    Nobody has put up a new thermometer in rural USA in the last 30 years, and no one has considered using any of the rural thermometers among the possibly 3000 of the discarded 5782 cooperative network stations. [intended sarcasm]
    June 7th, Zeke: “As I mentioned in the original post, about 300 of the 1218 stations originally assigned to the USHCN in the late 1980s have closed, mostly due to volunteer observers dying or otherwise stopping reporting. No stations have been added to the network to make up for this loss, so there are closer to 900 stations reporting on a monthly basis today.”

    yet he also said
    Zeke has a post at SG where he admits that there are only 650 real stations out of 1218. This is a lot less than the 918 that he alludes to above. Why would he say 650 to SG (May 12th 3.00 pm) and instead say at the Blackboard (#130058) that about 300 of the 1218 stations have closed down?

    Anthony, 650 real stations means a lot more than 40% missing; in fact it is nearly 50%.
    Would you be able to get Zeke to clarify his comment to SG and confirm
    a. the number of real stations [this may be between his 650 and 850 – the last list of up-to-date reporting stations early this year had 833 twice, but presumably a few more not used as they were missing some days]
    b. the number of original real stations remaining – this may be lower than 650 if real but new replacement stations have been put in; c. in which case, the number of real original and the number of real replacement stations in the 650

    REPLY: Links to these comments? Don’t make me chase them down please if you want me to look at them -Anthony
    No smoke without fire it seems

  83. This is precisely what Michael Crichton noticed and objected to in his presentation to the United States Senate. Fake global temperature values are not just tolerated but utilized by the global warming gang. I ran into it in 2010 while doing research on my book “What Warming?” It turned out that HadCRUT3 was showing warming in the eighties and nineties when satellite data showed that global mean temperature did not change for 18 years (Figure 24 in the book). They gave it an upward slope of 0.1 degree Celsius per decade. The same fakery is still going on. I put a warning about it into the preface of the book, and two years later they, along with GISTEMP and NCDC, decided to not show it any more and aligned their data with the satellites without telling anybody anything. But looking at present temperature records that seems to have been a passing thought – they still show warming where none exists.

    Further examination of their data revealed that all three of these data sources had been subjected to identical computer processing that left its traces as an unanticipated consequence of some kind of a computer screw-up. These traces consist of sharp upward spikes that look like noise but are found at exactly identical sites in the HadCRUT, GISTEMP, and NCDC temperature datasets. These are supposedly independent data sets from two continents. These spikes are prominently visible at the beginnings of the years 1980, 1981, 1983, 1986, 1988, 1990, 1998, 1999, 2002, 2007, and 2010. This you can check yourself simply by comparing them to parallel UAH or RSS satellite temperature measurements. Clearly all three databases were computer processed by identical software not more than four years ago. We were told nothing about it, but since their data show a greater upward temperature slope than satellites do since 1979, I associate their screwed-up data processing with illicit co-operation among the three data sources in order to create a greater global temperature rise than justified by observed temperature measurements. And this triple alliance allows them to refer to each other to confirm fake warming.

  84. Tampering with temperature is inherent in the system. Take any of the average temperatures – daily, monthly and yearly. They are all based on daily average temperatures, which are not true averages. Why? Because average daily temperatures are usually calculated as the arithmetical average of the daily maximum and minimum of the 24 hours. But the accuracy of this method clearly depends on the distribution of temperatures during the day. While max and min mercury thermometers were once the sole means of doing this, it is now possible to have a continuous record of measured temperature together with automatic averaging over any required period.

    Despite political assertions to the contrary, Australian temperatures or CO2 emissions have no measurable effect on world average temperature, yet spatial averaging can cause large errors in global averages. Consider this. If someone asked you to provide average global surface temperatures, how would you do it? Well, you could divide the world up into, say, 50 km squares with a thermometer at the centre of each square, and make simultaneous global measurements. Clearly this would be impossibly expensive, at least because of the vast stretches of the southern oceans. So you could decide not to use direct measurement and rely on some type of interpolation between the possible measurements, but which type? Linear interpolation? Some will say, study the isobars. Either way, a huge amount of human judgement is necessary.

    So a lot of human accountability is at stake, and it is government's job to audit and regulate this, not to say ‘the science is settled’ when it clearly isn’t.

  85. jennifermarohasy

    Hi Judith, Here is a much updated and more comprehensive assessment of the methodology the Australian Bureau of Meteorology uses to corrupt the official temperature data… http://tinyurl.com/lcgk68v

    • Hi Jennifer, thanks for the link

    • Execrable cherry picking, but no one in Oz familiar with Jennifer’s history of dismal politicised ‘science’ will be even slightly surprised by this latest garbage.

      But hey, Judith couldn’t give a flying f……, as long as it seems to promote her ‘preferred story line’; critical thinking is held in abeyance.

      • I just read Marohasy’s memo; maybe the issue involves the fact that cherries are available and can be picked? If the cherries refer to improper data correction methods then those cherries ought to be picked.

      • fernando, I can pick 2 different sites, and come up with the exact opposite story – how the adjustments are downplaying warming and that warming is actually much greater than we are being told.

        And Jennifer hasn’t done even the basics in looking at the history of those sites to see why adjustments might have been made – readily available information.

        Hopeless.

  86. June 7th, Zeke: “As I mentioned in the original post, about 300 of the 1218 stations originally assigned to the USHCN in the late 1980s have closed, so there are closer to 900 stations reporting on a monthly basis today.”
    yet he also said
    Zeke has a post at SG where he suggests there are only 650 real stations out of 1218. This is a lot less than the 918 that he alludes to above. Why would he say 650 to SG (May 12th 3.00 pm)?

    Once we sort out the percentage of real, raw original stations from 1970 in the 1218 stations, we could move on to the Cowtan and Way Kringe [sorry, Kriging]. Has the warmth in Arctic temperatures they claim been based on infilled data as well?
    Why, Nick and Mosher, should we rely on infilling from neighboring stations [one of Steven’s absolutely untouchable postulates of science] when Mr Cowtan says at Skeptical Science that this notion is seriously flawed and it is better to take data from sites that are not immediately adjacent?
    Have you corrected him yet, Steven? No? Didn’t think you could.

  87. Scott Scarborough

    Does Berkeley Earth rely on NOAA data also?

  88. Are you still arguing about ‘homogenisation’ when all land surface records are running almost half a degree too hot?

    Because of….

  89. jennifermarohasy

    No. If you read my recent paper, and see the Twitter discussion I had with Gavin Schmidt, you will see that GISS (NASA) and Berkeley both apply the same algorithms to the ‘raw’ temperature data that can have the effect of changing a perfectly good (but politically incorrect) temperature series from one of cooling to warming… I show how this is done for a place in Australia called Amberley … read the paper here http://jennifermarohasy.com/wp-content/uploads/2014/06/Marohasy_Abbot_Stewart_Jensen_2014_06_25_Final.pdf

  90. Kristen Barnes (Ponder the Maunder) at 15 years old could figure it out — even the systemic bias introduced by corrupted data due to the erroneous readings of official instruments — and, instead of celebrating her perspicacity, here we are 7 years later and so many still fail to realize our society is fundamentally dishonest. How is that possible? Duh! Part of being a skeptic is suspecting fundamental dishonesty. But, being clueless is when you refuse to acknowledge fundamental dishonesty when it’s right there in plain sight for everyone to see – e.g., like government scientists placing official temperature instrumentation in an asphalt parking lot by walls and cars and under the exhaust vents of air conditioners.

    • Interesting point. One reason why I read the climate debates is my belief that a lot of what we are told is somewhat distorted. My initial suspicions were aroused while living in a communist dictatorship in which all the official information was censored and made to fit a story line. In such societies most information comes from the government, it’s illegal to attempt to distribute information which doesn’t conform, and so on.

      Eventually I managed to escape and I arrived in a freer society. But I found out even there the information was chewed over and distorted. This problem was quite serious in fields such as history, politics, and foreign policy. For example, if we want to have a deadly debate we can introduce subjects such as “the reasons why the confederate states tried to secede”, or “what about those Iraqi WMDs?”, or “was there a genocide in Kosovo before Clinton gave orders to bomb in 1999?”….

      The climate debate should never have degenerated this way. But right now I sense it’s about 50 % science and 50 % politics. And when it comes to politics we tend to get distorted as heck.

  91. Let me fix my formatting.

    ‘The Bureau has “corrected” this inconvenient truth by jumping-up the minimum temperatures twice through the homogenization process: once around 1980 and then around 1996 to achieve a combined temperature increase of over 1.5 degree C, Figure 2. This is obviously a very large step-change, remembering that the entire temperature increase associated with global warming over the 20th century is generally considered to be in the order of 0.8 degree C.’ – Marohasy

    ‘Amberley (040004)
    This site is on the grounds of the Amberley RAAF base, west of Ipswich. The instrument enclosure itself is bare ground (black soil) with natural grass surrounding.

    History
    The site has been operating since August 1941. No significant moves are evident in documentation but the data indicate a substantial change of some kind at the site in or around 1980. An automatic weather station was installed on 3 July 1997. Manual observations continued under site number 040910
    until September 1998.’

    I might note that the ‘step changes’ relate to changes at the site and instruments and comparing this to warming is not at all relevant.

    The entire methodology is here – http://www.bom.gov.au/climate/change/acorn-sat/#tabs=ACORN%E2%80%90SAT

    The methodology seems pretty much above board – but usefulness of the data for climate seems overstated.

    http://www.woodfortrees.org/plot/rss/plot/uah

    Assuming a new station identity when data does something odd doesn’t solve the problem either. Different data at the same point does something different to the kriging.

    The essential problem of land surface records – changes in moisture availability – remains. However – tropospheric warming in the 1979-1998 period remains as well.

    Just what is it we are quibbling about? Adjustments that are justifiable to an obsolete record?

  92. Methods? Adjustments?

    Really these are just more climate models. Feed in some raw data, add some parametrization, and out pops your adjusted data. Your estimated temperature.

    And like all models the outputs are totally dependent on the assumptions used. While the various agencies claim that these represent a valid number with respect to reality, there doesn’t appear to be any reason to have higher confidence in them than in the models that make predictions about future temperatures.

    And like other models there does not appear to be any way to falsify these models any more than the usual models.

    • Read the methodology – several volumes’ worth. These are not automatic adjustments based on a computer program but rules-based data quality assessment for data that is sadly lacking in homogeneity.

      • Bob Ludwick

        @ Rob Ellison

        “These are not automatic adjustments based on a computer program but rules based data quality assessment for data that is sadly lacking in homogeneity.”

        Been tried before; remember the pig’s ear to silk purse converter? Piece of cake compared to producing multi-century planetary temperature trends, suitable for establishing ‘climate policy’, from actual, recorded thermometer outputs.

    • Does it matter? The warming we are interested in is 1979 to 1997. And the surface record is obsolete at any rate. It is running half a degree warm from moisture artifacts.

  93. Even assuming Goddard is correct about errors in calculating the US surface temperature record, doesn’t its relatively close agreement with the satellite measurements of the lower troposphere (UAH & RSS) put an upper limit on the magnitude of the error?

    • True, but they not only increase the present, they lower the past, which satellites cannot do.

    • ….and the satellites have been in operation how long?

      • The point of the debate, if I correctly understand it, is the result of adjustments in the station population — and estimates for lost or missing data — which affects current data. If the current data were grossly misadjusted, they would no longer agree with the satellite data.

      • Good time to ask a question I’ve never heard satisfactorily answered. Was the calibration of the satellites in any way dependent on Phil Jones’ early ’90s UHI data, which is corrupt, er, not likely correct?
        =============

      • Completely left the adjective ‘China’ out of that comment. It’s not nice to neglect Mama Middle Kingdom.
        ========================

      • Yikes, ‘Chinese’. Green tea and monkeys, with ruby eyes.
        ==========

      • Steven Mosher

        kim NO

        satellite data is not calibrated to the ground.

      • There is an on-board standard used to calibrate the satellite sensors.

      • TNX.
        ===

      • Bob Ludwick

        @ Steve Mosher and jim2

        “kim NO

        satellite data is not calibrated to the ground.”

        “There is an on-board standard used to calibrate the satellite sensors.”

        I am guessing that the satellites are actually microwave radiometers and that the on-board standard is a precision noise source, which calibrates the radiometer so that absolute receive power can be measured.

        How is the precision received signal level measurement translated into surface temperature? Is it directly related to surface temperature, i. e. signal strength x=surface temperature y, or is the received level dependent on cloud cover, type of clouds, humidity, precipitation in the area of interest, whether the surface is desert, forest, grassland, water, snow, etc in addition to the absolute surface temperature? If the signal strength is NOT directly translatable into surface temperature, independent of surface and/or variations in the transmission path, there would seem to be a bit of ‘sausage making’ between receiving the downlinked radiometer signal strength and the production of surface temperatures under the path of the satellite.

    • David Wojick

      Globally the UAH shows no warming 1978-1997 while the surface statistical models show almost all of the warming in the last 75 years during that period. Not exactly close agreement. Rather UAH seems to falsify the statistical models.

    • David Wojick

      RSS adjusts (fiddles?) the data in ways that UAH does not.

  94. angech,

    Yes, but adjustments to the past seem no longer relevant to the current discussion about the US surface temperature record. There seems to have been substantial topic drift, not clearly recognized, from the charges in Goddard’s original post.

    Also lost in all this are the articles leaping from Goddard’s post to accuse the US government of “rigging”, “fabricating”, and “fiddling” the data. Has anyone here commented on that, or the impression it has left in those reading the accusations? Has Goddard?

  95. yes, but adjustments to the past seem no longer relevant to the current discussion about the US surface temperature record.

    the adjustments to the past are being done currently to drop the past real levels and at the same time current readings are being adjusted upwards and you say this is not relevant?
    Time for a reality check, troll.

  96. Berényi Péter

    There is an important report online.

    Energy and Climate: Studies in Geophysics (1977)
    Geophysics Study Committee
    Geophysics Research Board
    Assembly of Mathematical and Physical Sciences
    National Research Council

    NATIONAL ACADEMY OF SCIENCES
    Washington, D.C.
    1977

    In Fig. 2.5, bottom of page 55 (recorded changes of annual mean temperature of the northern hemisphere) we can see a 0.87°C drop from 1938 to 1964. That was 37 years ago.

    In current datasets this mid century cooling is reduced to roughly one third of its original value, to 0.31°C (GISTEMP) or 0.28°C (HadCRUT4).

    I do understand a substantial cooling from 1938 to 1964 (26 years) is inconvenient, as it is next to impossible to reproduce it in GCMs without compromising their ability to postdict the rest of the record. Therefore this cooling, larger than all purported 20th century warming, had to be tamed to a reasonable level retrospectively at all costs, otherwise theory would have suffered.

    Unfortunately that’s not the way science is supposed to work, quite the opposite.

    It was known since 1859 that Mercury had an anomalous perihelion precession, the difference to Newtonian prediction being estimated to be 38″/century by Urbain Le Verrier, corrected to 43″/century later. As it was less than 10% of the entire perihelion advance, one can imagine lots of efforts going into adjustments to save Newtonian celestial mechanics. Which was the case indeed, from supposing an as yet undetected planet (Vulcan) to solar oblateness and beyond. However, these were theoretical attempts, while original observational records (going back to 1697) were left alone.

    It is quite fortunate 19th century astronomers were honest and such meticulous record keepers, otherwise we would not have general relativity even today.

    Therefore surface temperature records should be considered utterly useless until this unbelievably large retrospective adjustment is explained in detail, a monumental work ahead for science historians.

  97. Judy: “Maybe it is a tempest in a teacup, but it looks like something that requires NOAA’s attention.”

    Watts: “There are quite a few ‘zombie weather stations’ in the USHCN final dataset, possibly up to 25% out of the 1218 that is the total number of stations.”

    Yikes. Why wasn’t this picked up before now by people who’ve been studying RAW versus ‘ADJUSTED’ data? Hasn’t Watts had a paper ‘in the works’ for the last 3 years?

  98. “GHCN v3.2 adds not one, not two, but three whole degrees of warming to the Alice Springs record since 1880.”

    http://tallbloke.wordpress.com/2012/10/11/roger-andrews-chunder-down-under-how-ghcn-v3-2-manufactures-warming-in-the-outback/

  99. Skeptics doing what skeptics do best . . . attack skeptics.
    …..
    Backyard flying feline fur

  100. nobodyknows

    I have a question: do individual stations with reliable daily data follow the adjusted trends for the thousands of stations? It should be easy to check, taking the mean daily raw temperatures. Has anyone done that?

  101. Yep storm in a teacup, as usual. This is obvious actually.

    “I think there was an element of ‘boy who cried wolf’ – Goddard has been wrong before, and the comments at Goddard’s blog can be pretty crackpotty”

    You aren’t going to ponder why the boy keeps crying wolf?

    Why is it that Watts and Goddard keep screwing up on temperature records and you keep falling for it?

    When supposedly they are spending all their waking lives looking at temperature records they still can’t understand the most basic things about them. Like “what is a baseline” and “why we shouldn’t average absolute temperatures” or “time of observation bias”.

    “In responding to Goddard’s post, Zeke, Nick Stokes (Moyhu) and Watts may have missed the real story. They focused on their previous criticism of Goddard and missed his main point.”

    Of course this is how it works. The “real story” changes as they get found out. The “point” is never pinned down and will keep changing, because the “point” they want to make is that the US wide or global records are all wrong, but that cannot be substantiated other than by cherrypicking individual stations and arguing from ignorance that maybe, if we just wish hard enough, this means the whole record is substantially affected.

    Why don’t we check that the NOAA adjustments get the national and global records right by starting over? Take the raw data ourselves and do the adjustments from scratch? Let’s call the effort “BEST”. Oh look, nothing changes. But sorry, such a logical approach that yields the wrong answer isn’t good enough for climate deniers.

  102. nottawa rafter

    Surely devoting resources to finding the most pristine, non-corrupted sites over the last 150 years would generate a more valid record of our climate than the mish-mash, adjustment-crazed, assumption-laden, highly debated system we depend on now. Even if the network had only 1% of the current sites, what is gained in validity should compensate for what is lost in spatial representation. What a goofy way to gain knowledge.

    • You can bet that if someone did this it wouldn’t show a substantially different result than the current records.

      And so the deniers would find some excuse to ignore it.

      • If you can get the same results from a tiny subset of stations then why don’t they do just that?

      • You can bet that it would.

      • Cold case detectives are often successful; it works.

      • “If you can get the same results from a tiny subset of stations then why don’t they do just that?”

        But then the complaint would be that they’ve “dropped stations”.

        Can’t win.

      • Ah, so it’s the ‘deniers’, you know, those whose opinions count for so little that they’re called all sorts of names, who are stopping them from using best practice?
        I see!

    • Steven Mosher

      Define pristine and non corrupted in a way that is

      A) objectively verifiable,
      B) traceable to some field study where corrupting influences were studied.

      Simple. BUT define your criteria BEFORE you data snoop.

      • This is made infinitely more difficult now that we know equipment malfunctions and repairs were not noted in the station record. Maybe state climatologists could launch and coordinate an effort to document those sorts of problems.

        Or, Anthony could do yet another crowd sourcing project. The surface stations audit was pretty successful. I would participate again.

    • nottawa rafter:

      Few are really interested in finding the least-corrupted, century-long station records, because that would necessarily eliminate the great majority of the grossly inadequate records. Instead the pretense is maintained by global index manufacturers that their results, based on thousands of largely questionable data snippets, are by sheer numbers alone somehow reliable enough to settle trillion-dollar questions.

      Unable to address those questions through scientifically rigorous methods, they resort to a spate of bald assumptions justifying various ad hoc algorithms. Never mind that spatially homogeneous “fields” of temperature and correlations vaguely defining “regional expectations” cannot be robustly estimated nor circumscribed from available data throughout much of the globe. The band plays on, trumpeting a GIGO product as manna from the great god of naive academic thinking.

  103. nobodyknows

    “Even if the network had only 1% of the current sites, what is gained in validity should compensate for what is lost in spatial representation.”
    I think it is to the point, nottawa.
    And who would find excuses to ignore it, I don’t know.

  104. Upthread Ragnaar said that UAH data had been “fixed”. Is that correct? Is urban data adjusted to remove the fact that the urban area is warmer than the rural?

    I don’t understand that.

    Is the thermometer not providing accurate readings downtown?

    Instead of adjusting the perfectly fine urban measurements, why not deal with it by an area-weighting method? If 5% of the USA is urban, then only 5% of the thermometers in the data set can be urban.

    If the entire country was urban (someday it may be) would we still adjust the data?

    That would be silly.
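
    (For what it’s worth, the area-weighting idea is easy to sketch in Python; the station values and the 5% urban land fraction below are made up purely for illustration: weight each category’s station average by its share of land area instead of by its share of thermometers.)

      import numpy as np

      urban = np.array([16.2, 15.8])            # hypothetical urban station annual means, deg C
      rural = np.array([14.1, 13.9, 14.3])      # hypothetical rural station annual means, deg C

      f_urban = 0.05                            # assumed urban share of total land area
      # Unweighted mean over-represents urban stations (2 of 5 here = 40% of the sample).
      print(np.concatenate([urban, rural]).mean())
      # Area-weighted mean: urban readings kept as measured, but weighted by urban land share.
      print(f_urban * urban.mean() + (1 - f_urban) * rural.mean())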

    • RickA

      Whilst the thermometer may not be urban it may have been urbanised. That is to say it may exist in an environment whereby it is affected by factors such as a small number of buildings or tarmac whereas previously it may have been a genuinely rural station. Moves to airports or stations engulfed by cities may show more uhi than those merely ‘urbanised’ but whether the overall allowance for this factor is correct is another matter.

      CET has made an allowance for UHI since 1976.

      tonyb

      • Climatereason:

        If a location used to be rural and now is urban – and is warmer because of the urbanization – is it not still warmer in that location?

        Why are we adjusting for UAH?

        If a given location has become warmer because of blacktop, is that location not still warmer?

        I am wondering about the philosophy of adjusting for UAH in the first place.

        I would rather take each location for what the thermometer is accurately reading and call that the data.

        Then make sure the ratio of rural thermometers to urban thermometers was accurate based on the percentage of land mass that was urban versus rural.

        Taken to the limit – in a world which is 100% urban, what would be the purpose of adjusting for UAH?

      • RA, sometimes I wonder if that’s the flaw in the whole BEST method, concealed by the slicing of the earthworm into so many bits.

        Bet moshe’s got a good answer; fershur he, Robert, and Zeke have thought of that.
        ==============

    • David Wojick

      Rick A, UAH does not use thermometers. It is based on actual satellite measurements. Only the surface statistical models like GISS, BEST, etc., use thermometers and these statistical models only provide rough estimates of regional or global temperatures. Basically the models are kriging (http://www.kriging.com/whatiskriging.html). Ironically kriging is widely used in estimating oil reserves, but no one there is foolish enough to think it is accurate to three significant figures, like the temperature modelers claim.
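
      For readers unfamiliar with the term, here is a minimal, self-contained Python/NumPy sketch of the ordinary-kriging idea: estimate the value at an unsampled point as a weighted sum of nearby station values, with weights solved from a variogram model of how similarity decays with distance. The exponential variogram and its parameters below are illustrative assumptions only, not anything GISS, BEST or NOAA actually uses.

        import numpy as np

        def ordinary_kriging(xy_obs, z_obs, xy_target, range_km=600.0, sill=1.0, nugget=0.05):
            # Exponential semivariogram; gamma(0) = 0 so the estimator honours the data exactly.
            def gamma(h):
                return np.where(h == 0, 0.0,
                                nugget + (sill - nugget) * (1.0 - np.exp(-h / range_km)))

            n = len(z_obs)
            d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
            # Ordinary-kriging system with a Lagrange multiplier forcing the weights to sum to 1:
            #   [ Gamma  1 ] [ w  ]   [ gamma_0 ]
            #   [ 1^T    0 ] [ mu ] = [    1    ]
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = gamma(d)
            A[n, n] = 0.0
            b = np.ones(n + 1)
            b[:n] = gamma(np.linalg.norm(xy_obs - xy_target, axis=-1))
            w = np.linalg.solve(A, b)[:n]
            return float(w @ z_obs)        # weighted combination of the observed anomalies

        # Three hypothetical stations (x, y in km) and their anomalies; estimate a point in between.
        stations = np.array([[0.0, 0.0], [300.0, 0.0], [0.0, 400.0]])
        anoms = np.array([0.6, 0.9, 0.4])
        print(ordinary_kriging(stations, anoms, np.array([150.0, 150.0])))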

      • David – Sorry about that. I am using the wrong initials.

        I am referring to urban island effect (whatever those initials are).

        So I am wondering about adjusting a temperature record which is accurately taken – just because it happens to be located in the middle of an urban area.

        If it is really warmer in the center of the urban area – why adjust it?

        Why not just take it for what it is and include a mix of urban and rural stations which account for the % of urban area versus rural area?

      • David, we also use kriging for gas, and I even suggested we use it to estimate bulk aquifer properties (we don’t drill into water on purpose, but we do need to pin down the properties of the water-soaked rocks sitting next to, amongst, and underneath the hydrocarbon-soaked rocks). I tend to think the kriging works OK, but if they are weighting stations that are very far away and the model is goofy, then there’s a problem. At least in our case it can make a difference.

    • http://en.wikipedia.org/wiki/UAH_satellite_temperature_dataset#Corrections_made

      “As the satellites’ orbits gradually decayed towards the earth the area from which they received radiances was reduced, introducing a false cooling trend.”
      I should have said improved instead of fixed.

    • There’s a simpler technique, I think. One can define the urban island to have a horizontal temperature gradient based on measurements and a model of some sort. The urban heat island effect can be massive: when I lived in Russia, my dacha was in Barvikha, quite far from downtown, and we had minima at least 3 to 4 °C lower than my friends who lived, say, near the US embassy. That Moscow heat island must be 20 to 30 km in radius.

  105. A fan of *MORE* discourse

    BREAKING NEWS

    Steven Goddard claims Arctic Sea Ice Continues To Recover (Sep 12, 2011)

    Oh wait … verifying … hmmmm, it appears that Steve Goddard has “404’d” his own research claims.

    Data asserts Arctic Sea Ice Decline Persists (as of 2014)

    Judith Curry, do your students ever express concern regarding Goddard’s dubious track-record of initially cherry-picking, then aggressively propagandizing, and subsequently burying, his “concerns” regarding climate-change trends … concerns that again-and-again have proved to be ill-founded and/or inchoate and/or conspiracy-centric?

    Conclusion  Brands of climate-change skepticism that offer poor role models for students are fated first to rejection, then to irrelevance, and finally to extinction. And deservedly so.

    That’s what thoughtful students (and thoughtful voters too) appreciate, eh Climate Etc readers?


  106. Steven Mosher (June 28, 2014 at 3:56 pm) says:

    it’s a calculation to give you an estimate.

    If you take 40000 raw records and want to create a global average you MUST calculate.

    The question is what calculations give you the best estimate.

    A simple Goddard-style average will NOT give you the best estimate because of sampling inhomogeneity.

    a simple average is the worst method.

    Just what are you trying to estimate?

    Go back to the beginning: thermodynamic considerations suggest that if the Earth loses less heat, all other things being equal, its “temperature” must rise until it’s losing as much as it gets. What “temperature”? Average surface temperature? Why, when changes to the temperature at different heights will also change the IR radiative profile? And when changes in absolute humidity will also change the IR transparency of the atmosphere? Leaving aside the major issue of clouds and their effect on albedo.

    The Earth’s actual effective temperature for IR radiation is determined by a vast number of factors, and AFAIK there’s never been a good justification for using any type of “average” of surface temperature as a proxy for it.
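
    To make the quoted point about sampling inhomogeneity concrete, here is a toy Python example (my own illustration, not any agency’s code): two stations with different climatologies and zero trend, one of which stops reporting halfway through. A plain average of absolute temperatures jumps spuriously when the cold station drops out, while averaging anomalies relative to each station’s own baseline does not.

      import numpy as np

      years = np.arange(1990, 2010)
      warm = np.full(len(years), 20.0)   # southern station, no trend
      cold = np.full(len(years), 0.0)    # northern station, no trend

      cold[years > 1999] = np.nan        # the cold station stops reporting after 1999

      # "Goddard-style" average of absolute temperatures: a spurious ~10 C step appears.
      absolute_avg = np.nanmean(np.vstack([warm, cold]), axis=0)

      # Anomaly method: subtract each station's own 1990-1999 baseline first.
      base = slice(0, 10)
      anom_avg = np.nanmean(np.vstack([warm - warm[base].mean(),
                                       cold - cold[base].mean()]), axis=0)

      print(absolute_avg)   # 10.0 ... then 20.0 once the cold station drops out
      print(anom_avg)       # 0.0 throughout: no spurious warming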

    • Steven Mosher

      of course there is a justification.
      the best one is that the proxy can be used to TEST MODELS and reject them. duh

      • Doesn’t that introduce a circularity? If models fail to find a correspondence between GAT and outgoing radiation (or delta same), won’t they be rejected? Given the prior expectation, that’s not science, it’s begging the question.

  107. Steven Mosher | June 28, 2014 at 11:19 pm |
    sunshine THERE IS NO ADJUSTING. read my lips.
    you create an EXPECTED READING. that’s a prediction.
    And no, you do not expect the difference between the raw and expected to be a 50/50 split. why? Because all the inhomogeneities introduce false cooling.

    So everyone did their records in the afternoon and had to be adjusted?
    Surely some did them in the morning, or at midnight, or at midday. Some may have made them up after a week. Some may have had poor eyesight and misread them as low. All are inhomogeneities, most are unprovable, but when you claim an unproven inhomogeneity is always positive you are stepping off line.
    And that is what you are doing by defining all adjustments for TOBS upwards. You are adjusting up some records that were taken at the right time and did not need adjusting, on a pure assumption. You are putting up records as fact when not all the records are reliable. Thermometers can under-read as well as over-read for all sorts of reasons [removes cold coke tin from keyboard, that’s better].
    Now Zeke says you do have to adjust the past temperatures, and that they do adjust the past readings every time there is an unexpected break.
    That is not a prediction. That is a postdiction, and it is a fact that contradicts your confident assertion totally.
    He does not say that these breaks always introduce false cooling. He leaves it open for downward adjustment when the break is sufficiently large in the right direction.
    Expecting better, please.

    • Steven Mosher

      “So every one did their records in the afternoon and had to be adjusted?”

      On the whole the moves in TOB introduce a false cooling.

      This is provable by comparing the USHCN stations with HOURLY STATIONS close by.

      That is how the correction algorithm was created and verified. twice.
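
      For anyone wondering how comparing against hourly stations demonstrates the effect, here is a rough, self-contained Python simulation (my own sketch, not the actual NOAA algorithm): generate synthetic hourly temperatures, then mimic a min-max thermometer that is read and reset once a day. Resetting in the late afternoon lets one hot afternoon set the maximum of two successive observation days, so the afternoon observer runs warm relative to calendar-day values, while a morning observer runs cool. A station that switched from afternoon to morning observation therefore shows an artificial cooling step unless it is corrected.

        import numpy as np

        rng = np.random.default_rng(0)
        ndays = 3650
        hours = np.arange(ndays * 24)

        # Synthetic hourly series: diurnal cycle peaking near 15:00 plus day-to-day weather noise.
        diurnal = 8.0 * np.cos(2 * np.pi * (hours % 24 - 15) / 24)
        weather = np.repeat(rng.normal(15.0, 5.0, ndays), 24)
        hourly = diurnal + weather

        def mean_with_reset(temps, reset_hour):
            # Daily (Tmax + Tmin)/2 as recorded by a min-max thermometer reset at reset_hour.
            t = temps[reset_hour:reset_hour + (ndays - 1) * 24]
            days = t.reshape(-1, 24)
            return ((days.max(axis=1) + days.min(axis=1)) / 2).mean()

        cal = hourly.reshape(-1, 24)
        true_mean = ((cal.max(axis=1) + cal.min(axis=1)) / 2).mean()   # midnight (calendar-day) reference

        print(mean_with_reset(hourly, 17) - true_mean)   # afternoon observer: biased warm
        print(mean_with_reset(hourly, 7) - true_mean)    # morning observer: biased cool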

  108. Wow! Get a load of this!
    From the article:
    A fractured Supreme Court on Monday largely upheld the Environmental Protection Agency’s radical rule designed to shut down the power plants that produce the most affordable electricity. The justices continue to accept the EPA’s labeling of carbon dioxide as a “pollutant.” This harmless gas, the agency insists, is melting the planet.

    Only the brave deny man’s responsibility for super-heating the globe in precincts where the wise and wonderful (just ask them) gather to reassure each other that they know best. “We know the trends,” President Obama told the graduates at the University of California at Irvine the other day. “The 18 warmest years on record have all happened since you graduates were born.”

    The charts and graphs devised by NASA and the government’s other science agencies back up the president’s words. And well they should, because the charts, like the “science,” were faked.

    The “Steven Goddard Real Science” blog compares the raw U.S. temperature records from the Energy Department’s United States Historical Climatology Network to the “final” processed figures, to demonstrate how the historical data have been “corrected,” using computer modeling.

    The modifications made to the past temperature record had the effect of cooling the 20th century, which makes temperatures over the last 14 years appear much warmer by comparison. Such changes don’t square with history, which shows the decade of the 1930s the hottest on record. The Dust Bowl storms were so severe they sent clouds of debris from Texas and Oklahoma to the East Coast, even darkening the skies over the U.S. Capitol one day in 1934.

    http://www.washingtontimes.com/news/2014/jun/23/editorial-rigged-science/

    • Oops! I missed that in the main post :(

    • Data? The rent seekers and true believers don’t need no stinking data?
      They have their models, and they will make darn certain the models produce the pre-ordained results.

  109. Another self-administered black-eye for the ‘skeptics’.

  110. It’ll be interesting if this finding stands.
    From the article:
    SINGAPORE: Covering some 130 countries and territories around the Equator and situated between the Tropic of Cancer in the north and the Tropic of Capricorn in the south — the tropics is expanding as climate change heats up the earth, turning more and more countries into a hot zone.

    http://www.channelnewsasia.com/news/lifestyle/hotter-and-larger-tropics/1219108.html

  111. Fareed Zakaria is doing a piece on “climate change” on CNN.

  112. Goddard being wrong does not make the status quo correct.
    Allowing the AGW fanatics and rent seekers to control the agenda only means the vast waste of resources that the CO2 obsessed demand to promote their obsession continues longer. And that real solutions for real problems will continue to go wanting.

    • If skeptics can’t get their facts straight on the surface temperature records then what hope do they have understanding the complexities of modelling, paleo-research or sea level rise?

      I think it wise to take what skeptics say about science with a pinch of salt!

      • I think it wise to take what skeptics say about science with a pinch of salt!

        Good scientists take what everyone says about science with a pinch of salt.

  113. Quite an interesting and excellent morning read. I’m amazed that station data were estimated when the data existed, and that the folks keeping the data seemed surprised by the fact. Is the quality control really that poor?
    I can find no justification for continuous adjustment of historical data based on current data. Maybe the data moguls who make these adjustments or calculate “expected” values should certify them under a paragraph that starts “I certify under penalty of law,” as you do with environmental data and reports. My guess is that they might be a bit more circumspect.

  114. May I offer up a suggestion with regard to a priori decisions as to when a station is good: Steve and Zeke have all the CUS stations where they have assigned breaks due to moves or thermometer updates. It is axiomatic that when stations are moved or updated they are at their ‘best’.
    Look at the Tmax of all station segments that are longer than a decade. Prepare an absolute, monthly, Tmin contour field, adjusting only for height, using the first 30 months of each segment.
    Then do the same thing using 60 months, and then 120 months.
    If using longer segments produces a warming effect, then it is possible that station moves are linked to urban encroachment.

  115. Pingback: Comment threads about global warming show the American mind at work, like a reality-TV horror show | Fabius Maximus

  116. Pingback: If All You See… » Pirate's Cove

  117. Pingback: NOAA’s temperature control knob for the past, the present, and maybe the future – July 1936 now hottest month again | Watts Up With That?

  118. Ah, such an enormous fuss about a piddling 0.6 K rise per century that is most likely entirely natural. Money well spent?

  119. Pingback: What is happening with USHCN temperature data? | The right-wing liberal

  120. Pingback: What is happening with USHCN temperature data? | Virginia Virtucon

  121. I just posted a comment along with an animated gif comparing 8 stations, raw vs adjusted, in the GHCN database. Some may like to take a look.

    http://judithcurry.com/2014/06/29/open-thread-12/#comment-602489

  122. Pingback: Weekly Climate and Energy News Roundup | Watts Up With That?

  123. Steve Goddard sent me an email saying for some reason he can’t comment here. He has a new post on this topic at

    http://stevengoddard.wordpress.com/2014/06/30/infilling-is-massively-corrupting-the-us-temperature-record/

    • It must be possible to login here, Judy:

      Even I can do it, and WP gives me problems.

    • One of the criticisms coming from the BEST crew (Mosher, Stokes, Zeke, Brandon) is that, because of station dropout at lower latitudes, you can’t just graph all the USA data.

      Gridding would solve that. And it makes little difference.

      Gridding the data first changes trends slightly if you use a 1×1 Lat/Long grid.

      For example, TMax 1998 to 2013 – Month of December only

      Gridded raw = -1.14 °C/decade and not gridded = -1.1 °C/decade

      Gridded tob = -1.09 and not gridded = -1.05

      Gridded Final = -0.84 and not gridded = -0.78

      However, the ratio from raw to tobs to final barely changes by gridding.

      • What!? I’m not sure I’ve ever made that criticism, but even if I had, there is no way anyone could possibly consider me part of “the BEST crew.” I’ve criticized BEST on many occasions, and I’m not sure I’ve ever said a good word about it.

        What a strange first comment to see in my RSS reader upon waking up.

      • Sunshine: Just curious. Can you describe your gridding? People on this blog use that term so casually. And just what is USA data? It’s still a long way to Tipperary.

        Brandon: BEST fledgling! Birthday present? heh, heh.

      • Brandon: I’ve got to add, “I can envision you and Mosher on a three-week cross-country road trip in a PT-Cruiser to promote BEST!”

      • I said I used a 1×1 grid.

        So I took the floor of the latitude and longitude and added .5 to each (so I can map it better) and then averaged all the stations by the new Lat/Long.

        USA data = USHCN Tmax from this datafile: v2.5.0.20140627

        and these extensions: “raw”,”tob”,”FLs.52i”
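
        In case it helps anyone reproduce this, here is a minimal sketch of that gridding step in Python/pandas (the column names and toy numbers are mine, and there is no cos(latitude) area weighting here):

          import numpy as np
          import pandas as pd

          def grid_average(df):
              # 1x1 degree gridding as described above: floor lat/lon, centre the cell at +0.5,
              # average the stations within each cell, then average the cells.
              df = df.assign(glat=np.floor(df["lat"]) + 0.5,
                             glon=np.floor(df["lon"]) + 0.5)
              cells = df.groupby(["glat", "glon"])["anomaly"].mean()
              return cells.mean()

          stations = pd.DataFrame({
              "lat": [40.2, 40.7, 40.9, 33.1],
              "lon": [-105.3, -105.8, -105.1, -97.4],
              "anomaly": [0.5, 0.7, 0.6, 1.4],
          })
          print(stations["anomaly"].mean())   # plain station average: 0.80, dominated by the cluster
          print(grid_average(stations))       # gridded average: 1.00, one vote per 1x1 cell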

    • Brandon: Would you prefer Blackboard Crew or Goddard Haters or something different?

      • Given there’s basically no association between me and anyone else you mentioned, I’d prefer you just not group me with them. Short of discussing climate related matters, I’m not sure what similarities there are supposed to be between the people you listed.

    • Interesting that Goddard “can’t” post here. It’s a good thing actually, and banning him from WUWT was one of the smartest things Watts ever did.

    • Must have an app that blocks him from posting on a site that is remotely scientific. No problem for him on his own site though.

  124. Knight crickets:

  125. Pingback: Stevengoddardista, oikaisuista, virheistä ja zombeista | Roskasaitti

  126. Upthread, I said:

    In my experience, skeptics as a whole aren’t self-correcting. They are every bit as guilty of willful blindness as anybody else. They just like to claim otherwise. There are a handful of exceptions, but by and large, their reaction to any criticism depends entirely upon who and what is being criticized.

    In what is a remarkable coincidence, a day or so later, I got censored at WUWT for the first time. Anthony Watts had written a post defending Steven Goddard, saying a Politifact article was wrong in its criticisms of him. I disagreed, saying it appeared Watts was simply misrepresenting the article.

    This led to a disagreement. When I continued to insist upon this point, providing quotes to back up my argument and pointing out Watts was ignoring them, he made a petty response in a moderation note then told me I wouldn’t be allowed to talk about the issue any further. He then deleted my next comment which highlighted the absurdity. When I talked about this on Twitter, he made things up about what I had said to justify his actions.

    Watts has repeatedly had me as a guest author,* even as recently as last week. In all this time, I’ve publicly disagreed with him once. The one time I did disagree, he wound up censoring me. I think that’s fascinating. I wrote a post about it which provides the details. Read it if you want.

    *I actually haven’t submitted a post to him in over a year. I sometimes e-mail people to alert them to posts I’ve written. Somehow that gets mine labeled as guest posts instead of reposts. I never did figure out why.

  127. Whatever the result of all of today’s study of the numbers, the way the calculations are done, and what errors were made, the thing you are all missing is that the fundamental ongoing issue is data quality! Assuming we debate and eventually agree on the correct methodology for handling the data in terms of computing averages, etc., the fact remains that every day, as new data are entered and things change (however those changes come about and for whatever reasons), if you are depending on those numbers for serious work you need tools to ensure data quality.

    What does that mean? It means that NOAA and other reporting agencies should add new statistics and tools when they report their data. They should tell us things like (a couple of these are sketched in code after the list):

    a) number of infilled data points and changes in infilled data points
    b) percentage of infilled vs real data
    c) changes in averages because of infilling
    d) areas where adjustments have resulted in significant changes
    e) areas where there are significant number of anomalous readings
    f) measures of the number of anomalous readings reported
    g) correlation of news stories to reported results in specific regions
    h) the average size of corrections and direction
    i) the number of the various kinds of adjustments, and a comparison of these numbers with previous periods.
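
    For instance, items (a) through (c) are easy to compute once the monthly file is parsed. A rough Python sketch, assuming you have already read the USHCN monthly values into a table with columns station, year, month, value and a one-character flag column in which “E” marks an estimated/infilled value, as in the posts discussed above:

      import pandas as pd

      def infill_report(df):
          # df columns assumed: station, year, month, value, dmflag ("E" = estimated/infilled)
          est = df["dmflag"] == "E"
          by_year = df.groupby("year")["dmflag"].apply(lambda s: 100.0 * (s == "E").mean())
          return {
              "pct_infilled": 100.0 * est.mean(),              # item (b): share of infilled vs real data
              "pct_infilled_by_year": by_year,                 # item (a): watch this drift over time
              "mean_with_infill": df["value"].mean(),          # item (c): effect of infilling on the average
              "mean_without_infill": df.loc[~est, "value"].mean(),
          }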

    What I am saying has to do with this constant doubt, which plagues me and others, that the data are being manipulated, purposely or accidentally, too frequently. We need to know this, but the agency itself NEEDS to know this, because how can they be certain of their results without such data? They could be fooling themselves. There could be a mole in the organization futzing with the data or doing mischief. Even if they don’t believe there is anything wrong and everything is perfect, they should do this because outside folks who doubt them will continue to be suspicious of their data.

    This is standard procedure in the financial industry, where data means money. If we see a number that jumps by a higher percentage than expected, we have automated and manual ways of checking. We will check news stories to see if the data make sense. We can cross-correlate data with other data to see if they make sense. Maybe these data are not worth billions of dollars, but if these agencies want to look clean and put some semblance of transparency into this so they can be removed from the debate (which I hope they would want), then they should institute data quality procedures like those I’ve described.

    Further, of course, we need a full vetting of all the methods they use for adjusting data, so that everyone understands the methods and parameters used and can analyze and debate the efficacy of these methods. The data quality statistics can then help ensure those methods are being applied correctly. Then the debate can move on from all of this constant doubt.

    As someone has pointed out, if the amount of adjustment is large, either in magnitude or in the number of adjustments, that reduces confidence in the data. Calculated data CANNOT improve the quality of the data or their accuracy. If the amount of raw data declines, then the certainty declines, all else being equal. The point is that knowing the size and number of the adjustments helps to define the certainty of the results. If 30% of the data are calculated, that is a serious problem. If the magnitude of the adjustments is on the order of the total variation, that is a problem. We also need to understand how accurate the adjustments we are making are. We need continuing statistical validation (not proof just once, but continuing proof over time that our adjustments make sense and are accurate).

    In academia we have people to validate papers, and rigor is applied, to an extent, to a particular static paper at a particular time. However, when you are in business applying something repeatedly, where data are coming in continuously and we have to depend on things working, we have learned that what works and seems good in academia may be insufficient. I have seen egregious errors by these agencies over the years. I don’t think they can take many more hits to their credibility.

  128. José Tomás

    Dr. Curry, could you please explain why – after saying here that Goddard may have a point and pointing everybody to Watts’ discussion and eventual confirmation of it – you tweeted that his (Goddard’s) analysis was “bogus”, and that without any further explanation?

    Sorry, but this does not seem very nice, nor coherent, nor illuminating.

    Thanks.

    • Goddard’s actual analysis (including averaging, etc.) has been shown to be highly problematic. His point about ‘estimated data’ and zombie stations is well taken (that was tabulation rather than mathematical analysis). It is not very easy to convey complex points on Twitter.

      • David Wojick

        Problematic (controversial?) and bogus are two very different concepts. EPA’s claims regarding the per-ton cash value of future damages done by CO2 emissions are bogus. Goddard’s analysis may be wrong, but it is not bogus.

  129. José Tomás

    Dr. Curry, that is precisely the problem.

    While this post of yours is written in factual, “unadjectivated” prose, your tweet came across as a definitive rejection of all things Goddard, and in a very brutal fashion too. “Bogus” is a very strong word, and you are surely aware that most people will read it and infer that “the sentence of Science has been pronounced”. Few of those readers will come here to read your calm and reflective text.

    OTOH, those for whom “the science is settled” will trumpet (actually, they already are trumpeting) your tweet as “proof” that Goddard is a good-for-nothing nut. From what you said above, I would conclude just the opposite: that Goddard raised a very important issue, even though some of his analysis may be the object of further controversy.

    Watts himself also used harsh words to refer to Goddard, only to be forced to retract later and concede that Goddard was right as a whole.

    I have a deep respect for you, and it is out of this respect that I urge you to consider clarifying your position on Twitter, since it is clearly at odds with your attitude here, and is being used for ends surely different from yours.

    As a last – perhaps minor – issue, your tweet was an answer to a person who was exposed in a later post at WUWT as an “unscrupulous [person who] bring[s] [Goddard’s] family into [the discussion]”. That you chose to dissociate yourself from Goddard by associating yourself (by deference) with this “unscrupulous person” does not look very good on you.

    Take this comment as a contribution to the subject of “Sociology of the technical skeptical blogosphere”, please :)

    • point taken, thx

    • José Tomás

      Actually, an apology to Goddard on Twitter for your use of the “B-word” would be nice, chivalrous, proper and completely in accordance with the image that I – and I am sure many others – have of you. And it would restore balance to this – so far – very ungallant whole affair.

  130. Pingback: NCDC responds to concerns about surface temperature data set | Climate Etc.

  131. Pingback: More Globaloney from NOAA - Page 2 - US Message Board - Political Discussion Forum

  132. “Last week, the mainstream media was abuzz with claims by skeptical blogger Steve Goddard…

    For examples of MSM coverage, see: Telegraph…Washington Times…RealClearPolitics…Further, this story was carried as the lead story on Drudge for a day.”

    Has nothing been done to alert The Peshawar Frontier Post, The Jonestown Executive intelligence Report and The Woolagong Times?

  133. Pingback: NetRight Daily» Obamacare's part time economy

  134. Chiefio’s blog on Australia is quite interesting.
    He’s been analysing many weather stations all over Australia, state by state, for many years.
    Well worth reading.

  135. C J Orach also has a good blog site that explains the rise and fall of cheap, abundant energy – before and after WWII.

    http://orach24463.wordpress.com/2014/07/16/the-key-solution-to-the-worlds-problems-is-cheap-energy/#comments

  136. FOOD, FUEL & ENERGY POVERTY

    Food, fuel and energy are different forms of the same commodity, as explained on the C J Orach site.

    We humbly accept reality, or
    We arrogantly enter insanity.
    Pain is the price of recovery.
