Watts et al.: Temperature station siting matters

by Judith Curry

30-year temperature trends are shown to be lower when computed from well-sited, high-quality NOAA weather stations that require no adjustments to the data.

Anthony Watts has presented an important analysis of U.S. surface temperatures, co-authored by John Nielsen-Gammon and John Christy.  Here is the link to the AGU press release.  Watts has a more extensive post [here].  Excerpts:

SAN FRANCISCO, CA – A new study about the surface temperature record, presented at the 2015 Fall Meeting of the American Geophysical Union, suggests that the 30-year temperature trend for the Continental United States (CONUS) since 1979 is about two thirds as strong as official NOAA temperature trends.

Using NOAA’s U.S. Historical Climatology Network, which comprises 1218 weather stations in the CONUS, the researchers were able to identify a 410-station subset of “unperturbed” stations that have not been moved and have had no equipment changes or changes in time of observation, and thus require no “adjustments” to their temperature record to account for these problems. The study focuses on finding trend differences between well sited and poorly sited weather stations, using a WMO-approved metric that classifies and assesses measurement quality based on proximity to artificial heat sources and heat sinks which affect temperature measurement.

It follows up on a paper published by the authors in 2010, Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends, which concluded:

Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends

A 410-station subset of U.S. Historical Climatology Network (version 2.5) stations is identified that experienced no changes in time of observation or station moves during the 1979-2008 period. These stations are classified based on proximity to artificial surfaces, buildings, and other such objects with unnatural thermal mass using guidelines established by Leroy (2010). The United States temperature trends estimated from the relatively few stations in the classes with minimal artificial impact are found to be collectively about 2/3 as large as US trends estimated in the classes with greater expected artificial impact. The trend differences are largest for minimum temperatures and are statistically significant even at the regional scale and across different types of instrumentation and degrees of urbanization. The homogeneity adjustments applied by the National Centers for Environmental Information (formerly the National Climatic Data Center) greatly reduce those differences but produce trends that are more consistent with the stations with greater expected artificial impact. Trend differences are not found during the 1999-2008 sub-period of relatively stable temperatures, suggesting that the observed differences are caused by a physical mechanism that is directly or indirectly caused by changing temperatures.

Key findings:

1. Comprehensive and detailed evaluation of station metadata, on-site station photography, satellite and aerial imaging, street level Google Earth imagery, and curator interviews have yielded a well-distributed 410 station subset of the 1218 station USHCN network that is unperturbed by Time of Observation changes, station moves, or rating changes, and a complete or mostly complete 30-year dataset. It must be emphasized that the perturbed stations dropped from the USHCN set show significantly lower trends than those retained in the sample, both for well and poorly sited station sets.

2. Bias at the microsite level (the immediate environment of the sensor) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend. Well sited stations show significantly less warming from 1979 to 2008. These differences are significant in Tmean, and most pronounced in the minimum temperature data (Tmin). (Figure 3 and Table 1)

3. Equipment bias (CRS v. MMTS stations) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend when CRS stations are compared with MMTS stations. MMTS stations show significantly less warming than CRS stations from 1979 to 2008. (Table 1) These differences are significant in Tmean (even after upward adjustment for MMTS conversion) and most pronounced in the maximum temperature data (Tmax).

4. The 30-year Tmean trend of unperturbed, well sited stations is significantly lower than the Tmean trend of the official NOAA/NCDC adjusted, homogenized surface temperature record for all 1218 USHCN stations.

5. We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.

6. The data suggest that the divergence between well and poorly sited stations is gradual, not the result of a spurious step change due to poor metadata.
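The core computation behind findings 2 through 6 is an ordinary least-squares trend fitted to each station class's temperature series, expressed in degrees C per decade. A minimal sketch of that comparison, using synthetic monthly anomaly series in place of the real USHCN data (the noise level and injected trend magnitudes here are illustrative assumptions, not the study's actual data):

```python
import numpy as np

def trend_per_decade(monthly_anomalies):
    """OLS slope of a monthly temperature-anomaly series, in deg C per decade."""
    t = np.arange(len(monthly_anomalies)) / 120.0  # time in decades (120 months each)
    slope, _intercept = np.polyfit(t, monthly_anomalies, 1)
    return slope

# Synthetic stand-ins for class-mean series, 1979-2008 (30 years x 12 months)
rng = np.random.default_rng(0)
months = 30 * 12
t = np.arange(months) / 120.0
well_sited = 0.20 * t + rng.normal(0.0, 0.3, months)    # ~0.20 C/decade plus noise
poorly_sited = 0.32 * t + rng.normal(0.0, 0.3, months)  # ~0.32 C/decade plus noise

print(f"well sited:   {trend_per_decade(well_sited):.2f} C/decade")
print(f"poorly sited: {trend_per_decade(poorly_sited):.2f} C/decade")
```

The recovered slopes land close to the injected 0.20 and 0.32 C/decade; the study's comparison is the same idea applied per siting class, with the added real-world complications of anomaly baselines and spatial averaging.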

Lead author Anthony Watts said of the study: “The majority of weather stations used by NOAA to detect climate change temperature signal have been compromised by encroachment of artificial surfaces like concrete, asphalt, and heat sources like air conditioner exhausts. This study demonstrates conclusively that this issue affects temperature trend and that NOAA’s methods are not correcting for this problem, resulting in an inflated temperature trend. It suggests that the trend for U.S. temperature will need to be corrected.” He added: “We also see evidence of this same sort of siting problem around the world at many other official weather stations, suggesting that the same upward bias on trend also manifests itself in the global temperature record”.

The full AGU presentation can be downloaded [here]. 

JC reflections

This looks like a solid study.  The participation of John Nielsen-Gammon in this study is particularly noteworthy; Watts writes:

Dr. John Nielsen-Gammon, the state climatologist of Texas, has done all the statistical significance analysis and his opinion is reflected in this statement from the introduction

Dr. Nielsen-Gammon has been our worst critic from the get-go; he’s independently reproduced the station ratings with the help of his students, and created his own series of tests on the data and methods. It is worth noting that this is his statement:

The trend differences are largest for minimum temperatures and are statistically significant even at the regional scale and across different types of instrumentation and degrees of urbanization.

The p-values from Dr. Nielsen-Gammon’s statistical significance analysis are well below 0.05 (the 95% confidence level), and many comparisons are below 0.01 (the 99% confidence level). He’s on board with the findings after satisfying himself that we have indeed found a ground truth. If anyone doubts his input to this study, they should view his publication record.
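The post doesn't say which test produced these p-values. As a hedged illustration of how a comparison of per-station trend estimates might be tested, here is a Welch's t-test sketch with synthetic numbers (the group means, spreads, and sample sizes are assumptions loosely mimicking figures quoted elsewhere in the post, not the study's data):

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for the difference in means of two samples."""
    var_a = a.var(ddof=1) / len(a)
    var_b = b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(var_a + var_b)

rng = np.random.default_rng(42)
# Hypothetical per-station 30-year trend estimates, deg C/decade
well_sited_trends = rng.normal(0.20, 0.08, 90)     # e.g. Class 1/2 stations
poorly_sited_trends = rng.normal(0.32, 0.08, 320)  # e.g. Class 3/4/5 stations

t_stat = welch_t(poorly_sited_trends, well_sited_trends)
# For samples this large, |t| > 2.58 corresponds to p < 0.01
# (two-sided, normal approximation to the t distribution)
print(f"t = {t_stat:.1f}, significant at 99%: {abs(t_stat) > 2.58}")
```

With a between-group difference this large relative to the spread, the statistic is far out in the tail, which is the shape of result the quoted p-values describe.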

This paper has been a long process for Anthony, but it appears to have produced a robust and important analysis.

The extension of this analysis globally is important to build confidence in the land surface temperature records.

It will certainly be interesting to see how the various groups producing global surface temperature analyses respond to the study.


885 responses to “Watts et al.: Temperature station siting matters”


  2. They will embrace it like they would a porcupine.

  3. I want to see whether BEST can replicate it.

    • I asked for the data back in July of 2012

Steve McIntyre commented

      “Steve: I agree that there is little point circulating a paper without replicable data – even though this unfortunately remains a common practice in climate science. It’s not what I would have done. I’ve expressed my view on this to Anthony and am hopeful that this gets sorted out. Making the data set publicly available for statistically oriented analysts seems far more consistent with the crowdsourcing philosophy that Anthony’s successfully employed in getting the surveys done than hoarding the data like Lonnie Thompson or a real_climate_scientist.

      It would have been nice if you’d spoken out on any of the occasions in which I’ve been refused data. You are entitled to criticize Anthony on this point, but it does seem opportunistic if you don’t also criticize Lonnie Thompson or David Karoly etc.”

      • From the beginning of the SurfaceStations project all of the individual station surveys with documenting photographs were publicly available online at the project website (although they’re currently offline because of server issues). Temperature series were available from other sources. Anybody at any time could have analyzed them too while this project was in process. It wasn’t a secret.

      • Gary, telling other people that they can do their own research is no longer considered sufficient. The right way to do it is a turnkey R script (or the equivalent) so that anyone can immediately duplicate one’s results exactly. There is really no conceivable excuse for not doing that. I spent too much time watching McIntyre reverse-engineer Mann’s results – for years! – because whenever McIntyre would produce a result different from Mann’s, Mann would respond that he had probably done it wrong…

      • I asked for the data back in July of 2012

        And it still isn’t available.

        That was kinda my point, Steven. Until they publish their code and data, and somebody who can be trusted to make a good-faith effort to replicate their results has done so, I’m just as skeptical as I am of Mann’s papers.

But I’m also interested to see what BEST will do with their (Watts et al.‘s) list using their own methods, once they’ve replicated the results, so we all know BEST is starting with exactly the same thing they started with.

      • AK
The problem is a bit deeper, since they did use adjustments. And the whole station rating system has never been properly field tested. That is why I pushed for a data paper first: publish the ratings first so we can assess that interpretation of the metadata.
        But people have taken the Leroy stuff at face value.
        When we asked for backup from Leroy the answer came back that there was no solid objective field test data. There was some small amount of testing done, reported at Lucia years ago.

      • David L. Hagen

        Steven Mosher
        Watts responds:

        Sorry Mosh, no can do until publication. After trusting people with our data prior to publication and being usurped, not once but twice, I’m just not going to make that mistake a third time.

Take the data access issue up with Richard Muller, who breached his confidentiality agreement with Anthony Watts. Twice burnt, Watts is thrice shy.

      • Michael Aarrh,
You miss the point. While the study was IN PROGRESS, the station reports were available for anyone to examine any way they liked. There was NOTHING ELSE to release until Anthony et al. finished the analysis. That will be forthcoming when official publication is assured. He’s learned from experience that early release only causes harm, not progress.

      • Sorry David that was not the promise. Every promise made was kept.

      • David Springer

        The weasels weaseled out of their promises by rationalizing. Weasels is what weasels does… depends on what the definition of “is” is. Mosher knows the drill.

      • David Springer: “Weasels is what weasels does…

        Are you sure you don’t mean stoats?

      • Until they publish their code and data, and somebody who can be trusted to make a good-faith effort to replicate their results has done so, I’m just as skeptical as I am of Mann’s papers.

        I rootin’-tootin’ agree.

        (Key word: “Until”.)

        You can look at what we measure and discuss our basic methods here and now (QED). But you cannot run the numbers to see if we did our sums right (or check our ratings to see if they were done right) until we release the data itself.

      • Sorry David that was not the promise. Every promise made was kept.

        Mosh, let us both grant each other a little leeway in this. And let us understand each other. For in many ways, we are much alike, you and I. Or so I’d like to think.

We are both insiders who made it in from the outside; we have both faced the hazing of peer review, and stood it well. Both of us vary between loquacious and two-word terse. We both regard the scientific method in a more sincere, almost childlike way than many of the old, cynical hands; we look at it almost in awe, more seriously than most hardened veterans.

And we both suffer the analogue of the attitude faced by British army officers raised from the ranks: we didn’t purchase our commissions like proper gentlemen, and are not considered proper officers by our peers, or even by those merely serving in the ranks. You could substitute either of our names in the taunts we both so often hear. When I hear them doing it to you, it is as if they are doing it to me. Just switch names, add water, rinse, repeat.

Yet we are hard and we are proud; we got where we are by dint of merit, direct thought, and much expended elbow grease. We both owe (and keep) loyalty to specific others.

        Point here, really, is that Anthony released data in two cases (the first in a disastrous round with NOAA). Both times, those who got it said they abided by what was agreed, and maybe they did, too; I won’t presume to judge that negatively (having neither the data nor the inclination).

        Yet the end result was, regardless of any fault or lack of fault, that we greatly regretted it both times. So we want to wait until publication. We will not take long. From my end, at least, there is nothing personal in this.

You will get your data, and you will get it in a flexible, malleable format that can be tested down to the last detail and replicated in any way you see fit. We are nearing the end of a long, hard slog. I have spent thousands of hours on this. So have others. There is just a little more time to wait. Please don’t believe you won’t get this material. I fully agree that no one can pass even intermediate judgment until we release the Full and Complete data.

I want to see whether BEST can replicate it.

      They will have to alter their method to account for siting bias, but that should not be an insurmountable problem.

      P.S., Mosh, please be just a little patient with us. We are in the final stages of completion and the data will be available sooner rather than later.

    • Leroy the answer came back that there was no solid objective field test data. There was some small amount of testing done

Well, Leroy is only looking at offsets. The only gold speck in there is that he puts both Class 1 and 2 offsets at zero, so we can effectively combine the two.

      But to put it bluntly, Leroy (2010) is a wonderful tool but it is also a bit of a meataxe. It is a bit of a work in progress itself. It’s the best practical tool out there, but I assume there will be improvements. I could suggest a few.

      But it does enable us to demonstrate what happens when there is a nearby heat sink by allowing us to rate and filter.

      More study needed all around.

  4. The National Weather Service COOP network has been underfunded for many decades and this trend shows no signs of change. Of the 410 “unperturbed” stations it may “perturb” many researchers in 10 to 20 years when there are only about half that number providing data.

    Congress, are you listening? We must invest in our climate observation infrastructure or we will end up using models for observations.

How much impact would this have on the various global temperature estimates? It seems that a large number of land-based temperature measurements are sited in the US, but I don’t know if the weighting adjustments would tend to ‘wash out’ this error from impacting the global estimate.

    • AW et al say : “We also see evidence of this same sort of siting problem around the world at many other official weather stations, suggesting that the same upward bias on trend also manifests itself in the global temperature record”.

      • Did they publish their evidence? A Journal will require them to.

They will work that out with the journal that publishes the paper, nicky. But you are welcome to engage in pre-publication sniping, if it floats your boat.

      • Nick, not sure that a competent journal or reviewer would disallow a comment that they had observed similar issues at other stations around the world. Especially if they have a few photos to back it up. You think someone has to pre-publish a separate paper to justify every single sentence in a paper they are trying to publish? And this was a comment on a blog. Will they have to publish a paper proving everything they ever said on a blog in order to get a different paper published even if that blog comment is not in the paper? Grow up.

    • Note that “pre-publication sniping” has provided valid criticism that is indispensable to our work. And we are both appreciative and grateful for it. That is why Anthony (most wisely) pre-released in 2012.

      • evan

        In hindsight it would be useful if climate science generally had many more ‘pre releases’ so constructive criticism can take place. Otherwise material that might be highly contentious is presented as fact.

Judith’s ‘uncertainty’ monster is a small creature compared to his very big brother, the ‘speculative’ monster.

        tonyb

      • That is NOT why he pre released in 2012

      • Mosher is correct. Here’s what Watts says now:

        I admit that the draft paper published in July 2012 was rushed, mainly because I believed that Dr. Richard Muller of BEST was going before congress again the next week using data I provided which he agreed to use only for publications, as a political tool. Fortunately, he didn’t appear on that panel. But, the feedback we got from that effort was invaluable.

      • Learn your words, evan. Sniping is not the same as constructive criticism.

      • My track record in getting data from climate scientists.. Nearly perfect.

        Getting data from skeptics.. Even sub samples… Even with a promise of nda???
        ZERO DATA

      • Your little anecdote on your personal experiences is supposed to tell us what about the general willingness of skeptics to publish their data, Steven?

      • Geoff Sherrington

        Steve osher,
        My problem is not that I cannot get data.
        My problem is that I have heaps of Australian data, but nobody wants it – though I am corresponding finally with WUWT moderators.
        For one example, I selected 44 of the most pristine sites I could find here, did daily data on Tmax and Tmin, established LLS trends in degC per century. Study period was similar to AW’s 1972 to 2006.
One conclusion is that some sites have intuitively improbable trends, like in excess of 4 deg C/century extrapolated.
        Another conclusion is that the more pristine the site, to use the word broadly, the poorer is the data quality. These screens with thermometers were introduced for standardisation and noise reduction but in hindsight they are pretty useless at those tasks for the present purpose. They need frequent manual babysitting to keep the lipstick on the pigs.
        http://WWW.Geoff stuff.com/pristine_feb_2015.xls
        This is unpublished because it does not significantly add to knowledge. Many researchers already know that this type of data has huge errors.
        Not much point in taking it forward unless you are comparing Australian land records with those of other countries.
        For fun and to illustrate noise, I regressed each temperature trend against each station WMO number. There was an effect.
        So, why not have a look at it and modify your assertion that you are starved for timely data. This study has been around since 2009 IIRC.

      • Geoff Sherrington

        Bloody automatic intervention with my accurate typing.
        That is Steven Mosher not as appears above, nor the Kosher that it changes to.
        Also link is without the capital G inserted.
        http://www.geoffstuff.com/pristine_feb_2015.xls

      • Mosh is only partly correct. The main (and by far the most important) reason we pre-released was the same as BEST: to obtain badly needed hostile review. I don’t think we would be here without it.

You are not interpreting Anthony’s words correctly. We had already decided to pre-release; it was only the timing that was rushed, not the decision itself. I was there, so I know.

        I read so much speculation about these things. You can just ask. We made the decision to release to get feedback to brush things up before review. The timing was done as Anthony described. The proof is in the pudding — we went to great efforts to address the criticisms.

There was more brushing up than any of us thought. That accounts for the delay.

      • Geoff regarding that xls of “pristine sites”, I just looked at one, Willis Island, and it seems to have undergone significant changes over the period in your spreadsheet.
        http://www.austehc.unimelb.edu.au/fam/0616.html
        Note change to AWS just as the temperature takes off.
        http://www.austehc.unimelb.edu.au/fam/0612.html points out that the island had trees in 1947.
        The meteorology buildings have undergone significant expansion across the latter half of last century, note the earlier accounts pointing to a lack of freezers, fresh meat etc. No longer true. Plus the island is part of a P&O liner drive-by for the purposes of duty-free shopping – no details what this means.

      • Mosher : Getting data from skeptics.. Even sub samples… Even with a promise of nda??? ZERO DATA

        How much data do they actually have to give though? Or to put it another way, how much government funding do they actually have at their disposal?

    • How much impact would this have on the various global temperature estimates?

      Good question. Well, the way I figure it, if the problem is typical throughout the GHCN, then I’d say it would reduce “official” global warming by maybe ~10% to 15% on the low side. And if we are on target regarding the CRS units, which I think likely, it could go up to 15% to 20%. Esp. since a lot of land stations project their radii out to sea.

      So that is the scale we are talking, here.

  6. “It will certainly be interesting to see how the various groups producing global surface temperature analyses respond to the study.”

    The warmists will apologize profusely for decades of fear-mongering, apologize for shameful ad hominem attacks, return fraudulently obtained grant monies, and initiate a movement to overturn the Paris accord.

    NOT!

  7. “The extension of this analysis globally is important to build confidence in the land surface temperature records.”

    It would appear that this should DESTROY confidence in the adjusted/homogenized/algorithmed land temperature record.

    Or, am I missing something?

    • You had confidence in the adjusted/homogenized/algorithmed land temperature record?

      Once they abandoned backward sensor compatibility the whole thing became a joke.

      Besides the temperature record is stew not milk. They shouldn’t be homogenizing it.

  8. “events conspire to set a fire with the methods we employ”

    https://www.youtube.com/watch?v=QohZXA8wvIA

    Pointman

  9. Thanks for posting this information. I salute the authors for this analysis.

  10. Can it get more insidious? “We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.”

They gave you a hint and wrote “we believe”. They present no evidence for this claim. After homogenization, the trends Watts et al. (2015) computed are nearly the same for all five siting categories, just as they were for Watts et al. (2012) and the published study Fall et al. Just like before, they did not study homogenization algorithms and thus cannot draw this conclusion.

That the trend after homogenization is larger in the USA was known (and is frequently shown as evidence that climatology has an agenda, ignoring that the net adjustment for the global temperature makes the warming smaller). This US increase is due to the time of observation bias and the transition to the MMTS. Watts et al. (2015) replicate this and also find that “raw” series with a “perturbation” show a smaller trend than the series where Watts et al. did not find evidence of a perturbation. Thus the perturbations cause a cooling bias.

      In this light, it is normal that the raw data from the category with the largest trend shows a trend that is nearer to the one of the homogenized data. Also the current version of the manuscript does not check whether there were really no perturbations in the stations put in the category “no perturbation” by comparing the series with the observations at neighboring stations. Thus there are likely still perturbations in this subset.

http://variable-variability.blogspot.com/2015/12/anthony-watts-agu2015-surface-stations.html

      • Victor Venema,

I believe the Earth’s surface has cooled from its initial molten state, to that of the present. Just a belief – or an assumption, if you prefer. I have no personal knowledge of the situation four and a half billion years ago. Pontificating about anything else in between demonstrates a reliance on faith, rather than fact.

        Maybe the Earth was created a millisecond ago, as it is. You have no proof to the contrary.

        Your opinions are worth precisely as much as mine. Would you not agree?

        Cheers,

      • Mike Flynn, in the light of your example, I unfortunately have to politely disagree.

      • Victor Venema,

        I see you believe your opinion is worth less than mine. I respect your beliefs, although I cannot understand why you feel that my opinion, based on the same facts, is superior to yours.

        Maybe you suffer from a lack of self esteem. You have my sympathy, if this be the case.

        Cheers.

by ‘homogenization’ you mean the processing of raw data to remove some part (hopefully, in large part) of the warming bias that is introduced by the majority of poorly sited stations due to the UHI effect?

      • Actually, Victor, I did a little sub analysis just on CRN1, just for GISS (because easier to access and display than NCEI). What they say about homogenization is generally correct. See my comment below for reference details. The published paper will have the stats my sample was too small to produce.

      • Victor Venema: “After homogenization…”

        You mean Mannipulation, right?

        In the cause of proving Mann-made Catastrophic Anthropogenic Global Warming?

After homogenization, the trends Watts et al. (2015) computed are nearly the same for all five siting categories, just as they were for Watts et al. (2012)

        Ah, my dear VeeV, indeed it is. Fancy that.

        And the way homogenization did that was to, on average, adjust the trends of the 22% minority of well sited stations upward to match the trends of the 78% majority of poorly sited stations.

Unperturbed Class 1\2 (1979-2008): 0.204C/decade.
Class 3\4\5: 0.318C/decade.
Homogenized Class 1\2: 0.336C/decade.
Entire USHCN (all 1218), homogenized: 0.324C/decade.
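Those figures are also where the press release's "about two thirds" comes from; the arithmetic is a one-line check (the numbers below are copied from the list above):

```python
# Trend figures from the list above, deg C/decade (1979-2008)
raw_well = 0.204    # unperturbed Class 1\2, raw
raw_poor = 0.318    # Class 3\4\5
homog_all = 0.324   # entire USHCN (all 1218 stations), homogenized

print(round(raw_well / raw_poor, 2))   # well-sited trend vs. poorly sited
print(round(raw_well / homog_all, 2))  # well-sited trend vs. official record
```

Both ratios come out in the 0.63 to 0.64 range, i.e. roughly two thirds.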

And that is how homogenization bombs: a systematic data error. This is a known thing — it’s right there on the bottle in fine print, between the disclaimer and the skull-and-crossbones.

And that’s what has occurred. Sticks out like a fish in a tree.

You could salvage the mess, VeeV. But you will have to either a.) use Class 1\2s as your homog-baseline or b.) apply a whopping downward adjustment to the non-compliant stations before you homogenize.

        That will remove the systematic error, and you can then proceed. All you are doing now is making a badly needed adjustment — in exactly the wrong direction.

        And the one who taught me most about how that all works is you.

      • But Mr. Jones, you showed yourself that the raw data in the USA has a cooling bias. Thus when this bias is removed the trend becomes larger.

In your “raw” data, the “unperturbed” subset has a trend in the mean temperature of 0.204°C per decade. In the “perturbed” subset the trend is only 0.126°C per decade. That is a whopping difference of about 0.2°C over this period. This confirms that in the USA the inhomogeneities (“perturbations”) cause a cooling bias.

        You are not seriously arguing that you showed homogenization to be wrong without studying how homogenization methods work, but only on the basis of two numbers looking similar?

      • David Springer

        With an as yet undetermined appendage Venema writes:

        “they did not study homogenization algorithms”

        The arrogance. It burns.

        Homogenization isn’t rocket science. It’s middle school science fair level work. It requires very little study. The institutional failure in academia that allowed you to somehow make a career out of such a simplistic area of study is the $64,000 question.

      • Why?

        Hmm. Let’s see … Because they are a stereotypical result of homogenization applied to a dataset containing a systematic error? Because there is an identified systematic error evident?


Pr Morel, a French climatologist and former head of LMD (Laboratoire de météorologie dynamique, the Dynamic Meteorology Laboratory), used to say that two thirds of temperature anomalies actually resulted from data correction.

        Well, we are finding it to be a third. Maybe that will become a half, once I have deconstructed the CRS mess.

Our CONUS Class 1\2 stations with most of the interval on MMTS equipment show 0.163C/decade. And if they had been MMTS for the whole stretch, it would almost certainly be even lower than that.

      • David Springer

        I’m beginning to wonder if Venema has a problem reading and writing in English. He’s patently ignoring these findings:

Unperturbed Class 1\2 (1979-2008): 0.204C/decade.
Class 3\4\5: 0.318C/decade.
Homogenized Class 1\2: 0.336C/decade.
Entire USHCN (all 1218), homogenized: 0.324C/decade.

This is clear, indisputable evidence that unperturbed class 1 & 2 (well-sited) stations, needing no adjustment except the MMTS correction, have their trend greatly increased (by more than 50%) when they are “homogenized” to “correct” for perturbations.

This is a smoking gun. Venema, Mosher, et al. are exposed as charlatans. There is no way they could have competently worked so long and hard on surface station temperature series adjustments without having noticed that unperturbed stations showed a greatly reduced warming trend in the US.

      • There is no way they could have competently worked so long and hard on surface station temperature series adjustments without having noticed that unperturbed stations showed a greatly reduced warming trend in the US.

        As they were not considering microsite and had no easy way of determining it anyway, it was a very easy and natural mistake to make. I make that sort of mistake all the time. Mistakes are allowed.

      • Victor,

        Your observations are thoughtful and likely correct. My question to you is whether you think the basic finding is incorrect. The common purpose for everyone here is to reconcile land records with satellite records for the purpose of looking back to the pre-satellite era. (I assume that in the future the land record will be deprecated in favor of the satellite record.)

        Regards,

        Will Kernkamp

      • Will Kernkamp, the trend differences could be interesting. Depends on the reason. What the reason is will be very hard to determine, given that we only know the siting at the end of the period, while we would need to know it throughout.

        There are a decent number of studies on cooling bias due to the time of observation bias and the influence of the transition to MMTS. And on the warming bias due to urbanization. For most other non-climatic changes we do not have many studies. For example, for the likely cooling bias due to relocations.

        http://variable-variability.blogspot.com/2015/01/temperature-bias-from-village-heat.html

        The changes in observational practices can be studied by making side-by-side measurements, also called parallel measurements. A group in the International Surface Temperature Initiative is gathering such parallel measurements. If we are able to find and get access to enough datasets, this would give a quite direct estimate of the size of the various biases. If anyone here knows of such datasets please contact me.

        http://www.surfacetemperatures.org/databank/parallel_measurements

        As far as I know the long-term trend of the satellite estimates fits the station estimates over the USA. The differences are in the tropics. The satellites and some radiosonde datasets do not see the tropical hotspot. Given the various lines of evidence, I feel it is more likely that we will discover additional problems with the satellite estimates in the tropics. See a discussion I had earlier this month on this same topic:

        Given that the difference is mainly due to the missing tropical hotspot in the satellite temperature trend, it seems more likely than not that there is some problem with the satellite trends.

        The tropical hotspot 1) is seen in some radiosonde datasets, 2) it is seen in radiosonde winds, 3) it is expected from basic physics (we know that the moist adiabatic temperature profile should be a good approximation in the tropics due to the large amount of convection), 4) you see the strong response of the troposphere compared to the surface at shorter time scales and 5) it is seen in climate models.

        But we will only know this with confidence when we find the reason for the problem with the satellite trends or when we find problems with all of the other 5 pieces of evidence against it.

        For the following discussion see:
        http://judithcurry.com/2015/11/28/week-in-review-science-and-technology-edition-3/#comment-747013

      • The tropical hotspot 1) is seen in some radiosonde datasets
        The IUK analysis does indicate somewhat of a hot spot, but that process uses ‘kriging’ over the huge spaces devoid of RAOB stations, which may not be valid, because kriging assumes a statistically homogeneous (stationary) field:
        http://climatewatcher.webs.com/HotSpot2012.png

        The majority of analyses, both RAOB and MSU, not only show no Hot Spot, but indicate less warming with height, not more:
        http://climatewatcher.webs.com/HotSpot.png

        3) it is expected from basic physics (we know that the moist adiabatic temperature profile should be a good approximation in the tropics due to the large amount of convection)

        No. The HotSpot is modeled, but there’s no physical law being violated if the model fails. In fact, much of the Eastern Pacific has cooled over the MSU era, meaning, if the same amount of convective exchange occurs, physically, one would expect less warming aloft because less warming is occurring at the sea surface:
        http://climatewatcher.webs.com/SatelliteEraMap.gif

        But we will only know this with confidence when we find the reason for the problem with the satellite trends or when we find problems with all of the other 5 pieces of evidence against it.

        MSU and RAOB corroborate the models in these ways:
        1.) Stratospheric cooling 2.) Arctic maxima 3.) LT land-only trend which matches GISTEMP land-only trend. 4.) co-located sonde correlation.

        But MSU and RAOB both falsify the Hot Spot, at least for the MSU era and there’s no compelling reason to believe that the measurements are correct in all other regions, but not in the region of the HotSpot.

        Does it matter? Maybe not. There is still warming. But that warming may be more than we would expect if the Hot Spot was occurring because the Hot Spot is what provides the negative Lapse Rate feedback. So if the Hot Spot appeared, perhaps surface warming would decrease.

        It’s possible that the Pacific cooling is part of some long term fluctuation that reverses and the Hot Spot does occur – only time will tell, and even then, may tell in abstract messiness.

      • Eddy, you may be interested in two abstracts presented at AGU.

        Trends in atmospheric temperature and winds since 1959
        Steven C Sherwood, Nidhi Nishant and Paul O’Gorman

        Sherwood and colleagues have generated a new radiosonde dataset, removing artificial instrumental changes as well as they could. They find that the tropical hotspot does exist, and that the models’ predictions of the tropical hotspot thus fit the observed tropospheric trends. They find that the recent tropospheric trend is not smaller than before.

        Extract from the abstract: We present an updated version of the radiosonde dataset homogenized by Iterative Universal Kriging (IUKv2), now extended through February 2013, following the method used in the original version (Sherwood et al 2008 Robust tropospheric warming revealed by iteratively homogenized radiosonde data J. Clim. 21 5336–52). …

        Temperature trends in the updated data show three noteworthy features. First, tropical warming is equally strong over both the 1959–2012 and 1979–2012 periods, increasing smoothly and almost moist-adiabatically from the surface (where it is roughly 0.14 K/decade) to 300 hPa (where it is about 0.25 K/decade over both periods), a pattern very close to that in climate model predictions. This contradicts suggestions that atmospheric warming has slowed in recent decades or that it has not kept up with that at the surface.

        Wind trends over the period 1979–2012 confirm a strengthening, lifting and poleward shift of both subtropical westerly jets; the Northern one shows more displacement and the southern more intensification, but these details appear sensitive to the time period analysed. Winds over the Southern Ocean have intensified with a downward extension from the stratosphere to troposphere visible from austral summer through autumn. There is also a trend toward more easterly winds in the middle and upper troposphere of the deep tropics, which may be associated with tropical expansion.

        Uncertainty in Long-Term Atmospheric Data Records from MSU and AMSU
        In session: Methodologies and Resulting Uncertainties in Long-Term Records of Ozone and Other Atmospheric Essential Climate Variables Constructed from Multiple Data Sources
        Carl Mears

        This talk presents an uncertainty analysis of known errors in tropospheric satellite temperature changes and an ensemble of possible estimates that makes computing uncertainties for a specific application easier.

        The temperature of the Earth’s atmosphere has been continuously observed by satellite-borne microwave sounders since late 1978. These measurements, made by the Microwave Sounding Units (MSUs) and the Advanced Microwave Sounding Units (AMSUs), yield one of the longest truly global records of Earth’s climate. To be useful for climate studies, measurements made by different satellites and satellite systems need to be merged into a single long-term dataset. Before and during the merging process, a number of adjustments are made to the satellite measurements. These adjustments are intended to account for issues such as calibration drifts or changes in local measurement time. Because the adjustments are made with imperfect knowledge, they are not likely to reduce errors to zero, and thus introduce uncertainty into the resulting long-term data record. In this presentation, we will discuss a Monte-Carlo-based approach to calculating and describing the effects of these uncertainty sources on the final merged dataset. The result of our uncertainty analysis is an ensemble of possible datasets, with the applied adjustments varied within reasonable bounds, and other error sources such as sampling noise taken into account. The ensemble approach makes it easy for the user community to assess the effects of uncertainty on their work by simply repeating their analysis for each ensemble member.
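        The ensemble idea Mears describes can be illustrated with a toy Monte-Carlo sketch (all numbers here are invented for illustration; the real analysis perturbs the actual merging adjustments, not a single Gaussian error term):

```python
import random

random.seed(42)

def toy_trend(adjustment_error_sd=0.02):
    """One toy ensemble member: a 'true' trend (C/decade) plus one
    random realization of adjustment error. Numbers are invented."""
    true_trend = 0.13
    return true_trend + random.gauss(0, adjustment_error_sd)

# Build an ensemble of possible datasets by varying the adjustments
# within reasonable bounds, then read a trend spread off the ensemble.
ensemble = [toy_trend() for _ in range(1000)]
mean = sum(ensemble) / len(ensemble)
sd = (sum((t - mean) ** 2 for t in ensemble) / len(ensemble)) ** 0.5
print(f"trend: {mean:.3f} +/- {sd:.3f} C/decade")
```

        Each ensemble member plays the role of one “possible dataset”; a user repeats their analysis once per member and reads the uncertainty off the spread of the results.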

      • Victor Venema: “Sherwood and colleagues have generated a new radiosonde dataset, removing artificial instrumental changes as well as they could.”

        Did they Mannipulate the data using AlGore-ithms running on computer games climate models, Victor?

        One day you and your colleagues will be held to account.

        Think on that.

      • @Victor Venema | December 23, 2015 at 5:53 pm |

        Steven C Sherwood, Nidhi Nishant and Paul O’Gorman

        Did they make data and code available? If so, where is it?

        Thanks.

        So and so “have generated a new radiosonde dataset, removing artificial instrumental changes as well as they could. They find that the tropical hotspot does exist, and that the models’ predictions of the tropical hotspot thus fit the observed tropospheric trends.”

        Climate science at its best. When all else fails, gather a few nondescript Team members ’round the ole caldron and conjure up a new dataset. Anything goes, when the planet needs saving. We have to wonder what took them so long to think of juggling the radiosonde data. How many climate scientists would we need, if not for the CAGW story? I will help you: about 9.

      • Losing to some really nice, exceptionally smart people is going to be hard for you.

      • JCH, “Losing to some really nice, exceptionally smart people is going to be hard for you.”

        Losing what? The trends in the tropics from 1959 and from 1979 to 2012 are about the same, 0.14 C/decade, which is lower than modeled. The shift in the northern tropical temperature is larger than the southern, and the rate of cooling in the tropical stratosphere has slowed, “possibly” due to the “beginning” of stratospheric ozone recovery. Also, the altitude of the “hot spot” is lower than modeled.

        They have a trend that’s weaker, lower and more NH-shifted than modeled. We have greater-than-modeled land amplification in the 30-60 NH band that happens to correlate with a peaking AMO. We also have “possible” ozone recovery during a weaker solar cycle which happens to have greater than expected UV.

        Unless I am missing something I am not very impressed with the word salad.

      • You talking to me, putz? A win for me would be effective mitigation if it is necessary, and none if it ain’t.

        What really concerns me is that the climate alarmists may be right, but they are too freaking weak, incompetent and dishonest to convince about 7 billion people that global warming is a big problem. And their little anonymous blog troll minions aren’t helping.

        JCH, btw, if that unexpected 30-60 N warming is smeared, I mean kriged, into the tropics, that could be a bit problematic for fans of the kriging method.

      • Don,

        What’s the difference between a putz and a schlonge?

      • How many climate scientists would we need, if not for the CAGW story? I will help you: about 9.

        hmm. Looks like you’re implying it’s all made up and a hoax. They are trying to pull the wool over our eyes. It’s not honest disagreement about science at all. Are you saying that, Don?

      • Joseph, “Looks like you’re implying it’s all made up and a hoax. They are trying to pull the wool over our eyes.”

        Some really great hoaxes and conspiracy theories start with undeniable truths which are wonderfully embellished and exaggerated.

        https://youtu.be/yi3erdgVVTw

      • Joseph,

        You wrote –

        “hmm. Looks like you’re implying it’s all made up and a hoax. They are trying to pull the wool over our eyes. It’s not honest disagreement about science at all. Are you saying that, Don?”

        I don’t know what Don is saying, apart from what he writes, but you’re employing the Warmist deny, divert, and obscure ploy.

        Warmists suffer from mass delusional psychosis. They cannot distinguish fact from fantasy, obviously. Whether this makes them dishonest, or merely stupid, deluded, or both, is a matter of definition.

        Warmists deny normal science, and try to create their own fantasy version, with an invented language to suit. I suppose you are silly enough to agree that after cooling for some four and a half billion years, the Earth started to warm up, at the behest of the Warmist cultists. It matters not, really. Fact is fact. Fantasy is fantasy. No amount of measurebation is going to create a non existent greenhouse effect. Try as hard as you like. I don’t think it will make you go blind, although it may reinforce your blindness to reality.

        Cheers.

      • Victor,

        Eddy, you may be interested in two abstracts presented at AGU.

        Yes, you obviously didn’t look at the data I plotted – the IUK is the middle column, top row.

        The error with IUK ( Yuk? ) is in the name: kriging.
        Kriging assumes a statistically homogeneous (stationary) field.
        If you have large unsampled areas, which the RAOB data does,
        and the peripheral observations are high, the data will be skewed high.

        Other analyses don’t assume that stations reflect what’s going on many thousands of kilometers away.

        If you believe that the upper air should reflect the surface, then you would want to reject such assumptions also, because the surface is most certainly not homogeneous at these spacings and the Eastern Pacific waters indicate cooling, not warming.
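        The objection can be made concrete with a toy example (all numbers hypothetical): if the only stations sit on the warming periphery of a cooling, unsampled interior, filling the gap from those stations skews the area average high. Simple gap-filling from the nearest observations stands in here for kriging across a data void:

```python
# Toy 1-D 'region': true trends at 10 equally spaced points (C/decade).
# The interior (unsampled) points are cooling; the edges are warming.
true_trends = [0.2, 0.2, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, 0.2, 0.2]
true_mean = sum(true_trends) / len(true_trends)

# Only the edge points have stations. The gap is filled by carrying the
# peripheral observations inward (both edges read 0.2, so any
# interpolation across the void also yields 0.2 there).
observed = {0: 0.2, 1: 0.2, 8: 0.2, 9: 0.2}
filled = [observed.get(i, 0.2) for i in range(10)]
filled_mean = sum(filled) / len(filled)

print(f"true area-mean trend:         {true_mean:+.2f} C/decade")   # +0.02
print(f"interpolated area-mean trend: {filled_mean:+.2f} C/decade") # +0.20
```

        The interpolated average inherits the peripheral warming and misses the unsampled cooling entirely, which is the skew being alleged for the IUK tropics.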

      • I didn’t say or imply that it’s a hoax, yoey. If I thought it a hoax, I wouldn’t have said I am concerned about it being real and the climate scientist chumps not being competent or credible enough to convince the folks of the seriousness of the alleged situation. You don’t help, yoey. Dishonest know-nothings who blindly support the cause do more harm than good.

        Mark: Pretty much interchangeable, I think. Mr. Trump knows the nuances and when to use one instead of the other. Didn’t you learn this stuff in school, Mark?

      • David Springer

        Mistakes of this magnitude, supported by a “consensus” of scientists, for 25 years running, are definitely NOT allowed.

  11. NASA doesn’t need no stinkin’ adjustments. Doesn’t even need thermometers!

    From the NASA site –

    “Q. If SATs cannot be measured, how are SAT maps created ?
    A. This can only be done with the help of computer models, the same models that are used to create the daily weather forecasts.”

    Surface Air Temperatures can’t be measured? No problem – create some with a model. BEST can no doubt assist. Steven Mosher can explain why algorithms and endless analysis are preferable to recorded temperatures.

    Nobody bothers to measure the actual surface temperature. The surface is generally buried under something, up to and including 10 kms of sea water. Some bits are 20 odd km closer to the Sun than others, with 9 km less atmosphere to get in the way.

    Completely pointless. What a waste of time and money! When you’re hot, you’re hot, and when you’re not, you’re not. At the very least, Anthony’s efforts show the silliness of believing that official temperatures are useful for anything serious.

    Cheers.

    • Brian G Valentine

      Hansen used to make life easier by simply assuming that lines of constant latitude are isotherms

    • Mike Flynn: “NASA doesn’t need no stinkin’ adjustments. Doesn’t even need thermometers!”

      Astonishingly, climate “scientists” admit it.

      “The data doesn’t matter. We’re not basing our recommendations on the data. We’re basing them on the climate models.”

      ~ Prof. Chris Folland ~ (Hadley Centre for Climate Prediction and Research)

      Now come on Mike!

      If you were a climate “scientist” and your living depended on it which would you believe – the reading on a $10 thermometer or the output of a $100,000,000 computer game climate model?

      Come on now, be honest!

    • Actually, the idea is to improve their usefulness. This is the start of a longer process.

  12. This looks like a very important work by Anthony et al. I find this quote particularly interesting:

    “We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.”

    It makes me think back to Karl et al. Hmmm.

    • Sciguy (and others):
      The individual surface station documentation (photos, ratings) has been available on line for years, also in tabular form in a large Excel spreadsheet I separately archived. It is not online at present because there have been many attempts at hacking/tampering.
      On 3 Aug 2015 I posted a little analysis of the CRN1 stations at WUWT. Title is roughly How Good is GISS? WUWT search tool takes you to it from that snippet. Enough to see patterns, not enough for statistical validity. I did not include CRN2 to run stats because my Koch check never came.

      What the analysis showed is that GISS homogenization appears to do a reasonable job of removing urban CRN1 UHI. But it contaminates almost all the CRN1 suburban and rural stations, changing pristine ‘no trend’ to homogenized ‘increasing trend’ in all but one case. For whatever reason, Apalachicola Fl escaped unscathed. This issue is logically inherent in the published homogenization methodology, and makes GISS unfit for purpose unless only the high quality stations (CRN1 and 2) are used for homogenization. And NASA does not; it obviously uses the whole sorry lot.

      • David L. Hagen

        AKA Noble Cause Corruption – whether conscious or not.

      • Rud

        Thanks for the pointer to your August post at WUWT. I believe I was out of pocket that week so I had missed it.

        You noted:
        “One could either cool the present to remove UHI or warm the past (inserting artificial UHI for trend comparison purposes). Warming the past is less discordant with the reported present (the UHI correction less noticeable), so preferred by GISS.”

        Whenever I get clever and do something backwards for the sake of convenience, it comes back to haunt me. Maybe I am unlucky or just not so smart as Karl or the climate crew at Goddard. In any case, warning flags start waving in my mind when something is done backwards.

        In this case I am not sure why correcting for UHI is “discordant”. I see temperatures reported with “wind chill” all the time, resulting in huge deltas from the actual measured temperature. The public seems to understand the premise of “wind chill” and are comfortable with such reportage, so I would be surprised if folks had difficulty with temperatures reported “as measured” and also “as corrected” for siting issues.

      • Geoff Sherrington

        Rud,
        When you claim that GISS homogenisation does a reasonable job in removing certain UHI, you must have data or understanding that I cannot get.
        You have to know the “true” temperature at a site to judge if corrections do a good job reconstructing it.
        If you already have the true value, why bother to homogenise?
        In concept, this is logically similar to the synthetic attribution of climate changes to natural or man made. Sorry, cannot be done yet.
        Now, about the brainwashing that made nuclear electricity disliked by you – the good data do not support your dislike. But not here …..

      • Geoff, the data was for four large urban areas with CRN1 stations in my WUWT post.
        As for nuclear, you misunderstand. I am very much in favor. But think building as little gen 3 nuclear as possible, and investing a lot to really sort out and improve better gen 4 options (passive safety, refueling, radwaste) is a wiser course since there is no CAGW crisis to be resolved.

  13. Whoa. Mosher?

  14. I predict that NOAA will put out a paper called “Artifacts in the…” in the next month or so that will purport to “disprove” the Watts et al study. Of course that assumes that the Climate Nomenklatura will be unsuccessful in quashing the paper before it can get published in a journal

  15. It would be interesting to see how the global trend changes if temperature series that use other stations for corrections used only the 410 BEST ones.

    • they had 410 that they believe were un perturbed… not best…. un perturbed… read it again

      That means… there is no record of being changed or moved.
      That is different than actually being unperturbed.

      • Mosher, the description above of the 410 stations states they have not changed site status – meaning their siting hasn’t deteriorated. So, using the anomaly method would still yield good data.

    • they had 410 that they believe were un perturbed… not best…. un perturbed… read it again

      That means… there is no record of the station being changed or moved.
      That is different than actually being unperturbed.

      • David Springer

        If unperturbed stations may actually be perturbed due to inaccurate records then perturbed stations may be unperturbed due to inaccurate records.

        All bets are off when records cannot be trusted. Thanks for bringing up the #1 reason why skeptics don’t trust the so-called “consensus”. With friends like you warmunists don’t need enemies.

      • “unperturbed stations may actually be perturbed”

        Like when she says ‘no’ she really means ‘yes’.

        Andrew

      • Steven Mosher: “they had 410 that they believe were un perturbed… not best…. un perturbed… read it again”

        Wriggle…wriggle…wriggle…

        Somebody else playing in your sandpit, Mosher?

        Worried, are you?

        Perhaps you should be.

      • “If unperturbed stations may actually be perturbed due to inaccurate records then perturbed stations may be unperturbed due to inaccurate records.”

        The Victor link below has an interesting idea. Since only well-positioned stations are used, that will include those stations that used to be in bad locations, were moved, and for which there is no record of the move in the meta-data. This would cause a spurious cooling trend.

        Watts also states curators were interviewed, and that presumably diminishes this possibility. The link didn’t consider the interviews, for some reason.

      • Since only well positioned stations are used, that will include those stations that used to be in bad locations, were moved, and for which there is no record of the move in the meta-data.

        That is not impossible. But USHCN metadata is the best in the GHCN and has vastly better historical notation than it did when we started out on this. Much more and better info covering both before and after. Microsite rating is fairly constant. There were only a handful of localized moves that changed the rating.

        For the metadata-poor GHCN, as a whole, the problem is far worse than for the seemly USHCN. And even in the US, the metadata gets spottier going back before the satellite era.

      • “. Much more and better info covering both before and after. Microsite rating is fairly constant. There were only a handful of localized moves that changed the rating.”

        So, how did you assess the shading for a site in 1979?
        And how did you assess the shading at the “current time”?

        And how did you assess that the metadata was “better”?

        Example. The records indicate a TOBs change. Do you trust it?

        One reason we check both the data series for breaks and the metadata for changes is that neither record can be assumed to be pristine. And because sometimes TOBS changes require no adjustment.

      • “If unperturbed stations may actually be perturbed due to inaccurate records then perturbed stations may be unperturbed due to inaccurate records.

        All bets are off when records cannot be trusted.”

        ####################

        wrong. All bets are not off.
        With Historical data it is always possible that reports and records are wrong. But you have tests for some of these.

        Example: metadata says the site moved from 0 meters ASL to 1000
        meters ASL.

        Data shows no cooling.

        So you have a choice:
        A) the laws of physics have been broken
        B) the metadata is wrong.

        Example: 10 sites all located within a few km of each other.
        in one month 9 stations show a change in TOB in metadata.
        they all show a jump of 0.25C
        The 10th station also shows a jump of 0.25C.
        BUT its metadata shows no TOB change.

        Again you have a choice.
        each choice is a bet. not all bets are off.
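        The first of these examples amounts to a physical-consistency test between data and metadata. A minimal sketch (the 6.5 C/km environmental lapse rate and the 2 C tolerance are illustrative assumptions, not part of any operational algorithm):

```python
ENV_LAPSE_RATE = 6.5  # C per km, typical environmental lapse rate (assumption)

def elevation_move_consistent(delta_elev_km, observed_step_c, tol_c=2.0):
    """Check whether a recorded elevation change is consistent with the
    temperature step actually seen in the data. A move from 0 m to
    1000 m ASL should cool the record by roughly 6.5 C; if the data
    show no such step, the metadata is the likelier culprit."""
    expected_step = -ENV_LAPSE_RATE * delta_elev_km
    return abs(observed_step_c - expected_step) <= tol_c

# Metadata says the site rose 1 km; the data show no cooling step:
print(elevation_move_consistent(1.0, 0.0))   # False -> suspect the metadata
# A roughly 6 C cooling step would have been consistent with the move:
print(elevation_move_consistent(1.0, -6.0))  # True
```

        With a recorded 1 km rise and no cooling step in the data, the check fails, so choice (B), wrong metadata, is the sensible bet. The TOBS example is the same kind of test run against neighboring stations instead of against physics.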

      • I’m just interested in the surprisingly good match between BEST and UT1 back into history. Looks like you may have nailed the early days better than the others. :-) (Only if this is a valid treatment of course).

        https://wordpress.com/post/climatedatablog.wordpress.com/378

      • David Springer

        If a TOBS change sometimes requires no adjustment then the theory behind TOBS having a warming effect is bullschit.

        That’s probably part of the reason why stations without perturbations show a drastically different trend. Keep talking Mosher. You dig your hole deeper with every word.

      • And How did you assess that metadata was “better”

        By the huge amount and detail added between the time when we started looking at this and now. Someone at NCDC made a good hire.

        Mosh is correct. The term “unperturbed” means a station with (as far as we can determine) clean metadata. It does not mean clean microsite.

        “Compliant” means Class 1\2 (Leroy puts the microsite offset effect at zero for both).
        “Non-compliant” means Class 3\4\5.

        If we know the location of a station after a recorded move (HOMR seems quite good with this, and so they should be), but do not know what the microsite was prior to the move (a very large number), we drop the station. That removes most of the Before-after issue (though I am sure it is not 100%).

        If the HOMR metadata indicates a TOBS flip (AM to PM or PM to AM 10% of the way within the series interval), we drop the station. If there is a blip in the middle, but it goes back to what it was, if it is not badly skewed, we retain the station, because such a blip will not materially affect trend. (Note that a centered blip in a longer time series may not be centered in a shorter series, and we’d have to drop it. Etc. It’s all relative.)

        HOMR is very good on TOBS. After all, all they have to do is transcribe it from the B-91s (which are archived as PDFs online).

        J N-G is looking for major discrepancies in TOBS-adjusted vs. Raw data for some stations, and we may prune our set slightly. So far we’ve lost a couple, but no Class 1\2s, so no material effect. Some we may include but flag (so’s you-all can remove them if you like).

        That is our basic method.

        The advantage of NOAA is that it is organizationally stable. No regime change, you know. Inter alia. So, at least during our study period, we have good, consistent records, among the best, if not the best that the world has to offer.

        ————————————————————–

        Anyhoo. We got the sweet spot in terms of distribution, data, and metadata. The further back you go, of course, the worse it gets.

        Poor Mosh! What a tangle he has to deal with that I do not. Not only does he have the older USHCN’s ubiquitous “-9999s” to deal with and all those “Quien Sabe” notations in the metadata boxes, but he has the whole RoW’s problems on his shoulders.

        He does it the way he does it because there is no other way to do it. We can afford to (and do) drop our known perturbed stations. Mosh (and the VeeV) cannot. They cannot. The RoW distribution sucks, so they can’t afford to drop the perturbed stations. Just can’t.

        So he must adjust them. And since metadata is severely lacking, he is compelled to infer that from the data. It’s the tail wagging the dog, but he has no other option.

        And, besides, that is what I am doing, in effect, inre. homogenization, anyway — inferring from our findings.

        I also infer, in much the same manner, that the HOMR metadata is relatively clean: The data (unperturbed, compliant v. non-compliant) shows a relatively gradual divergence, not a series of discordant jumps which would occur if our results were an artifact of bad or missing metadata. So in addition to the HOMR USHCN metadata looking good, it acts good, too, when we crack the whip a little.

        All that is inference, and very good inference, I think. And, given the circumstances, unavoidable. Now maybe the body on the floor with a knife sticking out of his back is actually a clever suicide and not a murder. Or maybe he was cleverly poisoned and then stabbed to cover up the needle hole. Until the forensic team (VeeV, Mosh, Zeke, et al.) gives it a much hairier eyeball, we cannot know for sure. But for whatever reason, there it is, dead on the floor. I think it’s horses, not zebras this time.

        I am not against inference when it cannot be avoided. A missing datapoint is a missing datapoint. You might say that one of the goals of our project is to improve current methods of inference.

      • Okay, a Freudian slip there. But my subconscious meant it.

    • 92 out of the 410 are well sited. The 318 remaining are poorly sited.

      • But if I’m reading correctly, the siting hasn’t changed during the period of interest. Therefore, the anomaly method should yield good data.

      • That’s what NOAA thinks. It isn’t so. Of the 410 unperturbed stations, the trends of the 318 poorly sited stations averaged over 50% higher than the well sited. Adjusted was worse.

        You would be right if the offset of the heat sink was the same at the start of the series as at the end. But there is a delta. Therefore, the poorly sited unperturbed stations’ trends are increased, anomalized or not, and the trend is invalid.

        Therefore the anomaly method can’t and won’t work.
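        The point about the delta can be seen in a toy series (numbers invented): anomalies cancel a constant offset, but a heat-sink bias that grows over the period feeds straight into the trend:

```python
def linear_trend(values):
    """Ordinary least-squares slope per step."""
    n = len(values)
    xm = (n - 1) / 2
    ym = sum(values) / n
    num = sum((i - xm) * (v - ym) for i, v in enumerate(values))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

years = range(30)
true_temps = [0.02 * y for y in years]          # true trend: 0.02 C/yr

constant_bias = [t + 1.5 for t in true_temps]   # fixed heat-sink offset
growing_bias = [t + 1.5 + 0.01 * y             # offset grows 0.01 C/yr
                for t, y in zip(true_temps, years)]

print(f"true trend:          {linear_trend(true_temps):.3f} C/yr")
print(f"constant-bias trend: {linear_trend(constant_bias):.3f} C/yr")
print(f"growing-bias trend:  {linear_trend(growing_bias):.3f} C/yr")
```

        The constant offset drops out of the slope (and of any anomaly), but the growing offset adds its full 0.01 C/yr to the trend, which is the argument being made: anomalizing removes the level of a heat-sink bias, not its change over time.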

  16. This is the global temperature change from 2000-2010 relative to 1970-1980. Does this look like a map you get from siting problems? No. Case closed. The trend is dominated by regions that have less annual snow cover than they used to, but elsewhere it is also equally warming in populated and unpopulated areas.
    http://data.giss.nasa.gov/cgi-bin/gistemp/nmaps.cgi?sat=4&sst=6&type=anoms&mean_gen=0112&year1=2000&year2=2010&base1=1970&base2=1980&radius=1200&pol=rob

    • Your proclamations are always very persuasive, yimmy. We would have preferred a huffpo link, but what the heck. We all give up now. You win. You can stop the incessant preaching.

    • Case closed.

      I think not.

      • I would like to see Watts explain that with his station siting issues. He just misses the big picture.

        From what I have seen, it could easily be explained. What little I’ve seen of Arctic siting appears scandalously wretched. Perhaps it is you who are missing the big picture.

    • Jim D:

      The trend is dominated by regions that have less annual snow cover than they used to, but elsewhere it is also equally warming in populated and unpopulated areas.

      Yes to the first part. Not so certain about the second part.

      http://data.giss.nasa.gov/tmp/gistemp/NMAPS/tmp_GHCN_GISS_ERSSTv4_250km_Anom0112_2000_2010_1970_1980/nmaps_zonal.pdf

      • Warming even more, if anything, in unpopulated areas. Not what Watts would want to see, for sure.

      • Jim D: “Warming even more, if anything, in unpopulated areas.”

        You mean areas where the temperatures are kriged/made up because there are no measuring stations?

        You just don’t get it, do you?

      • Some people just don’t believe that the Arctic is warming the fastest of all areas. I would like to see Watts explain that with his station siting issues. He just misses the big picture.

      • Jim D: “Some people just don’t believe that the Arctic is warming the fastest of all areas.”

        Please point out where I or anyone else has indicated that we disbelieve that the Arctic is warming the fastest of all areas.

        You’re making stuff up again, aren’t you?

        In any case, what if it is?

        Does it prove AGW, and if so, how and why?

      • Looked like you were disputing Cowtan and Way when they added warming for the missed Arctic regions to HADCRUT4, but maybe you agree that it needs this correction. Hard to make sense of what you say sometimes.

      • Jim D: ” Hard to make sense of what you say sometimes.”

        Ummmm….Yesss…..

        I can see how someone with a mindset like yours would find that…

      • Warming even more, if anything, in unpopulated areas. Not what Watts would want to see, for sure.

        The greatest warming seems to be in unmeasured areas.

      • You mean like the Arctic Ocean? Have you heard of polar amplification?

      • Jim D:

        The warming shown by zonal maps is concentrated in the NH. That’s where most people live. As you pointed out, the higher NH trend is associated with something other than atmospheric CO2 concentrations.

        Hansen’s 1200 km radius had an average correlation coefficient for temperature variation of 0.5 over the poles and 0.33 across much of the globe. It was chosen so Hansen could claim “global” coverage despite large gaps in station data. Merely dropping the radius to 800 km reduced global “coverage” to 65% (from 80% @ 1200 km) in 1987.

        Using 1200 km infill you only need two stations to cover most of the continental US. They really shouldn’t use the 1200 km range any longer.
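        For context, the Hansen and Lebedeff scheme weights each station linearly down to zero at the chosen radius. A minimal sketch of that weighting (the distances and anomalies below are illustrative, not real station data):

```python
import numpy as np

def giss_weight(d_km, radius_km=1200.0):
    """GISTEMP-style station weight: falls linearly from 1 at d=0 to 0 at the radius."""
    return max(0.0, 1.0 - d_km / radius_km)

def infill(anomalies, dists_km, radius_km=1200.0):
    """Weighted anomaly estimate at a gridpoint from nearby station anomalies."""
    w = np.array([giss_weight(d, radius_km) for d in dists_km])
    if w.sum() == 0.0:
        return None          # gridpoint uncovered at this radius
    return float(np.dot(w, anomalies) / w.sum())

# A gridpoint 900 km from one station and 1100 km from another:
print(infill([1.0, 0.2], [900, 1100]))        # 0.8 -> dominated by the nearer station
print(infill([1.0, 0.2], [900, 1100], 800))   # at an 800 km radius: uncovered -> None
```

        Shrinking the radius trades coverage for locality, which is the point of the 1200 km vs. 800 km comparison above.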

      • The higher NH trend is simply because the NH has larger continental areas, and the CO2 effect is largest over land points because of the lower thermal inertia. The land is responding to the forcing change at twice the rate of the ocean, as a global average, which is what would be expected from a steady rate of external forcing increase, the main agent being GHG changes. Once again, to be clear, this is not what siting issues would look like. Watts is playing his followers on the noise and hoping they won’t notice this global signal.
        http://data.giss.nasa.gov/cgi-bin/gistemp/nmaps.cgi?sat=4&sst=6&type=anoms&mean_gen=0112&year1=2000&year2=2010&base1=1970&base2=1980&radius=1200&pol=rob

      • Jim D:

        We’ll have to wait for the details but from what I’ve seen thus far, Mr. Watts’ surface station work is more defensible than GISS’ 1200 km correlations.

      • Using only properly sited stations and the tried and true 1200km Kriging Kluge will give us the coverage and accuracy we are looking for. Right, yimmy?

      • Warming even more, if anything, in unpopulated areas. Not what Watts would want to see, for sure.

        Why not?

        We are not into not seeing things. We actually want to figure out what is and is not going on. Besides, it’s entirely compatible with our USHCN standard. If we remove urban data from our Class 1\2 mix, we see no change in trend.

        UHI and population density are not the issue for trend. For offset, no doubt, but not for trend. Poorly sited non-urban stations warm faster than well sited urban stations. And urban siting is, on average, superior to non-urban siting.

    • David L. Hagen

      Jim D
      Sounds persuasive until you examine the data in light of Watts’s findings – then it falls apart. Try again with ONLY Class 1 and Class 2 stations.
      Then try only those that have NOT been perturbed, adjusted, manipulated, spindled, etc. Then for a reality check, compare that with the satellite data. Then it might be worth looking at seriously.

      • Then for a reality check, compare that with the satellite data. Then it might be worth looking at seriously.

        Done that. Details in the paper. Our CONUS trend runs ~10% below RSS2 and UAH6. Considering that the LT trend is supposed to be 10% to 40% over the LST trend during a warming interval, our result splits the uprights and supports the work of Klotzbach and Christy. So it is not only Anthony who is being vindicated, but those LT vs. surface papers as well.
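        The “splits the uprights” arithmetic can be checked directly. The numbers below are illustrative placeholders, not values from the paper:

```python
def amplification(surface_trend, lt_trend):
    """Ratio of lower-troposphere (LT) trend to surface trend."""
    return lt_trend / surface_trend

lt = 0.20                # C/decade, illustrative satellite LT trend
surface = 0.9 * lt       # a surface trend running ~10% below the LT trend
ratio = amplification(surface, lt)
print(round(ratio, 3))   # 1.111 -> inside the expected 1.1-1.4 amplification band
assert 1.1 <= ratio <= 1.4
```

        A surface trend 10% below LT implies an amplification of 1/0.9 ≈ 1.11, which lands just inside the low end of the claimed 10%-40% band.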

      • David Springer

        You guys finally found the missing hotspot! Masked by false surface warming. Perfect.

    • Wow. Thanks for that JimD. It shows that the changes are mostly Northern Hemisphere, just like the Minoan, Roman, and Medieval warm periods (OK, I have only seen this said of the Medieval, so sue me). So one cannot simply dismiss higher temperatures in those times compared to now by saying “they only show up in the NH, not globally.” Well, all we have now is a number we call the global average temperature, but there are many steps to get to that number and it has a large error bar (or should). And we don’t have that same metric for past times like the MWP. Both data and (I believe) theory say that temperature increases will be larger at night and in the NH (i.e., cold places). So the data you link to is great in that it confirms that the changes now are also NH, just like the MWP.

    • There has already been an effort by skeptics to do the temperature series from scratch. It was joined by Watts and Curry, by Mosher (who was more skeptical back then), and by a few well-respected statisticians, Rohde and Hausfather, and it was sparked by Climategate and a general mistrust of Jones and his CRUTEM datasets. Anyway, they ended up confirming that Jones was basically right; Watts and Curry bailed and prefer not to talk about their involvement with BEST; and we are here now with Watts trying something again to see if he can get a better answer this time.

      • Brian G Valentine

        Jim, you get worse by the day. You sound like a conspiracy theorist.

      • To my knowledge, nobody has done a complete site survey like this before. Watts deserves a lot of credit for the idea and follow through.

        Now, the details and results remain to be seen.

        But it appears that all the analyses include crap stations, kinda similar to the sub-prime mess when bad loans were homogenized into the Collateralized Debt Obligations.

      • You can have the best site in the world, but if you change your thermometer or time of ob without accounting for it, you’re screwed. BEST had a way of detecting these using neighbors. Watts? I am not sure what he does.

      • Geoff Sherrington

        Sorry Jim D,
        This scientist does not accept that Jones was basically right. In 2007 I sent him emails with questions about data quality and got evasive answers.
        I sent the emails because there were plausible strong signs of cherry picking in the UHI papers re Australia and China.
        Those strong signs did not go away with the efflux of time.
        Please stop making generalisations about people you cannot represent.
        Geoff.

      • they ended up confirming that Jones was basically right, Watts and Curry bailed and prefer not to talk about their involvement with BEST, and we are here now with Watts trying something again to see if he can get a better answer this time.

        No need to speculate. You can always just ask. Both GHCN and BEST are currently subject to the same systematic error, i.e., failure to consider microsite. When you adjust doing pairwise, the well sited stations tend to be identified as outliers and adjusted upwards to match the poorly sited majority.

        Homogenization (or any other pairwise) has two faces. One is where the majority of the data is correct, and we see the beaming visage of Kindly Uncle H, who cures all ills of man, beast, and missing metadata. The other face is when the majority of the data has the same error, and that is when kindly Uncle H goes postal and becomes the H-bomb.

        Systematic data error is what has happened here.

        You can have the best site in the world, but if you change your thermometer or time of ob without accounting for it, you’re screwed. BEST had a way of detecting these using neighbors. Watts? I am not sure what he does.

        We use metadata (TOBS listed for each station). All BEST does is detect jumps, and then assumes it is a TOBS flip. But sometimes jumps just happen. Sometimes it gets warmer in a particular spot because it got cooler nearby. (That is one reason I am leery of pairwise even if I have to use it.)

      • David Springer

        If you’re Jim D you make generalizations about people you cannot represent. It’s what you do.

    • Does this look like a map you get from siting problems?

      Yup that is exactly what it looks like.

      You see quantization effects (areas have sawtooth edges).

      The level of detail is so low in that illustration (it doesn’t qualify as a chart) that it doesn’t tell you much.

      There is no way the real planet has warming that is that consistent – particularly with the square edges on circular boundaries.

      • So you think that the largest warming which is in the high latitude continental areas of Canada and Siberia is not due to regional snow-cover changes for example? Interesting. Continue.

      • Well…

        Canada gets to the adjustment issue – but I have looked at some Canadian stations and some of them are warmer. I’m not sure if they are adjusted or not.

        Michigan stations I’ve looked at are just weird. The 90s were cool, the 2000s were warm, and 2014 was almost a record low. And this is presumably the adjusted data. At some point I will dump the USHCN data and diff the raw vs. adjusted and form an opinion.

        Russia is a special case. The northern areas in the USSR got more fuel allowance if they claimed it was colder.

        The Arctic ice has been increasing for about 4 years.

        Hard to say what is going on. Climate looks like weather in the 21st century.

        The albedo (cloud cover) seems to be driving the temperature.

        I expect it to get up to 0.5 °C warmer as the 20th century warming gets fully incorporated and that happens on a century time scale.

        https://i.imgur.com/G879Hww.png

        Bottom line: in 2014 only 35% (0.35) of CO2 emissions stayed in the atmosphere (CDIAC Dec 2015 Global Carbon Budget data). So even if I were worried that a GINORMOUS CO2 increase could doom us, I wouldn’t be worried. CDIAC dialed back their estimates of CO2 emissions despite China admitting they cheated, which is a bit odd.

  17. I made a lot of short independent comments at the end of the WUWT thread. I’d like to consolidate them and put them where they’ll get more attention.
    ——————

    A question: Has this wobbled N-G’s warmism any?

    This is almost as exciting as reading the first Climategate thread here. (I was the first commenter on it.) Fortunately, the warmists won’t be able to whitewash this one away. AW has put a spoke in the wheels of the bandwagon.
    And to think that AW had to pass the hat to pay for his way down – and had to drive to cut costs. While money was no object for the 40,000 attendees in Paris. They ought to hold the next COP in Chico.

    Maybe you’ll be called to testify by Lamar Smith! (Or one of your team.)

    This ought to make the satellite data sets the gold standard, and relegate the land-based records to the lumber room.

    !!!!!!
    Ev’ry valley shall be exalted,
    and ev’ry mountain and hill made low;
    the crooked straight
    and the rough places plain.
    !!!!!!
    (Isaiah 40:4)

    THEY could have done this study. THEY could have told their grad students that this would make a great dissertation. THEY should have wanted to ensure they had a firm foundation. For example, NOAA could have told its stations to send in photos of their sites. But NOAA didn’t. In fact it refused to ask them, when the suggestion was made to it.
    But they didn’t look. Because they didn’t want to see.

    The necessary (by inference) revision of the global temperature record puts it below the lower bound of the models’ projections. So now we can say, “The consensus is 97% wrong.” How pleasant to turn the tables! And how deserved!

  18. My reaction is the same as it was in 2012

    http://climateaudit.org/2012/07/31/surface-stations/#comment-345345

    “Posted Jul 31, 2012 at 1:19 PM | Permalink
    ‘but the idea behind this was to put it out into the blogosphere for trial by fire.’

    Precisely.

    You will note that data for this paper is absent. That effectively means that we cannot do a proper review. We can’t audit it.

    Prediction: special pleading will commence.

    Latimer Alder
    Posted Jul 31, 2012 at 2:30 PM | Permalink
    Re: Steven Mosher (Jul 31 13:19),

    Mosh is right. You have to publish the data as well as the press release.

    You cannot even begin to claim the high ground without doing so. Leave such nonsense to the stuffy academics.”

    #####################################

    Zeke wrote

    http://climateaudit.org/2012/07/31/surface-stations/#comment-345355

    I wrote… and McIntyre commented
    http://climateaudit.org/2012/07/31/surface-stations/#comment-345389
    Posted Jul 31, 2012 at 3:40 PM | Permalink
    But Hu.

    1. Anthony has put it out for blog review and cited Muller as a precedent for this practice. That practice included providing blog reviewers with data.

    2. Anthony brought Steve on board at the last minute even though he’s been working on this paper for a year. Steve has a practice as a reviewer of asking for data. Since we bloggers are asked to review this, we would like the data.

    3. If they want to release the data with limitations, that is fine too. I will sign an NDA to not retransmit the data, and to not publish any results in a journal.

    4. You have to consider the possibility that Anthony and Steve could now stall for as long as they like, never release the data, and many people would consider this published paper to be an accepted fact.

    Steve: Mosh, calm down. this is being dealt with.

    ################################################

    My reaction again.

    1. I would like to see the data.
    2. In 2012 I thought the classification of stations was publishable on its own.
    3. I was willing then and am willing now to sign an NDA, basically promising
    not to copy the data, transmit the data, publish anything based on the data, or even talk about it.

    • Well, if the data isn’t forthcoming after a few years, or if it is lost, hidden, etc., then you can really have something to complain about.

      Meanwhile, I’m wondering if any public funds were used to compile the data. If not, it’s really up to the owner (presumably Anthony) to do as he wants with it.

      • Anthony Watts stated that data compilation was done with private funds, not public funds. The main benefit of releasing the data is to establish credibility.

      • Much of the siting data was collected by volunteers who weren’t paid for the service – other than to put yet another nail into the warmist coffin.

      • Meanwhile, I’m wondering if any public funds were used to compile the data.

        This paper is entirely unfunded.

    • David Springer

      Steven Mosher | December 18, 2015 at 12:50 am | Reply

      “My reaction is the same as it was in 2012”

      —————————————————————–

      Obviously. Knee-jerk. And we could leave out the knee part.

    • David L. Hagen

      Mosher: See Watts above. After Muller breached the NDA: twice burnt, thrice shy.

      • 1. There was no NDA.
        2. We did exactly what we promised to do.
        3. Anthony was pissed over other matters… perhaps I should pull out some emails…

      • Pull out the emails, Steven. Do you got the ones from 2012? That seems to be the key year. If you got enough emails, I bet Tony will relent and give you the data. If that’s really what this is all about.

    • > http://climateaudit.org/2012/07/31/surface-stations/#comment-345345

      I don’t think Ron’s and Nick’s questions have been answered on that thread.

  19. Since the trend for the “unperturbed” Class 1 or Class 2 stations is derived from only 92 stations across the country, it would be interesting to see the error bars/uncertainty associated with this trend, compared to the uncertainty for the full USHCN.

    • Heck, I would like to see a random sample of the 92 stations…

      He doesn’t have to release them all… just half.

      • Why are you talking about 2012, Steven? It’s almost 2016. Evan is answering questions over on WUWT. You seriously think they aren’t going to release the data?

        evanmjones
        December 17, 2015 at 4:45 pm

        It’s not perfect, but it’s as good as it can reasonably be. We define our terms and what we think is going on in the paper, itself.

        We will also be archiving the data and formulas in Excel, which will put it in a format in which anyone can dicker with it or change the parameters — add or drop stations, change ratings, add categories (i.e., subsets), add whatever other version of MMTS adjustment you like, that sort of thing. (And I have some iconoclastic notions of how MMTS should really be addressed.)

        But the thing is, we welcome review. Some station ratings are obvious at a glance, but there are a few close calls. So it will all be open for review, complete with tools to test and vary. This paper is not intended as an inalterable doctrine. It is just part of a process of knowledge, in a format that is easy to alter and expand.

        If anyone has any questions, I’ll be glad to answer.

      • Based on this comment and others below, it seems Mosher will get all that he is asking for. What’s with all the whining?

      • Don,

        People asked for my comments. That comment has changed.
        Show me the data.

        Last time Zeke and I made comments… Do you think either one of us
        was asked to be a co-author? Let’s take Zeke, because he is nice and I am not.

        Anthony and Company work for a year. They give it to Steve Mc. and Christy. Those two paste their names to garbage!!! Zeke finds the problem in TWO FRICKING SECONDS… and he gets a polite
        thank you. He should have been invited to co-author.

        So: show me the data and you’ll get my comments.

      • “Comment hasn’t changed.”

      • “Those two paste their names to garbage!!! Zeke finds the problem in TWO FRICKING SECONDS… and he gets a polite
        thank you. He should have been invited to co-author.”

        Then you really ought to feel bad for McIntyre, who finds deep-rooted problems that take, I would guess, many days or months to find, carefully explains them, and, as I recall, on a couple of occasions doesn’t even get the credit for finding them.

        It’s not as if he finds some surface-level problem in two fricking seconds and expects to be a co-author, or something.

      • David Springer

        “he [Zeke Hausfather] ]should have been invited to co author”

        John Nielsen-Gammon was invited instead. Zeke is a piker: no PhD, and his master’s is in the wrong field. Of course Zeke’s inadequate qualifications exceed yours by a step or two, as evidenced by his mostly not lowering himself to being a perpetual climate-science blog comment warrior.

        Gammon on the other hand has world-class qualifications exceeded by essentially zero others.

        Professor John Nielsen-Gammon is an American meteorologist and climatologist. He is a Professor of Meteorology at Texas A&M University and the Texas State Climatologist, holding both appointments since 2000. Born 1962; educated at the Massachusetts Institute of Technology.

        MIT doctorate in meteorology, full professor, state climatologist for Texas.

        If Texas was a country it would have the 11th largest economy in the world.

        And you suggest Zeke as co-author. Unphuckingbelievable.

      • mosher, “Those two paste their names to garbage!!! Zeke finds the problem in TWO FRICKING SECONDS…. and he gets a polite
        thank you.”

        Didn’t Zeke find that if you use a crappy method you have to include TOBS adjustments to fix the choice of a crappy methodology? TOBS has virtually zero impact on Tmax and can have a significant impact on Tmin, but the adjustment is based on Tmean. If you are trying to isolate issues with Tmax and Tmin, you would look at the more pristine sites that have the least need of any adjustment, including TOBS. This is a bit like the slicing method with less cutting.

      • It’s unfortunate and annoying that the release of data is going to be after the press release for the paper.

        Mosher’s right to be impatient.

      • Scott, “It’s unfortunate and annoying that the release of data is going to be after the press release for the paper.”

      • I believe that if someone had not stolen portions of a unique data collection, all of it would be online now. Then the first stage would have been the station rating recommendations and the second stage would have been methodology. However, once a preemptive paper is published, it complicates the “uniqueness” requirement for most peer-reviewed journals.

      • It was a press release from the AGU promoting a AGU poster presentation, Scott. It ain’t a freaking paper, yet. Try to pay attention. Don’t let Mosher’s shiny objects distract you.

      • Somebody here is very bitter… and should withhold comments until a calmer moment of rationality is regained.

      • Most of the time, when data is released, it is upon publication, not before.

      • Remember what I wrote about the SAME PAPER, and the SAME DATA, in 2012:

        “But Hu.

        1. Anthony has put it out for blog review and cited Muller as a precedent for this practice. That practice included providing blog reviewers with data.

        2. Anthony brought Steve on board at the last minute even though he’s been working on this paper for a year. Steve has a practice as a reviewer of asking for data. Since we bloggers are asked to review this, we would like the data.

        3. If they want to release the data with limitations, that is fine too. I will sign an NDA to not retransmit the data, and to not publish any results in a journal.

        4. You have to consider the possibility that Anthony and Steve could now stall for as long as they like, never release the data, and many people would consider this published paper to be an accepted fact.

        Steve: Mosh, calm down. this is being dealt with.”

        ######################################

        How many times, including today, have people considered this to be a published fact?

        Hint 1: I think you will find something in microsite. BUT

        Hint 2: They use a method that Ross McKitrick has roundly criticized.

        go figure.

    • And they are not looking at the trend only for these 92 stations. They are using them, presumably via gridding (and interpolation?), to calculate the temperature anomaly for the entire CONUS each month.

      • That wasn’t a criticism Evan, just a clarification.

      • Criticism is fine.

        Thing is that the poorly sited stations outnumber the well sited by almost 5:1. So the pairwise comparison (between both types) is primarily with poorly sited stations. Homogenization does not take an average; it identifies a majority-mean and adjusts the minority to conform.

        So which set of stations do you suppose are getting adjusted? And in which direction do you imagine they are adjusted?

        That’s what’s going wrong.
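        The 5:1 dynamic described above can be sketched with synthetic data. The function below is a crude stand-in for pairwise homogenization (the real NOAA PHA is far more elaborate); it only shows how nudging every station toward the network majority pulls a well-sited minority toward the poorly sited majority’s trend:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(30)

# Synthetic network: 5 well-sited stations trending 0.02 C/yr,
# 25 poorly sited trending 0.03 C/yr -- roughly the ~1:5 ratio described above.
good = np.array([0.02 * years + rng.normal(0, 0.05, 30) for _ in range(5)])
bad = np.array([0.03 * years + rng.normal(0, 0.05, 30) for _ in range(25)])
network = np.vstack([good, bad])

def homogenize_to_majority(net):
    """Crude stand-in for pairwise adjustment: nudge every station
    halfway toward the network's yearly median (the majority signal)."""
    med = np.median(net, axis=0)
    return 0.5 * net + 0.5 * med

adj = homogenize_to_majority(network)
trend = lambda s: np.polyfit(years, s, 1)[0]

print(round(float(np.mean([trend(s) for s in good])), 3))     # raw well-sited trend, ~0.02
print(round(float(np.mean([trend(s) for s in adj[:5]])), 3))  # after adjustment: pulled up toward 0.03
```

        Because the median is dominated by the poorly sited majority, the well-sited minority’s trend rises after adjustment, while the majority barely moves.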

    • There is a statistically significant difference. In spades.

      • I thought so. My post using just the much smaller CRN1 subset showed this, but without statistical significance. Cannot wait to read the paper when it gets published.

    • I put them in the bar charts. J N-G is doing the stats, and a full analysis will be included in the paper. (For the full set of Class 1\2 vs. 3\4\5, he gets over 99% confidence, FWIW.)

  20. Anthony Watts is on the record with this:
    ‘When the journal article publishes, we’ll make all
    of the data, code, and methods available so that
    the study is entirely replicable.’

    • Of course, he first published his conclusions back in 2012, so there’s no knowing how long it might be before he makes anything available for people to check his work. I would think that’d be a reason not to publish a press release, but, you know, apparently publishing press releases when people have nothing they can examine is cool by him.

      • And those were faulty. The errors identified then appear to have been corrected now. A press release about a poster presentation giving the poster conclusions is fair and ordinary. Warmunists and universities do it all the time. The data and code will be released with the paper. Cool your jets.

      • McIntyre versus Rud.

        Who makes more sense? McIntyre in 2012,
        or Rud, who tells Brandon to cool his jets… over three years later?

        “Steve McIntyre commented

        “Steve: I agree that there is little point circulating a paper without replicable data – even though this unfortunately remains a common practice in climate science. It’s not what I would have done. I’ve expressed my view on this to Anthony and am hopeful that this gets sorted out. Making the data set publicly available for statistically oriented analysts seems far more consistent with the crowdsourcing philosophy that Anthony’s successfully employed in getting the surveys done than hoarding the data like Lonnie Thompson or a real_climate_scientist.

        It would have been nice if you’d spoken out on any of the occasions in which I’ve been refused data. You are entitled to criticize Anthony on this point, but it does seem opportunistic if you don’t also criticize Lonnie Thompson or David Karoly etc.”

      • Mosh,

        How does what Steve Mc said help your case? He said it was a common practice in climate science and that you should complain just as strongly about others if you are going to complain about Anthony. Also consider that a non-climate scientist (yes, he is a meteorologist) running one of the world’s most popular skeptic blogs is going to have a harder time getting published, and will be criticized for minor choices in data analysis that those in the CLUB don’t have to deal with, so I understand why he is reluctant to release it all too soon. Maybe he should trust you if you sign an NDA. I really don’t know how to judge that.

      • Vintage McIntyre (2012) has very little relevance to a poster presented at the 2015 AGU meeting. Do you have any quote from Steve Mc on the current controversy, which is mostly about you whining about Tony not giving you data that he is not giving to anyone else? You are making a spectacle of yourself, Steven.

      • ==> “Vintage McIntyre (2012) has very little relevance to a poster presented at the 2015 AGU meeting.”

        Don makes an excellent point. Just because someone offers a set of standards for one situation doesn’t mean that there should be an expectation that they would be applied in very similar situations three years later.

      • “He said it was a common practice in climate science and that you should complain just as strongly about others if you are going to complain about Anthony. ”

        CLUE FOR YOU EINSTEIN

        1. I did complain about others. Remember who coined the phrase
        “free the code, free the data.”
        2. NOT A SINGLE SKEPTIC, least of all Anthony, complained
        when I went after the data of Jones and the code of Hansen.
        NOT A ONE. No skeptic ever called my demands for data “whining.”

        On two occasions now Anthony has posted stuff asking for help and criticism, to do ‘open science’ of sorts. And as I pointed out,
        if he wants good criticism, he has to supply the data.

        On many occasions Anthony has complained about science by press release… I AGREE! But now he wants to do his own science by press release.

        Imagine I did a study that proved CO2 was the cause… of all the evil.
        And I did a press release… but I refused to give you all the data…
        and YEAR after year I said… wait for the data… I need to publish.

        At some point folks are within their rights to say… shut up until you do publish, or publish outside the standard “Science” and “Nature” collection of journals. If the data is true and the method sound, folks like me couldn’t give a rat’s ass about the journal name or “impact factor.”

      • Mosher: Don’t you love newbies who weren’t online when you had WUWT shilling your Climategate book to deniers? Don Don is right, this temperature stuff is a dead horst.

      • Horse, “Don Don is right, this temperature stuff is a dead horst.”

        Most of it is a dead horse, not all of it. It seems like a dead horse because the defenders stick to the dead parts. The majority of the temperature record is LIG max/min, which limits accuracy. Once the MMTS was introduced there was a different set of problems. No matter which you pick as your “standard” you will get different uncertainty ranges and variance. To me, ideal would be a method that maintains a consistent uncertainty for whatever length you want the record to be, so you don’t have “almost unbelievable” accuracy at one point and +/- a degree at the other end. Otherwise you just end up confusing what types of error you are messing with.

        There is the same problem with paleo.

      • OMG! It’s just like 2012. Wheeeeere’s McIntyre when you need him? Tony WUWT is at it again. Poster in the hall at AGU in front of three or four people. Poster science by press release! Oh, the freaking humanity! Somebody please report former Mosher mentor Tony WUWT to the freaking AGU!

        I will spell it out for you, Steven. Tony won’t show you the steenking data because you have been Svengalied by Muller and you can’t be trusted. You will just have to live with your choices.

      • > remember who coined the phrase “free the code, free the data”

        The third bit is missing. First it was “free the debate.” Then a book got published. Then it became “open the debate.”

      • “I will spell it out for you, Steven. Tony won’t show you the steenking data because you have been Svengalied by Muller and you can’t be trusted. You will just have to live with your choices.”

        I remember when Jones didn’t trust Hughes, and said why should I give you my data when you are just going to find mistakes.

        So, what is Anthony afraid of?

        1. That I will take his data and publish before him? Not gonna happen;
        we took the data he gave us before and didn’t publish before him.
        In fact we SUPPORTED HIS CONCLUSIONS IN OUR PAPER!!

        2. take his data and find errors? Wouldnt that make his submission BETTER?

        so what is he afraid of? that I will share it? Nope. he can sue me if I do
        That I will publish before him? nope he can sue me if I do
        plus I only ask for half the data…

        But thanks for arguing that scientists don’t have to share data with people they don’t trust… wait… Jones didn’t trust McIntyre or Willis or me…

        You’ve set a fine standard for science.

      • Steven Mosher: “So, what is Anthony afraid of?”

        Judging by your extraordinary level of agitation and entirely unprovoked attempts to discredit AW et al even before they have published their paper, Mosher, I think the question should be “so, what is Steven Mosher afraid of?”

      • I haven’t set a standard for science, Steven. I am just an outside, objective observer telling it like it is. I don’t know why you scientists, quasi-scientists and wannabes can’t just get along and share your little data things. Especially you and Tony. You used to be tight.

        Anyway, it’s PR about a poster. It may never be a paper. We have more important things to think about. Our brainpower is wasted on this, Steven.

      • It appears Steve McIntyre is no longer an author. If true, why is Steve McIntyre no longer an author of the paper?

    • I would even take a RANDOM SUB SAMPLE of the data.

      At one point I asked for a sub sample of the 92 stations.

      For an entirely unrelated project.

      Answer.

      No.

      • You’d think he might not like you much or something.

      • Anthony says you will get your data. I can see why he wants to keep it to himself given the history. Plus, he goes to all the trouble to gather it, why shouldn’t he get first shot at analysis? That’s only fair.

      • AW has now posted links to the two prior times he got burned on this early provision of data thingy. The second was by BEST, and in express contravention of the data agreement. You will just have to be patient for the paper, as your organization PROVED itself untrustworthy in this regard.
        You all made your bed. Now lie in it and stop complaining here. Why not apologize for your organization’s previous bad behavior over there.

      • jim2.

        I have promised him that I will NOT publish anything using the data.
        I will sign a NDA.

        Further I ask for only a SUB SAMPLE of the 92 stations. or a subsample
        of the BAD stations

        And NOT for the reason you think. basically, I want to use the station
        data to build a classifier that can work on world wide stations.

        basically take a subsample and see how well I can predict which other stations in his collection belong to the groups.

        I dont need all 410 stations. For this project I DONT WANT all 410.
        just a sub sample. With just a sub sample I can prove out the classifier.
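The subsample-train-then-predict workflow described above can be sketched with a toy nearest-centroid classifier. Everything here is invented for illustration – synthetic features, synthetic labels, an arbitrary 40/60 split – none of it is the actual surfacestations data:

```python
import random

random.seed(1)

def make_station(siting):
    # Hypothetical features: (impervious surface fraction,
    # distance to nearest artificial heat source in km)
    if siting == "good":
        return (random.uniform(0.0, 0.2), random.uniform(0.5, 1.0)), siting
    return (random.uniform(0.6, 1.0), random.uniform(0.0, 0.1)), siting

stations = [make_station("good") for _ in range(40)] + \
           [make_station("bad") for _ in range(60)]
random.shuffle(stations)

# Train on a small labeled subsample, predict the rest
train, test = stations[:20], stations[20:]

def centroid(points):
    return tuple(sum(p[i] for p in points) / len(points) for i in range(2))

cents = {lab: centroid([x for x, l in train if l == lab])
         for lab in ("good", "bad")}

def predict(x):
    # Nearest centroid by squared Euclidean distance
    return min(cents, key=lambda lab: sum((a - b) ** 2
                                          for a, b in zip(x, cents[lab])))

accuracy = sum(predict(x) == lab for x, lab in test) / len(test)
print(accuracy)
```

When the classes separate this cleanly, a small labeled subsample is enough to recover the grouping of the held-out stations – which is the whole point of asking for only part of the data.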

        “…the two prior times he got burned on this early provision of data thingy.”

        Can someone explain the harm caused to Anthony when he provided data prior to publication? Is the problem that when he made the data available, and his errors were criticized, that his feelings were hurt?

        Or were his losses financial? Reputational? In what way was he a victim, that so many bleeding heart “skeptics” feel compelled to defend his refusal to make his data available along with the publication of the conclusions he drew from that data?

      • Can someone explain the harm caused to Anthony when he provided data prior to publication?

        They used a preliminary, unedited version of his data to try to discredit his work in advance.

        There’s also claims of this paper being a “death blow” to the surfacestations project. I’m sure in some circles, they believe that to be true. However, it is very important to point out that the Menne et al 2010 paper was based on an early version of the surfacestations.org data, at 43% of the network surveyed. The dataset that Dr. Menne used was not quality controlled, and contained errors both in station identification and rating, and was never intended for analysis. I had posted it to direct volunteers to so they could keep track of what stations had been surveyed to eliminate repetitive efforts. When I discovered people were doing ad hoc analysis with it, I stopped updating it.

        I know your game is just to try to waste people’s time with your dirty insinuations. We all do. It’s clear from the way you make these demands without even bothering to follow links and find out for yourself that you’re operating in very bad faith.

        Your comments here are a prime example of dishonest rhetoric.

      • Mosher, I truly believe your motives are pure in this case. But if I had had to organize volunteers, collate all the data, double and triple-check it, get burned by letting others have it, etc, etc. … I’m just saying I can understand AWs position. Besides, you will get it anyway, a little later perhaps, but still get it.

        I wish I had time to dig into BEST’s code. Maybe someday. What I’m really curious about is how to generate error bars using the sparse, but somewhat global, temperature measurements. If I run an experiment in the lab, I can set up 10 runs of it as similar as possible; take measurements at predetermined intervals, etc. Easy to calculate SD and other stats.

        But in this case, a given measurement may be unique in space – ship measurements, for example. Even though you can use relationships like lat/lon (and maybe altitude someday) to extrapolate the data, the proper statistical technique for getting valid error bars, say daily, is not obvious to me. And you have (among others) at least two cases of a daily temperature: one where a stationary station exists, and one where the temp for that location is calculated. Is there a name for the technique BEST uses to create error bars? Is it standard, in the sense that others have developed and proven the method on synthetic data?

      • jim2

        jackknife and Monte Carlo

        pretty standard stuff.
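For readers wondering what the jackknife looks like in practice: a minimal leave-one-out sketch on made-up anomaly numbers (a generic textbook jackknife, not BEST's actual code):

```python
import math
import random

def jackknife_se(values):
    # Leave-one-out jackknife standard error of the mean:
    # recompute the estimator n times, each time dropping one value,
    # then measure the spread of those n recomputed estimates.
    n = len(values)
    total = sum(values)
    mean_all = total / n
    loo_means = [(total - v) / (n - 1) for v in values]
    var = (n - 1) / n * sum((m - mean_all) ** 2 for m in loo_means)
    return math.sqrt(var)

random.seed(0)
# Hypothetical daily temperature anomalies (deg C) from a sparse network
anoms = [random.gauss(0.3, 0.5) for _ in range(50)]
print(round(jackknife_se(anoms), 3))
```

For a plain mean the jackknife reproduces the textbook s/√n exactly; its value is that the identical recipe still works for estimators (like a kriged field average) that have no closed-form error formula.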

      • ==> “It’s clear from the way you make these demands without even bothering to follow links and find out for yourself that you’re operating in very bad faith.”

        Lol! Demands? Too funny.

        They used a preliminary, unedited version of his data to try to discredit his work in advance.

        You seem to be confused in thinking that you’ve answered my questions:

        Here, I’ll link them so you can read them again:

        http://judithcurry.com/2015/12/17/watts-et-al-temperature-station-siting-matters/#comment-752424

        What was the material harm? Try to answer again.

      • Was your point that he was materially harmed because people tried to discredit his work but failed to do so?

        Presumably his work should be able to stand scrutiny. If someone invalidly attempts to discredit it, then it causes him no harm.

        I don’t see how that is material harm.

        Try again. You might consider leaving off the insults when doing so.

      • What was the material harm? Try to answer again.

        More dishonest rhetoric. Question for the class: why is that sentence dishonest?

      • AK –

        Substantial harm. Significant harm.

        Use your own freaking definition. What was the material, substantial, significant harm caused to Anthony in the past? From what you wrote, it seems that someone used the data to try to discredit his work, and according to your excerpt, they failed to do so.

        Do you consider that to be material/substantial/significant harm?

      • David Springer

        Mosher you’re a scumbag and Watts doesn’t trust you. Deal with it.

      • Do you consider that to be material/substantial/significant harm?

        More dishonest rhetoric.

      • Be a little patient, Mosh. We are on the final lap. Sorry I took so long. We are tweaking it for submission. You’ll get it, all of it. In easily alterable Excel so critical reviewers can play around with it. I want people to see our work. I’m proud of it. I didn’t put in thousands of hours to have my hard work chucked into some inaccessible archive.

    • So we got two little soreheads whining about something that happened in 2012. That effort blew up on them and they have worked several more years to get it right. It’s not like your tax money is supporting the gig.

      You are like two little eager Starwars fanboys who want to see the big show on the first day, but don’t have the patience to stand in the freaking line.

      • What happened to “show me the money” Don? Must be an impostor.

      • Some willy is starting to rub off on you, horse grabber. You should avoid his vicinity.

      • haha.. they are afraid to give the data to an English Major.

      • haha.. they are afraid to give the data to an English Major.

        What data Steven? What makes you think they’ve finished making the changes to their cleaning process required to respond to critical reviews of their paper?

      • if they need to “clean” the data in order to respond to criticism I would have to question the integrity of their data

      • or their workflow at the very least

      • if they need to “clean” the data in order to respond to criticism […]

        The whole thing is an exercise in data cleaning. They throw away the majority of reporting stations because they’re too “dirty”. Valid criticisms of their methodology might well require a few changes to which stations they throw away. All this should certainly (IMO) be accounted for in the final product. Personally, I’m going to remain skeptical till I see it replicated.

        And till I find out what meta-data they’ve preserved WRT their data cleaning exercise.

      • haha.. they are afraid to give the data to an English Major.

        Well, you know how it is, Mosh. English majors are the worst. It’s the way of the wicked world. I’m sure you knew that going in.

      • David Springer

        Actually they are afraid to give it to someone with a demonstrated lack of integrity.

  21. If we can embrace Karl, Cowtan, Way and the others taking a new look at the data sets then why not Watt?

  22. Well, sigh… this again.

    Until they deploy a network of sensors in exclusively pristine areas, with sensors designed, tested, and calibrated to an exact engineering standard, this whole land temperature thing is a joke. There is no reason to put global climate sensors anywhere there is any UHI. “Rural areas” isn’t good enough. I lived in a rural area – it went from dirt to blacktop in 30 years.

    The moment they start “adjusting” and “homogenizing” it becomes subjective. It comes down to which cherry picker is better at picking cherries.

    As a side note, it was pointed out by a link from SM the “length of day” change indicates a 20th century (pre 1990) sea level rise of 1.2 mm/Y. The rotation change this century is about 1/3 what it was last century. It is hard to argue that there is much current warming. If there was significant ocean warming or ice melting the planet would be slowing down like someone put the brakes on. That isn’t happening.

    The rate of warming this century is about 1/3 what it was last century going by the rotational change – and that is the only measurement that is known to be very accurate. So either it isn’t warming much or Antarctica is gaining a lot of ice. The rotation rate indicates the pause is real and attempts to “kill the pause” are misguided.

    • PA: “The moment they start “adjusting” and “homogenizing” it becomes subjective.”

      No, no, no!

      According to Mosher, it’s all done on magic computers by “AlGore-ithms”, so it’s entirely untouched by human hand.

  23. Denizens may appreciate a thoughtful review

    http://variable-variability.blogspot.co.uk/2015/12/anthony-watts-agu2015-surface-stations.html

    It was not clear from Judith’s post which journal had published the paper. It appears from Victor’s review it is in fact a poster and has not yet been submitted for review.

    • Here’s what Watts says on his thread:

      We are submitting this to publication in a well respected journal. No, I won’t say which one because we don’t need any attempts at journal gate-keeping like we saw in the Climategate emails. i.e “I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow — even if we have to redefine what the peer-review literature is!” and “I will be emailing the journal to tell them I’m having nothing more to do with it until they rid themselves of this troublesome editor.”.

      When the journal article publishes, we’ll make all of the data, code, and methods available so that the study is entirely replicable. We feel this is very important, even if it allows unscrupulous types to launch “creative” attacks via journal publications, blog posts, and comments. When the data and paper is available, we’ll welcome real and well-founded criticism.

    • Also on the Watts thread, there’s this

      evanmjones says:
      December 17, 2015 at 8:50 pm
      You’ll have to wait until publication. Twice already we’ve had reason to regret releasing preliminary data. So we must tread with caution. But you shall have it. All of it. That’s a cross-my-heart promise.

      • So let’s wait until publication to see how important this is.

        Judging from Victor’s post, it could be a long wait yet, but we’ll see.

        Watts’ conspiracy ideation you quote is amusing.

      • Trolls are amusing.

      • Projectors project

      • Why can’t you annoying little chumps just be glad that Obama has saved the planet.

      • Chill, Don. Have a read of Victor’s post. You’ll learn something.

      • I read venoma’s post, trollguy. Do you chumps do similar deconstructions every time one of your consensus boys presents a poster at AGU, or publishes a pal-reviewed paper?

        In the grand scheme of things, I don’t expect Anthony et al.’s paper to have a significant impact. What’s all the gnashing of teeth about? You don’t have to try to marginalize and demonize the well-funded deniers any more. The planet has been saved. Merry Christmas!

      • Don, all I posted was a link to Victor. You’ll have a stroke if you can’t calm down a little.

        Look after yourself. Relax over the holidays. Maybe take a break from blogs, they seem to cause you stress.

        God Jul!

      • I usually always like Aussies, trollguy. I will have to make an exception in your case. I am the calming influence on this one. You clowns are making a mountain out of a mole hill. It’s a freaking AGU poster. A WUWT AGU poster. Get over it. Wallow in the glory of your big victory at the Soiree d’ Paree. Weren’t you invited? Why you so mad?

      • Don, I agree with you, as I so often do.

        And everything I’ve posted agrees with you too.

        Your anger saddens me, Don.

        For you. For Christmas.

        http://www.psychology.org.au/publications/tip_sheets/anger/

      • That’s a really dumb tactic, trollguy. You people are crapping your drawers over A WUWT poster presentation. It’s like Tony et al. have breached the walls of the consensus goon’s fort and they have to charge into the breach. Little climate soldiers rushing about whining and debunking. Clowns. This is why the vast majority of the folks in the world don’t take your climate alarmism seriously.

      • > This is why the vast majority of the folks in the world don’t take your climate alarmism seriously.

        Why would anyone take alarmism seriously, Don Don?

        Alarmism is alarmist, after all.

        Enjoy the season, and don’t drink when driving that bandwagon.

      • Willard: you lost the catastrophe argument, no matter what your acolytes say back over at ATTP where you and ken licked your wounds from the last drubbing. Now, you doth proteth thoo muth. Dimmy little Don Don nails the whole thread: Much Ado About Nothing and you people are having an aneurysm over a pathetic denier poster.

      • verytallguy: “Chill, Don. Have a read of Victor’s post. You’ll learn something.”

        I doubt it.

      • Willard: “Why would anyone take alarmism seriously, Don Don?”

        Dunno Willikins.

        Tell you what, as a staunch alarmist yourself, why don’t you tell us?

      • > you lost the catastrophe argument

        Are proofs by assertion the new fad in geologists’ networks, Regions that Lie Between Normal Faults?

        ***

        > Much Ado About Nothing and you people are having an aneurysm over a pathetic denier poster.

        This mind reading not only minimizes Willard Tony’s big moment of science by press release, but also Judy’s opinion of it:

        Anthony Watts has presented an important analysis of U.S. surface temperatures, in a presentation co-authored by John Nielsen-Gammon and John Christy.

        It’s amazing what Denizens could discover just by reading.

      • Assertion? Networks? You admit fear that little Auntie J has real influence beyond Ted Cruz and the muttering purchasers of trivial ebooks (bit coin and paypal gladly excepted). It’s ever so mystifying the daily chaff that turns Willards crank. Evolution is a clearer concept to grasp than a goal to achieve, word-boy.

      • > You admit fear […]

        A quote might be needed, unless that’s another way to blindly mine for mind states like a Kriging king would. After having minimized AGW in a recent thread, you now minimize Willard Tony’s science-by-press-release moment, an episode where NG got us all covered.

    • “Even paranoids have real enemies.”

    • The main issue noted seems to be the conjecture that choosing sites that are well positioned but have no meta-data indicating a move will still include stations that were moved because they were poorly positioned. Plausible. But it would have been nice if Victor, when discussing the methods used to ensure pristine weather stations, had also mentioned that the authors claim to have interviewed curators.

      The second claim is that the paper is weak because no reason is given for the divergence. Of course, building things would be one, and I think that’s mentioned. Then there is the ancillary claim that there has been a slowdown in the divergence. That could be because the economy is sucking. It’s data, and hopefully data that isn’t lying.

      We all ought to be glad if there isn’t as much warming as thought, right (so long as we aren’t in fact cooling into a little ice age or worse)? It will give people more time to do what’s necessary.

    • “Denizens may appreciate a thoughtful review”

      Yep. Let us know if you ever find one. Here’s a hint, if the first thing your climate hero does is link to Hotwhopper then there wasn’t much thought involved.

      • Schitz, closing your mind to expertise merely condemns you to ignorance.

        It’s not big or clever.

      • David Springer

        “Schitz, closing your mind to expertise merely condemns you to ignorance.”

        You should know after living in said condemnation for so long now.

      • Ok tall, since you apparently need it, here’s your second hint.

        Many suspect that all the adjustments and homogenization of the temp records are adding a spurious warming trend, or at least making it larger. Anthony’s work tries to check this by looking at just the stations that have never had station moves, changes in observation times, or anything else that would require their data to be adjusted. And apparently they found a significant difference compared to the official adjusted data.

        Now Double V comes along and says that we don’t know FOR SURE that those stations are really good. Yes, the meta data says they’re good, and the people Anthony talked to say they’re good, but he can imagine some really unlikely situations where they might not be. So what Victor thinks Anthony et al need to do with this is… wait for it…

        HOMOGENIZE IT!

        That’s right, the very same process they just effectively proved is causing greater warming in the adjusted data. And what would you have to adjust the good stations with if you homogenized them? Why, the bad stations that needed all the adjustments in the first place, of course.

        So, do I need to give you any more hints? Or do you think you can figure the rest out on your own.

      • Schitztee: “HOMOGENIZE IT!”

        Homogenization – AKA Mannipulation

      • Schitz,

        yes, I understand the issues. You would understand them much better if you respected the views of those, like Victor, who have deep expertise.

        Your perfunctory dismissal of them, whilst protectively insulating your worldview, prevents you from developing anything beyond a knee-jerk, caps-lock denial of the issues a genuine expert very respectfully pointed out.

        If you wish to be truly sceptical, you need to consider expertise that disagrees with you. Otherwise, it’s just denial.

      • verytallguy: “You would understand them much better if you respected the views of those, like Victor, who have deep expertise.”

        Heh, you’re funny!

      • David Springer

        I read Venema’s response. I did indeed learn something. I learned that Victor Venema is a shallow thinker. In his first criticism Venema argues that there must be far fewer than 410 unperturbed stations because there are, on average, two detectable breaks per station per 30 years so we should expect to find only about 158 unperturbed stations.

        That’s flawed. He is assuming a random distribution of perturbations which is not likely to be the case. A station that is perturbed once is far more likely to be perturbed more than the average of two times. This would be like saying that the average person goes to church twice per year so there should be X number of people who have never gone to church. That conclusion fails to take into account that people who go to church at all go far more frequently than twice per year and are likely to go once per week or 52 times per year. This drives the average way up. Similarly a weather station that is perturbed at all is likely to be perturbed more than the average number of times.

        To not recognize that perturbation distribution is not likely to be random is a sure sign of a shallow thinker. Or a dishonest one.
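The statistical point above is standard overdispersion: at a fixed mean, a clustered count distribution has far more zeros than a Poisson. A stdlib-only simulation, using only numbers quoted in this thread (1218 stations, ~2 breaks per station per 30 years) plus an arbitrary clustering strength chosen for illustration:

```python
import math
import random

random.seed(42)
N = 1218          # USHCN station count, from the post above
MEAN_BREAKS = 2.0 # average detectable breaks per station per 30 yr

def poisson(lam):
    # Knuth's stdlib-only Poisson sampler
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Homogeneous case: every station shares the same break rate
homog = [poisson(MEAN_BREAKS) for _ in range(N)]

# Clustered case: same overall mean rate, but it varies by station
# (gamma-distributed rates give negative-binomial counts; the gamma
# shape of 0.5 is an arbitrary choice for illustration)
clustered = [poisson(random.gammavariate(0.5, MEAN_BREAKS / 0.5))
             for _ in range(N)]

zeros_homog = sum(c == 0 for c in homog)
zeros_clustered = sum(c == 0 for c in clustered)
print(zeros_homog, zeros_clustered)
```

The homogeneous run lands near the roughly 160 "expected unperturbed" stations attributed to Venema in the comment above, while the clustered run, with the same average number of breaks, leaves several times as many stations break-free.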

      • Springer,

        a weather station that is perturbed at all is likely to be perturbed more than the average number of times.

        An interesting assertion. I have no idea if it is factual or not. Evidence for this assertion would convince a reviewer that it was true. Can you point at evidence for this?

        a sure sign of a shallow thinker. Or a dishonest one.

        I’d advise against ascribing malign motives. It’s a classic part of conspiracy ideation and takes away attention from any substantive point you wish to make.

        Victor is hugely more expert in this than you or I. If learning about the issues is your objective, engaging at his blog (minus the insults) would be my recommendation. You’ll get far more from him than from me or any of the denizens here.

      • David Springer

        Venema also makes the statement that neighboring stations experience about the same weather.

        That’s not necessarily true. Hills and valleys, lakes and rivers, trees or grass, all make the “weather” four feet above ground level different despite being at about the same latitude and longitude.

      • David Springer

        Germans protecting their livelihood through cheating isn’t an insult, it’s a statement of fact.

        http://www.wsj.com/articles/vw-scandal-tests-auto-loving-germany-1443217183

        Venema makes his living riding the coat tails of climate alarmism. I suggest you take that into account, dummy.

      • David Springer

        Venema and you could learn from the denizens here. This for instance just upthread:

        http://judithcurry.com/2015/12/17/watts-et-al-temperature-station-siting-matters/#comment-752429

        You both lack common sense. Maybe that isn’t something that can be learned.

      • David Springer

        There are none so blind as those who refuse to see.

        re; insults

        In Venema’s opening paragraph he advises the reader to first go read a primer that supports his thinking and links to that website. The first thing we see on said website (hotwhopper) is a mission-statement banner that reads:

        Eavesdropping on the deniosphere, its weird pseudo-science and crazy conspiracy whoppers.

        From this there’s a bit of a dichotomy in conclusion. Is Venema a passive-aggressive turdball who knew he was linking to something that insults a broad class of people who happen to disagree with his climate change co-conspirators or is he just too dense to realize what he did?

      • Springer,

        Scattering scatological insults through your posts makes any sensible dialogue impossible.

        It is very telling that given the opportunity to interact with a bona fide expert in a field you reject that chance and instead choose to behave in such a manner.

      • I find it amusing that so many people assume that stations located near to each other should record very similar results.

        I was involved a few years ago with installing an automatic meteorological station at an airport in China. It was state of the art stuff.

        We were allotted a location 1km from their existing instruments. It took me 2 years to get them to accept the readings from our station because they were so different to their old system. Rainfall for example was out by ~50%, even after I had both the old and the new rain gauges re-calibrated at an independent laboratory in Switzerland. The only reading that matched exactly was pressure.

        That was for two sets of instruments 1km apart across an exposed runway. Too many people forget that no matter how good in theory such instruments are, we are measuring mere pinpricks.

      • Jonathan,

        In what way are your observations incompatible with the views of experts in the field and what is incorporated into homogenisation techniques?

      • David Springer

        If you need to ask why Jonathan’s experience shows that weather at neighboring stations isn’t, in Venema’s words, “about the same”, then you’re too clueless to be participating in this discussion. Are you stupid or just trolling?

      • vtg,

        The point I am making is that assuming nearby stations will record similar data is not necessarily a safe choice. I am sure that all experts in the field appreciate this.

        You would do well to remember that Victor is not the only expert around here, and even experts can disagree about fundamental assumptions. Your continual appeals to authority on his behalf reflect poorly on you.

      • Jonathan,

        I’ve not appealed to any authority- I’ve made no claims whatsoever, in fact.

        All I suggested was an opportunity to learn from an expert. I’m not sure why that is controversial.

        On nearby stations, it’s not at all clear to me what you are claiming which is different to the expert view.

        Even small station moves are well known to require adjustment.

        Perhaps you’d like to clarify?

      • I always feel a bit silly talking about min/max as if they really represented “temperatures”. Ignoring cloud and general volatility within the period of measurement is hard for me…But put that down to a lack of grounding in “climate science” and too much time spent in the paddock and scrub.

        However…

        If there is a single authority anywhere who does not understand that min/max temp and rainfall often differ critically between nearby sites then I’m surprised. If more than one such authority exists I am staggered.

        Maybe people live in places where a few kilometres do not make a difference. If so, surprise me. Where I live, the min/max temp differential between nearby stations (official town – official AP) is marked and unpredictable. The rainfall differential is enormous. (And the rainfall diff between those sites and my place? And between my place and a mate’s place, just a walk away on the other side of one low ridge? Enormous again!)

        Does this just apply to hill country off river valleys between the Pacific and Great Divide? Hmmm…

        Remember some excitement when Sydney Obs just beat the old daily max record of 1939 by reaching a searing 45.8C on Jan 18 2013? While the heat was not extraordinary away from Sydney that day (unlike 1939) the max readings from other stations around Sydney were a match for the Obs. It was brief, but it was extraordinary.

        However!

        Sydney Harbour (Wedding Cake West), which is just paddling distance from the Obs, only recorded a 34.3C on that very same day. It’s a station which usually runs cooler for obvious reasons, but on that day it ran 11.2C cooler! It wasn’t even Wedding Cake’s hottest measurement for the month.

        All of which should make one think. But will it?

      • vtg,

        The point I am making has nothing to do with resiting stations, I am addressing the idea that one can expect stations sited within a few miles to produce similar readings, and that homogenisation between such stations could be a valid technique for improving data quality. I am not saying that such homogenisation is wrong outright, but I would be very sceptical regarding claims of high accuracy and repeatability in data following such a process.

        Unfortunately you are still making the lazy assumption that I am not an expert in the practical capabilities of surface stations, and the quality of data they produce. Until a few months ago I managed a team of engineers designing, building and installing aviation meteorological systems. In particular I was responsible for obtaining operational approval from national safety regulators. Data quality was paramount in the audits we were subject to.

      • I can attest to the fact that it sometimes rains in my backyard and not in the front. And sunshine can do as well when clouds roll by. In the mtn SW temps can easily drop 20degF on a partially sunny day when a cloud passes between me and the sun! Nice when it is warm out, but not when it is cold!!

      • Correction: the diff between Wedding Cake and the Obs was, obviously, even bigger.

        Makes life still harder for homogenisers…but due to something called CLOUD I often wonder if we should even have these conversations about “temperatures”.

        Clouds, to quote the old song, really do get in the way, don’t they?

        And so, to quote another old song, till the clouds roll by…

      • Jonathan,

        I’m not sure where I made any comment or assumption about your expertise.

        If I maligned you anywhere, please accept an apology.

      • David Springer

        @Jonathan

        Obviously the fearful verytallguy is enamored of Victor Venema and suggests no one else as an expert to “learn from” in regard to temperature series and associated problems. He accuses me of refusing to interact with bona fide experts just because I don’t want to interact on Venema’s blog. I’ve been to enough warmist blogs, which are all more or less cult worship, to know how heretics are treated by both the owners and other participants. Real Climate, Skeptical Science, And Then There’s Physics are prime examples. Censoring is rampant on them. Why should I want to interact on such sites?

        I interact with at least two bona fide experts who don’t delete comments because the content is disagreeable and don’t act like they’re members of a cult with inalterable beliefs. Judith Curry and Roy Spencer are both bona fide experts in the field, don’t delete contrarian content, and treat everyone with respect. I frequent their blogs.

        Jonathan you should give Roy Spencer’s site a try if you haven’t already. The blog comment section is quite well trafficked and the owner is primarily responsible for the only trustworthy (for the purpose of determining precise, accurate to tenth of a degree) global temperature sensing system.

      • vtg,

        I was just struck that throughout your comments you refer to experts as if they are all other people, elsewhere. Sometimes, even on the internet, they are right here ;)

      • David,

        I do occasionally read Roy Spencer’s blog. But I find I struggle just to keep up with the posts and comments here at Judy’s, so I rarely comment anywhere else. Even here I restrict technical comments to subjects I know first hand, or have had time to check for myself.

        I have never bothered to visit Victor Venema’s blog, as I seem to remember him associating with William Connolley in the past, and I consider Connolley to be somebody who values political beliefs above scientific fact. Knowing Connolley, now I’ve mentioned him he’ll probably pop in to say ‘boo!’.

      • verytrollguy protesteth: “I’ve not appealed to any authority- I’ve made no claims whatsoever, in fact.”

        very next words out of verytrollguy’s pudgy little fingers: “All I suggested was an opportunity to learn from an expert. I’m not sure why that is controversial.”

        These characters are not big on self-awareness.

      • http://www.conservapedia.com/William_M._Connolley
        William M. Connolley is a British Wikipedia editor known for his fanaticism in promoting the theory of anthropogenic global warming (AGW) and in censoring the views of critics and skeptics. He is the ringleader of the infamous global warming cabal at Wikipedia, a powerful pro-AGW group that has an iron grip on global warming-related articles.

        William M. Connolley was banned from Wikipedia for a while – and they are pretty liberal and pro-warming. He had misused his administrator privileges to further his point of view in a content dispute… Connolley’s editing on Wikipedia is widely acknowledged to be a conflict of interest.

        Part of the problem with global warming is the global warmers, if they weren’t so vicious, unethical, and dishonest their viewpoint would be more persuasive.

        The views and writings of an associate of Mr. Connolley should be considered biased and suspect.

        That was for two sets of instruments 1km apart across an exposed runway. Too many people forget that no matter how good in theory such instruments are, we are measuring mere pinpricks.

        How would you suggest that homogenization algorithms be tested and what is a successful result?

      • PA,

        I’ve never really thought about it, but in broad terms, if one followed the methodology used in aviation safety certification, one team would define the homogenisation method. The regulator would separately set the pass/fail criteria. Another set of people would carry out field trials with as large a population of co-located systems as possible. The trial period would be a minimum of 12 months. The regulator would then have the trial data analysed and decide the result.

        Something like that.
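
        A minimal sketch of such a blind trial, on purely synthetic data: one party generates the truth and hides a break in it, the candidate homogenisation method (here a deliberately naive single-break mean-split detector, not any operational algorithm) corrects the observed series, and an independent scorer compares both against the hidden truth.

```python
import random

random.seed(42)

# --- Team 1: generate the "truth" and hide an inhomogeneity in it ---
n = 360  # 30 years of monthly values
truth = [0.002 * t + random.gauss(0, 0.3) for t in range(n)]  # small trend + noise
break_point, break_size = 180, -0.8  # hidden from the candidate method
observed = [v + (break_size if t >= break_point else 0.0)
            for t, v in enumerate(truth)]

# --- Candidate method: naive single-break mean-split homogeniser ---
def homogenize(series):
    best_k, best_gap = None, 0.0
    for k in range(24, len(series) - 24):
        gap = abs(sum(series[k:]) / (len(series) - k) - sum(series[:k]) / k)
        if gap > best_gap:
            best_gap, best_k = gap, k
    adj = sum(series[best_k:]) / (len(series) - best_k) - sum(series[:best_k]) / best_k
    return [v - adj if t >= best_k else v for t, v in enumerate(series)]

# --- Scorer: compare raw and homogenised series against the hidden truth ---
def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

fixed = homogenize(observed)
print(rmse(observed, truth), rmse(fixed, truth))  # homogenised should score lower
```

        The pass/fail criterion would then be a pre-agreed bound on that score, fixed before anyone sees the trial data.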

      • Wow. Meta-hominem arguments.

        Finding reasons not to pay attention to expertise assures you of ignorance. Conservapedia is a great example of this: Don’t like the facts? Simply make up your own and ignore the nasty outside world.

        Also, a suggestion: rather than asserting that homogenisation is not possible or applicable, reading the relevant literature would educate you as to the evidence which supports it.

      • Because VTG demanded it: Hint number 3!

        Just because I didn’t fall prostrate at the feet of your Climate Hero, it doesn’t mean I didn’t understand his argument.

        I don’t actually know that much about Double V. I’ve heard of him before but never really come across anything he’s written. (Unless he writes science fiction. That name sounds familiar.) But he linked to Hotwhopper, and I DO know Sou. I know her abject hatred of anyone who disagrees with her opinion, I know how she twists the truth until it looks like a balloon animal, I know how she considers pointing out that some ‘Climate Scientist’ (anyone saying what Sou wants to hear) got a different answer to count as a ‘debunking’. Been there, watched the circus, wrote posts at The Blackboard about it.

        But just because I don’t care much for Victor’s choice in webcomics, that doesn’t mean I didn’t read his article. I did read it. I understood it. And I spotted the logic flaws in it immediately. So no, not impressed by your Climate Hero.

        Now it took me awhile to get back to you, and it seems others have taken up the good fight to help you take a hint. Alas, it appears it was all in vain. You have your chosen expert and nothing we mere denizens say could ever make you question his proclamations. Not that you’re promoting an argument from authority, oh no. Not with all the skeptics learning about logic fallacies from a certain Lord. (Even if some only apply them to the other side)

        Now me, I’m definitely not a Climate Expert. And while you’re welcome to disregard anything I have to say on those grounds, please don’t try to lecture me on what or who I’m, in your mind, unqualified to question. Just because I haven’t published a paper on the changes in migration pattern of African Swallows brought on by Global Warming, that doesn’t mean I can’t spot a glaring hole in some climatist’s argument, any more than I need to be a better economist than Karl Marx to point out that Communism is a less efficient system than Capitalism.

        So here, even if you’re not going to take it, is a free additional hint, just for you: Don’t assume someone is less intelligent than you just because they have a different opinion. Especially when it’s an opinion you need an ‘expert’ to defend for you.

      • Jonathan Abbott | December 19, 2015 at 1:15 pm |
        PA,

        I’ve never really thought about it, but in broad terms, if one followed the methodology used in aviation safety certification, one team would define the homogenisation method. The regulator would separately set the pass/fail criteria. Another set of people would carry out field trials with as large a population of co-located systems as possible. The trial period would be a minimum of 12 months. The regulator would then have the trial data analysed and decide the result.

        Something like that.

        That sounds reasonable.

        NOAA and NASA must follow some sort of similar sensible procedure like this – because they get billions in budget and could cause trillions in costs depending on how their data is used.

        Perhaps someone knows how NOAA and NASA certify their algorithm changes? And who the independent second party is that tests them? And who the independent third party is that approves them?

      • Schitz,

        I made no comment as to whether Victor was right or not, I merely pointed to his article as something people interested in the issues would want to read. An opportunity to interact with an expert in the field (Victor is chair of the Task Team on Homogenization of the Commission for Climatology of the World Meteorological Organization) would surely be welcomed by anyone wanting to learn more of the topic?

        The heaping of opprobrium upon me for making such an innocuous suggestion has been quite remarkable, wouldn’t you say?

      • verytallguy: “The heaping of opprobrium upon me for making such an innocuous suggestion has been quite remarkable, wouldn’t you say?”

        NO!

        Self-awareness isn’t your strong point, is it?

      • David Springer

        “The heaping of opprobrium upon me for making such an innocuous suggestion has been quite remarkable, wouldn’t you say?”

        Not at all remarkable. Your anonymous cowardly climate troll reputation precedes you.

        The suggestion wasn’t innocuous. You refuse to acknowledge that Venema’s opening line linked to hotwhopper which is among the most vile of the warmunist snake pits.

        There are many other experts in the field of temperature sensing. If not for your tunnel vision and trollish behavior you might have suggested a second source.

      • David Springer

        Watts Gores Sacred Cow; Climate Cult Reacts Predictably

        I don’t seek out interactions with cultists on their own turf. It’s really just that simple.

      • verytallguy,

        Richard Feynman said something to the effect that science was belief in the ignorance of experts.

        You propose someone as an expert. I think Feynman was quite a lot smarter than your expert.

        Tell me why I should not believe Feynman, if you wish.

        Cheers.

      • Springer,

        Your continued inability to interact without trading insults is noted.

      • David Springer

        It takes two to trade.

        Freudian slip noted.

      • So, do I need to give you any more hints? Or do you think you can figure the rest out on your own.

        I do believe he’s got it.

        BTW, J N-G did an apples-to-apples set of pairwise comparisons and got a very similar result to mine, although I chopped the major flags and he did not. Think of that as the first step.

        It was the dratted VeeV who got me started on homegrown homog. I played with it until Excel yelled ENOUGH, already. He is the fiend who led me astray. Homogenize just once and you are a homogenizer for the rest of your life. I feel so dirty. Recurse You, Red Baron!

      • I’ve never really thought about it, but in broad terms, if one followed the methodology used in aviation safety certification, one team would define the homogenisation method. The regulator would seperately set the pass/fail criteria. Another set of people would carry out field trials with as large a population of co-located systems as possible.

        That sounds much like my role in the homogenization community. For my most influential paper, I generated a dataset with inhomogeneities (only known to me) and my colleagues would homogenize this dataset. Then I compared the homogenized data with the data before I put in the inhomogeneities. Conclusion: statistical homogenization using neighbouring stations improves temperature trend estimates.

        http://variable-variability.blogspot.com/2012/01/new-article-benchmarking-homogenization.html

        The homogenization of precipitation is more difficult. As already mentioned above neighbouring stations are not correlated very well. Instead of using a difference series, the homogenization is typically performed on a ratio series.
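
        The difference-versus-ratio point can be illustrated with synthetic data (the series, break sizes, and break position below are all made up): an additive break in temperature shows up as a step in candidate minus neighbour, while a multiplicative break in precipitation, e.g. a gauge change catching 20% less rain, shows up as a step in candidate divided by neighbour.

```python
import random

random.seed(1)
n = 240  # 20 years of monthly values

# Temperature: neighbour plus small weather-independent noise, with a +0.5 C
# additive break introduced halfway through the candidate's record.
neighbour_t = [random.gauss(10, 2) for _ in range(n)]
candidate_t = [v + random.gauss(0, 0.2) + (0.5 if i >= 120 else 0.0)
               for i, v in enumerate(neighbour_t)]

# Precipitation: multiplicative noise, with a -20% break (new gauge) halfway.
neighbour_p = [random.expovariate(1 / 50) + 1 for _ in range(n)]  # monthly mm
candidate_p = [v * random.gauss(1, 0.05) * (0.8 if i >= 120 else 1.0)
               for i, v in enumerate(neighbour_p)]

diff = [c - nb for c, nb in zip(candidate_t, neighbour_t)]   # additive break visible
ratio = [c / nb for c, nb in zip(candidate_p, neighbour_p)]  # multiplicative break visible

def mean(xs):
    return sum(xs) / len(xs)

print("temperature difference step:", mean(diff[120:]) - mean(diff[:120]))   # ~ +0.5
print("precipitation ratio step:   ", mean(ratio[120:]) - mean(ratio[:120])) # ~ -0.2
```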

        Naturally I understand that people who have made opposing mitigation a core part of their identity do not like the tone of HotWhopper, but the science is normally accurate as far as I can judge.

        Accurate science is something you cannot say of WUWT and the tone of WUWT is certainly not better. If I am associated with William Connolley, it might be for the blog post making toilet jokes about WC and Enema. Stay classy.

      • Deep thinker Springer, I agree the situation may be more complex than independent random breaks. Reality normally is more complex.

        That is why I carefully wrote: “likely only 12.6% of the stations do not have a break (154 stations). According to Watts et al. 410 of 1218 stations have no break. 256 stations (more than half their ‘unperturbed’ dataset) thus likely have a break that Watts et al. did not find.”

        Otherwise I would have written: “thus only …” Watts et al. (2015) are naturally free to show that there is a case where their numbers are reasonable, but 256 stations missing a break is not a subtle effect.
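
        For what it is worth, the 12.6% figure is consistent with a simple Poisson model of break occurrence. The comment does not state the underlying break rate, so the one-break-per-14.5-years mean used below is an assumption, chosen because it reproduces the quoted numbers over a 30-year record:

```python
import math

years = 30.0
mean_years_between_breaks = 14.5  # assumed; the break rate is not stated in the comment
stations = 1218

# Poisson process: probability of zero breaks in the whole record
p_no_break = math.exp(-years / mean_years_between_breaks)
expected_break_free = stations * p_no_break

print(f"P(no break in {years:.0f} yr) = {p_no_break:.3f}")                  # 0.126
print(f"expected break-free stations = {expected_break_free:.0f}")          # 154
print(f"likely undetected in the 410: {410 - round(expected_break_free)}")  # 256
```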

      • David Springer

        Oh… only “likely”. Not very likely or extremely likely. Thanks for using such precise terms. Not. Krauts are such weasels. I’m still laughing over Volkswagen rigging its cars to cheat in emission tests. Is that how you managed to get a PhD?

  24. Here is the real travesty.

    Why is it that a basic inventory of siting quality of US temperature locations has been done by an outsider in a crowd-sourcing project that includes volunteers?

    Keep in mind that the global surface temperature record is just about the single most important time series in climate research, so making sure that it meets the highest data quality standards, which includes constructing an inventory of siting quality, should be top priority of the funding agencies and governing bodies, and is of interest to everybody.

    Not so much, it has turned out.

    The scientific community rather leaves that task to a few good-willing men and women. Instead, funding agencies and governing bodies rather spend billions of dollars on other topics like computer modelling than spending a few million on data quality.

    And that is just plain stupid, I have no other words for it.

    • Yup. We are spending about $ 22 billion a year on climate change.

      And it isn’t really clear the US is getting warmer.

      This is an actual USCRN site:
      http://www.drroyspencer.com/wp-content/uploads/USCRN-TX-Palestine-6WNW-annotated.jpg

      We should recreate the USCRN correctly. The enabling legislation should require by law that a design/testing/calibration standard for the climate sensor be developed. Further, the legislation should make it a felony to deploy a non-compliant sensor. It would cost $50-100 million to put standard sensor stations on 1/2 mile × 1/2 mile (1/4 square mile) plots of land in about the same numbers as the USCRN network. All vestiges of humanity would be removed from the plot by law. The sensor would be a guaranteed 1/4 mile from any human influence. For $200-400 million you could buy 1 square mile plots and be a guaranteed 1/2 mile away from any human influence.

      This would, by law, be the US climate network and the official measure of global warming. There is no need for adjustments, homogenizing, or anything. Just read it and weep.

      The global warmers claim this won’t make any difference. If the global warmers are right they should be the most enthusiastic backers of the plan. This would pull the teeth of the “deniers” best arguments. Spending less than half a percent (1/4 sq mi) to a little over 1% (1 sq mi) of the annual climate change budget from just 1 year, to deploy a real climate change measuring network is pretty cheap for honest data.

      • PA..

        HAHA… that’s not the only “gold standard” CRN site that has issues.

        I was saving that for my next surprise.

      • You are wasting all the space that Phil Jones saved. What’s up now?

      • And it isn’t really clear the US is getting warmer.

        That’s why it’s good to have the MSU everyone grouses about.

        UAH LT data over CONUS shows a striking correlation with BEST CONUS surface temperature obs. The extents are different, but the peaks and troughs correlate very well.

        The trend for UAH-LT CONUS (Dec 1978 through Nov 2014) is 0.19C per decade, which is close to the AGU-announced 0.204C per decade for the good stations (1979 through 2008).
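
        Trend figures like these are ordinarily just least-squares slopes over monthly anomalies, scaled from per month to per decade. A sketch on synthetic data (the 0.19 C/decade below is built into the toy series, not recomputed from UAH):

```python
import random

random.seed(7)
n_months = 432  # Dec 1978 through Nov 2014
true_decadal_trend = 0.19  # C per decade, built into the synthetic anomalies

t = list(range(n_months))
anoms = [true_decadal_trend / 120 * m + random.gauss(0, 0.25) for m in t]

# Ordinary least-squares slope: cov(t, y) / var(t), in C per month
tbar = sum(t) / n_months
ybar = sum(anoms) / n_months
slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, anoms))
         / sum((ti - tbar) ** 2 for ti in t))

print(f"recovered trend: {slope * 120:.2f} C/decade")  # close to 0.19
```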

      • Except UAH shows no significant warming overall from 1978 to 1997 or from 2000 to today. The only warming is a step up coincident with the big ENSO. So if there actually is a warming trend in the LT, it is just a heat shift within the atmosphere, not GHG warming. GHGs cannot warm the surface and LT without warming the T.

      • David Springer

        Mosher could surprise us all by not acting like a little bitch just because Watts no longer trusts his duplicitous ass with proprietary data

      • Yeah like Watts is going to give his data to Mosh when Muller is his boss.

    • Why is it that a basic inventory of siting quality of US temperature locations has been done by an outsider in a crowd-sourcing project that includes volunteers?

      Oh, NOAA did their bit: They affected great fear that our thuggish volunteers would harass their curators. So they removed from their metadata all the names and addresses where the stations were located.

      (That alone added at least a year to our efforts.)

      I managed to locate and interview a few dozen (thanks mainly to Mac’s partials). They absolutely loved talking about their stations. I was on the phone with some of them for over an hour, heard some great stories. I certainly never dissed their stations or said they had a bad rating. All I did was get the info I needed and then thank them for their patriotic civic-mindedness. And listen to their fascinating tales.

      I also made a point of telling them their stations were a select elite — the USHCN, pride of the fleet. None of them even knew. Not one. A Weather Service station out west didn’t even know. “We’re HCN? No one told us.” Another curator said (with renewed pride), “I’ll never miss a reading again.” NOAA should inform their USHCN volunteers of their stations’ rare status.

  25. I’m confused. The WUWT post says:

    When the journal article publishes, we’ll make all of the data, code, and methods available so that the study is entirely replicable. We feel this is very important, even if it allows unscrupulous types to launch “creative” attacks via journal publications, blog posts, and comments. When the data and paper is available, we’ll welcome real and well-founded criticism.

    Unless I’m misunderstanding the situation in a rather drastic way, they just published a press release and encouraged everyone to promote their results while saying nobody gets to look at any paper, data or analysis supporting those results until some unspecified and unknowable future date. That would mean this is just science by press release, something skeptics have criticized for years.

    This is wrong. You shouldn’t just get to say, “Hey guys, I just proved X” while not showing anything at all which actually supports the idea X is true and have thousands of people believe you’ve really proved X just because you said you did. If you’re not going to share any sort of data, analysis or code, you shouldn’t be publishing press releases.

    “Hey guys, we just got some amazing results. Tell everyone about them! No, you aren’t allowed to do anything to verify our results are true. Why would we want to let people examine our work when we can just tell them what our results are?”

    • Watts’s defense is:

      But, the feedback we got from that effort [a July 2012 draft paper] was invaluable. We hope this pre-release today will also provide valuable criticism.

      • I wonder if he realizes that in no way justifies putting out what he called an important press release. If he had just wanted to update people on his project and perhaps get feedback, he could have posted about it. He could have been up front about the fact that he wasn’t releasing anything for people to actually look at or examine, and pointed out that this means people shouldn’t blindly accept what he says about his results.

        Of course, his previous pre-release actually involved releasing a draft of his paper. This pre-release involves a release of… I don’t even know what. I’m not sure how much feedback he can really expect with how little he’s actually provided. It seems more like an effort at making headlines.

      • > We hope this pre-release today will also provide valuable criticism.

        Brandon just gave one.

        Take note, Willard Tony.

      • Journals won’t accept papers whose contents have all been pre-released, right? My guess is that Watts hasn’t been able to get a journal to accept the paper yet (not necessarily a negative if it’s a skeptical paper), so that’s why he’s keeping the data under wraps. (He should maybe have said this himself, if that’s the case.)

      • rogerknights, it’s not even clear they’ve submitted the paper to any journal yet. The post says they are submitting it to a journal which makes it sound like it’s something they are going to do, not something they have done.

        In any event, he should have waited until the paper was published to issue a press release. Running to the media and trying to get headlines while not actually publishing anything to support what you say is… wrong. To put it kindly.

      • Brandon S? (@Corpus_no_Logos) | December 18, 2015 at 10:43 am |

        In any event, he should have waited until the paper was published to issue a press release. Running to the media and trying to get headlines while not actually publishing anything to support what you say is… wrong. To put it kindly.

        This is silly. The climategate files make it abundantly clear that warmers engage in gatekeeping.

        There is no reason not to do prepublicity on skeptical studies. If the warmers want the data and methods so they can poke holes – they have to let the study get published first.

        The study wasn’t funded by the government so there is no FOIA access right to any of the study, unlike government funded studies where they take delight in frustrating FOIA requests. Recipients of government grants who ignore FOIA requests should be permanently debarred – even if they put athletic equipment in their graphs.

      • The press release is related to the AGU presentation. The press release is here:

        https://fallmeeting.agu.org/2015/press-item/new-study-of-noaas-u-s-climate-network-shows-a-lower-30-year-temperature-trend-when-high-quality-temperature-stations-unperturbed-by-urbanization-are-considered/

        If you monkeys have a problem with the press release, you should complain to the AGU.

      • > Journals won’t accept papers whose contents have all been pre-released, right?

        The GWPF might not mind, and don’t forget that the GWPF set new standards in peer review:

        The review of Golkany’s paper was even more rigorous than the peer review from most journals […]

        https://www.documentcloud.org/documents/2642410-Email-Chain-Happer-O-Keefe-and-Donors-Trust.html#document/p6/a265727

      • Your descent into incoherent irrelevance is almost complete, willy. Pathetic. Where’s your boss kenny? Is he still mad?

      • Do you know if this research has been funded by the Heartland Institute, Don Don?

        Meanwhile, note that Willard Tony’s post contains this sentence:

        We do allow for […] one adjustment in the data, and this is only because it is based on physical observations and it is a truly needed adjustment.

        and that the leading sentence from the press release contains “do not require adjustments to the data.”

        Just imagine if the Editor found out that the IPCC said something like that.

        Willard Tony, take note.

      • I get the impression dinky dimmy Don Don must be pals with some of the WUWT cabal, as he holds them to a very relaxed standard as opposed to his normal born-hard, kick-arse-and-take-names persona. My guess is that he and Chuk the Mod have a Koffee Klatch over in Belmont.

      • Your impression is faulty, little horse grabber. Mr. Tony has banned my humble and gracious self from WUWT for pointing out the flaws in little willis’ nasty character. I just don’t see what all the angst is about. It’s a freaking AGU poster.

      • Why do you say the A word, Don Don?

        NG got everyone covered.

        Let’s enjoy Willard Tony’s moment of “science by press release” as much as he does.

        ‘Tis the season, after all.

        Grab some more egg nog.

        No, put that brandy and that rhum down.

      • Don Monfort: “Your descent into incoherent irrelevance…”

        Don’t you mean “ascent” ?

    • First they need to correct the temperature record by adjusting it.

      John N-G did the statistics with the help of some veterinary students at Aggie State University, so that odor that has you confused actually is BS. He’s a climate scientist and four out of two climate scientists do almost completely perfect statistical work, as is well known, so there is no problem here in jumping the gun before peer review.

      (Texans who did not attend Texas A&M love to make fun of the place because it’s as hicksville as it gets, but it is actually an excellent school and J N-G is an excellent state climatologist).

      • Face facts, everyone can see except our betters.

        “I always thought it was other schools, not our schools,” Manweller said. “But then The Washington Post did an expose on Washington State University where they had uncovered all the syllabi saying, ‘if you used these words, if you write these words, you will fail or be punished in some way’.”

        Yeah, it’s for our kids… you bet.

      • JCH:

        Your comments of late have been more than 100% insults in the same statistical manner by which recent global warming has been more than 100% anthropogenic.

        In addition to being an excellent climatologist John Nielsen-Gammon has 3 degrees from MIT. So you might as well toss in some nerd insults while you’re at it.

      • (Texans who did not attend Texas A&M love to make fun of the place because it’s as hicksville as it gets, but it is actually an excellent school and J N-G is an excellent state climatologist).

      • Solid, liquid or still more gas?

      • Yes, I saw your backhanded compliment the first time which is why I dropped the “state” limitation you insist on repeating.

      • I don’t know how it is a limitation. How would it be possible to be an excellent state climatologist, which is his job title, and not also be an excellent climatologist? I am very familiar with his work. I’m a fan of his. I followed his blog in the Houston Chronicle. He concerns himself mostly with providing valuable and interesting services to Texans. I was thinking there could be a warmest year in an ENSO-neutral year, and within days he wrote a blog post with the same idea, and then it happened.

    • You are just babbling and sinking, willy. Merry Christmas!

      • Seems you can’t walk straight in the threads, Don Don. Hope you have not restarted drinking that early. ’Tis the season when it’s the hardest.

    • “It’s a freaking AGU poster”
      Thanks for bringing me back to rational misanthropy

    • That’s the way it works in the real world. People submit posters and present talks. If they have a blog, they are free to advertise that they are presenting. If a group feels marginalized, would they not logically promote their work a bit more than others in the mainstream? Most people analyze their data (some of which may have shown up in several years’ worth of talks and posters), make the figures, write up the paper and submit for publication, and then when accepted, they can (but don’t always) make their data and programs available. This paper is being handled the same way most papers are, except here they are saying they will make EVERYTHING available upon publication, which is rare. Most people, myself included, wait for someone to ask for it, which rarely happens in my field.

      • Thanks. We will. Sooner rather than later. it’s been a long haul.

        Did I like the publicity? Sure. Very modest pay for all our work. And it got me the chops to get the serious response I needed to make improvements (all of which worked against our hypothesis). That was the real gold speck. Just as Anthony posted back in 2012. I also got to field a ton of questions and read and answer a whole slew of objections. I always learn more about the battlefield from my opponents than from my allies. It was invaluable, indispensable.

        Besides, we are about to submit for review. And I expect a very hairy eyeball. So when the review boyz shoot us them snappy questions, I’d just as soon have some snappy answers.

  26. By the way, has anyone else had trouble commenting at WUWT? I tried submitting two comments, and they both just disappeared. They didn’t show up as awaiting moderation or anything. I was logged into the same account I use to comment here, so I can’t imagine the problem is ensuring I’m not a spam bot, and my comments didn’t include any links/language which would have seemed to trigger any filters.

    I’m wondering if there’s maybe something inadvertently catching innocent comments there. I know sometimes spam filters can act up.

    • I had trouble with the appearance of the site, and with Reply boxes popping up in unexpected places. But I re-launched my browser and the problem’s gone away. But maybe that’s a coincidence, and there was something shonky going on there. Try posting again.

      (I feel for you. A long comment of mine upthread is in moderation because I used too many exclamation points!)

      • David Springer

        How many is too many?????????????????????????????????????????

        Enquiring minds want to know!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

      • David Springer

        How many is too many?????????????????????????????????????????

        Enquiring minds want to know!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

        ———————————————————————

        The number above didn’t land it in moderation!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

      • I guess I was wrong that the exclamation points were the cause of the delay in moderation. It’s out of moderation now, at http://judithcurry.com/2015/12/17/watts-et-al-temperature-station-siting-matters/#comment-752233

    • Maybe you got lost, I found them !!

  27. Talking about min/max temps? Please don’t call them just temps. Drives me nuts. Completely different things. Hot day can have a low max, cool day can have a high max. Max refers to a moment in a day. You can have cloud cooling max and boosting min not just for short periods but for years. (More cloud about Eastern Oz in eg 1950s or 1970s than eg 1930s or 1990s. Bound to mess with min/max, right?)

    Don’t make me change my moniker to And Then There’s Cloud.

    – ATTC (just a warning this time)

  28. Steven Mosher and Victor Venema both point out that some of the stations that were selected for this study may have been disturbed even though there aren’t any records of them having been so. Disturbances of any sort can introduce discontinuities that are normally dealt with via homogenization (or so I understand). But the authors are distrustful of homogenization, since they fear that this process can contaminate well sited stations with a warming bias that allegedly afflicts the worse sited stations. Whatever the merit of this worry, would it not be possible to remove the discontinuities potentially caused by undocumented disturbances by only homogenizing the well-sited stations among themselves? This procedure ought to satisfy everyone, unless I am missing something.

    • It would theoretically be possible to only use the “well-sited” stations for homogenization. The reason this is normally not done is that the quality of the end product depends on how well correlated the neighboring stations are. If the neighboring station experienced almost the same weather, you will see jumps in their difference signal much more easily than when the weather is more different.

      Watts et al. (2015) does apply MMTS corrections. They were computed by comparing the stations with these transitions to their neighbors. Thus it seems as if Watts et al. (2015) accepts the homogenization principle sometimes.

      A way to avoid “contaminat[ing] well sited stations with a warming bias” would be to use break detection only to select a subset without “perturbations”. There is no need to correct the data. The only disadvantage of this method would be that you remove a large part of the stations from your analysis.
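
      That detect-only screening can be sketched as follows (synthetic data; the naive max mean-split detector and the 0.3 threshold are stand-ins for real pairwise methods): a station is kept only if its difference series against a neighbour shows no large step.

```python
import random

random.seed(3)
n = 240

def max_split(diff, margin=12):
    """Largest gap between the mean before and after any candidate split point."""
    best = 0.0
    for k in range(margin, len(diff) - margin):
        gap = abs(sum(diff[k:]) / (len(diff) - k) - sum(diff[:k]) / k)
        best = max(best, gap)
    return best

neighbour = [random.gauss(0, 1) for _ in range(n)]

def station(break_size):
    # shares the neighbour's weather, plus noise and an optional step at month 150
    return [v + random.gauss(0, 0.2) + (break_size if i >= 150 else 0.0)
            for i, v in enumerate(neighbour)]

stations = {"clean": station(0.0), "moved": station(0.7)}
threshold = 0.3  # assumed pass/fail level for this toy example

# Keep only stations whose difference series shows no detectable break;
# nothing is corrected, suspect stations are simply dropped.
subset = [name for name, series in stations.items()
          if max_split([s - nb for s, nb in zip(series, neighbour)]) < threshold]
print(subset)  # only the break-free station should survive
```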

      • Actually, in our JGR paper on UHI we reran homogenization using only rural stations to homogenize, for four different definitions of urbanity. It wasn’t too hard to do, and it helped us demonstrate that the adjustments weren’t “spreading” urban warming to rural stations. That said, we could use the whole co-op network for that purpose; it might be a bit harder with only 400 or so HCN stations, as you would probably miss some issues.

        Regarding the Watts paper, it will be interesting to look at his results in more depth when the data is available. Until then it would be premature to speculate.

      • Thanks Zeke, I’m glad to hear someone already thought of doing that. It will be nice to see if it can be done again with this new set of “unperturbed” stations from Watts et al. 2015.

      • …I meant, for the subset of the “unperturbed” stations that are deemed by them to be well-sited.

      • Yes, yes, yes (but the devil is in the details).

        I think missing metadata is probably the biggest problem.

    • One thing that skeptics dont get is that ALL RECORDS HAVE ERRORS.

      records of temperature
      records of station moves and changes.

      So I kinda have to laugh when folks say they went over a B-91 and that the record “proved” something, or that lack of documentation “proved” something.

      As for the siting criteria CRN1-5

      Ask about the field test performed to establish that specification.

      • So, when are warmunistas going to stop saying that any of their favored temp trends is proof that it’s co2 that done it, and that urgent and drastic action is required to decarbonize and impoverish once wealthy nations to save the planet?

      • One thing that skeptics dont get is that ALL RECORDS HAVE ERRORS.

        And evidently, some records have more errors than others, something a data series that referred to itself as BEST should have been telling us, not waiting on a third party.

      • David Springer

        Steven Mosher | December 18, 2015 at 9:30 am | Reply

        “One thing that skeptics dont get is that ALL RECORDS HAVE ERRORS.”

        That’s the thing that “skeptics” get better than anyone else poseur boy. The consensus (including milquetoast luke-warmers) doesn’t get the fact that older land surface records are so flawed as to be unacceptable for the task of establishing global average temperature trends with tenth-degree accuracy. Duh.

      • David

        You are right.

        I have looked at historic records more than most. We can accept their generality in terms of broad bands, such as very cold, cold, mild, warm, very warm etc.

        However, if we want scientific accuracy to tenths of a degree they can probably only be obtained from the automatic weather stations from the 1980’s onwards, always assuming they were sited and maintained correctly.

        I certainly wouldn’t base public policy on the accuracy of a global land record to 1880 or a global ocean record to 1860.

        Glacier records are also pretty good in their generality, as we can go back some one or two thousand years without harming a single tree.

        tonyb

      • Actually, Tony, even if the stations are perfectly accurate, the statistical methodology is not accurate to a tenth of a degree. Statistical sampling theory is based on probability theory, so the first rule is that the sample must be a random sample of the population, period. The corollary is that no valid statistical inference can be drawn from a convenience sample, which these surface samples most certainly are. The surface sampling system needed to produce accurate statistics has yet to be built.
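        The convenience-sample point can be illustrated with toy numbers (nothing here represents the actual station network; the “field” and both samples are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy temperature field with a latitude gradient: warmer at low latitude index.
lat = np.linspace(0, 90, 91)
field = 30.0 - 0.3 * lat + rng.normal(0.0, 1.0, lat.size)

true_mean = field.mean()

# Random sample of 20 points: an unbiased estimate of the field mean.
random_sample = rng.choice(field, size=20, replace=False)

# Convenience sample: points clustered in the warm third of the field,
# standing in for stations placed where observers happen to live.
convenience_sample = field[:30]

print(round(true_mean, 1))                  # roughly 16.5
print(round(random_sample.mean(), 1))       # close to the true mean
print(round(convenience_sample.mean(), 1))  # biased warm by roughly 9 degrees
```

        The clustered sample stays biased no matter how many points it contains, whereas the random sample’s error shrinks as the sample grows; that is the force of the sampling-theory objection.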

      • A lot of people are going to be upset when they learn the millihair scale doesn’t exist.

      • “One thing that skeptics dont get is that ALL RECORDS HAVE ERRORS.”. Behind this perverse comment lies Mosh’s perverse belief that “skeptics” of any hypothesis have a duty to provide a successful counter-hypothesis. To do so, he supposes, they must employ some kind of temperature record. Said record will have flaws, so the skeptic’s hypothesis will be no better than the one he is challenging.

        One day he’ll grasp the concept of disconfirmation, but until then…

      • David Springer

        “One day he’ll grasp the concept of disconfirmation”

        Don’t bet on it.

      • “Behind this perverse comment lies Mosh’s perverse belief that “skeptics” of any hypothesis have a duty to provide a successful counter-hypothesis. To do so, he supposes, they must employ some kind of temperature record. Said record will have flaws, so the skeptic’s hypothesis will be no better than the one he is challenging.

        One day he’ll grasp the concept of disconfirmation, but until then…”

        1. There is no intellectual requirement to produce a counter theory.
        2. Pragmatically speaking, you lose if you don’t.
        3. The end goal is to produce a better explanation. Mere criticism loses.

      • 1. There is no intellectual requirement to produce a counter theory.

        Fine. This is the second coming of the MWP. Barley was easy to grow during the MWP on Greenland and the sea level was 6 inches higher.

        When we get to where indisputably the current time is warmer than the MWP, the sea level is just as high, and Greenland is having bounteous barley harvests we can revisit the “humans make it warmer” theory. Until then there isn’t even a potential problem to address.

      • PA,

        Bugger Greenland. I’m waiting for Antarctica to become ice free and fertile again, as it was. Maybe the permafrost in the North will unfreeze, and large grazing animals will repopulate the areas.

        The Golden Age awaits! More CO2 is what we need – for plant food, of course – it’s not worth a cracker for warming anything!

        Wotcha reckon?

        Cheers.

      • Bugger Greenland.

        Too large a target for me – I will leave that in other people’s capable hands.

        On other subjects. Now that I look at the leap second adjustments I’m honestly worried we are slipping back into the ice age.

        https://en.wikipedia.org/wiki/Energy_subsidies
        https://upload.wikimedia.org/wikipedia/commons/4/47/US_energy_consumption_by_source_2012.png

        I’m not even sure renewable-energy-level subsidies for fossil fuels (25+ times current levels of fossil fuel subsidies) can keep us out of an ice age. But 5 times higher fossil fuel subsidies are easily justified, and we can cut renewable subsidies by 80-90% to bring them to parity (5 up x 5 down = 25).

        Just the additional food it will bring us and the potential to stave off future starvation would more than justify the higher fossil fuel subsidies. Averting an ice age would just be a lucky fringe benefit..

        I am concerned that global warmers deliberately want to starve people and bring on an ice age. I’m not sure what their disturbed and twisted reasoning is, just that it is disturbed and twisted.

      • PA.

        Skeptics lost. Even with Republican “deniers” running the show, we ended up with billions of subsidies for renewable energy that “solves” a problem that you guys argue doesn’t exist.

        That is pretty funny.

      • David Springer

        Only a warmist trying to save face would say “skeptics lost”. We won and won big. The 2016 budget includes a 5-year extension and gradual phase out for tax credits for wind and solar. Republicans voted for that in trade for Democrats allowing an immediate end to a 40-year ban on crude oil exports.

        Warmists want TRILLIONS, Mosher. Instead they got some measly tax credits on wind and solar plus a glut of US crude oil on the world market that will keep supply up, price down near $35/bbl and work to foil any OPEC plans to run crude price back up to $75/bbl+.

        How that can be spun into a loss for skeptics is beyond me. I’d have voted for it in a heartbeat. I’d have voted for it without lifting the ban on US crude oil exports. That’s because wind and solar have a place in the grand scheme of things. Not a huge place but a place nonetheless. Fossil fuels won’t last forever. I couldn’t care less about CO2 emission or global warming. I care about running out of finite natural resources which necessarily includes fossil fuel.

        Write that down.

      • Skeptics lost. even with republican “deniers” running the show we ended up with billions of subsidies for renewable energy that
        “solves” a problem that you guys argue doesnt exist.

        Skeptics won: subsidies for solar and wind are tiny compared to what the alarmists want(ed) to do (as Springer mentions above). Their impact on the economy, even if they don’t work, will be minimal.

        BEST won: opening LNG exports will provide huge support for development of gas as a “bridge”.

        Proponents of solar won: if solar PV continues its exponential decline in cost, this will be enough to push it over the hump. Similar for wind, although I’m skeptical about its scaleability.

        The only real losers are the socialists who wanted to use “global warming” as a stalking horse for their own agenda. Everybody these days is saying that capitalism can solve the problem.

        And skepticism certainly played a part in that: the uncertainty about the magnitude of the problem, and whether it even exists, certainly (IMO) influenced people’s willingness to impose the known problems/risks of giving up “free-market” capitalism.

    • would it not be possible to remove the discontinuities potentially caused by undocumented disturbances through only homogenizing the well-sited stations among themselves? This procedure ought to satisfy everyone, unless I am missing something.

      You do not seem to be missing anything I can see. I have been urging the VeeV to do just that. Save the GHCN. Be a hero. Win a place of dishonor in the Deniers’ Hall of Shame.

      Of course, with the GHCN, he might have to infer some of that metadata. (Ick.) State of the wicket is not good.

  29. Trend differences are not found during the 1999- 2008 sub-period of relatively stable temperatures, suggesting that the observed differences are caused by a physical mechanism that is directly or indirectly caused by changing temperatures.

    But temperatures are relatively stable over 1980-1997, which is when the discrepancy opens up, so this suggestion doesn’t seem to have any merit.

    • That paragraph is an error.

      To clarify:

      There is cooling from 1999 – 2008. Poorly sited stations cool faster.

      There is an essentially flat trend from 2005 – 2014. COOP and CRN show no significant divergence (seeing as how there is no trend to exaggerate).

  30. Well,

    1. Though significantly less, ~2C per century is still warming.

    2. People take veracity of observations for granted:
    “Everyone believes the measurements, except those who take them.
    No one believes the models, except those who make them.”

    3. Homogenized crap is still crap.

    4. Because of agricultural concerns, the US has one of the oldest histories of meteorological observations. Older doesn’t necessarily mean better, but I would have to think the problems are even worse in the rest of the world.

    • TE writes: 4. Because of agricultural concerns, the US has one of the oldest histories of meteorological observations. ”

      LOL. American parochialism is so quaint.

      • Keyword: “oldest”

        American parochialism is so quaint.

        Many non-American climate records go back *beyond* 150 years. Maybe try a new graph. Perhaps one that goes back 400 or more years.

      • David Springer

        Stupidity isn’t quaint. The oldest city in the US, St. Augustine Florida, has been continuously occupied for 450 years. Several others have been occupied over 400 years and very many over 350 years.

        Hopefully the Oneill’s in Wisconsin read this and become a tiny bit less stupid. One can only hope.

      • Ah, the new nations stake their claim. There are cities which have been in continuous occupation slightly longer than that elsewhere in the world.

      • Indeed, the city nearest me has been continuously occupied for around 1000 years, and that’s young compared to many others

      • David Springer

        It’s not how long the cities been there but what those cities have accomplished. European superiority is so quaint. Any of you boys have cities whose citizens drove golf carts on the moon? LOL

      • “You might have a point, but you don’t:”

        Have you noticed how badly under-sampled, spatially, large chunks of the world are? Shall we just estimate/model the rest?

        RichardLH: “Have you noticed how badly under-sampled, spatially, large chunks of the world are? Shall we just estimate/model the rest?”

        Don’t worry, Richard.

        REAL climate “scientists” don’t have any use for data anyway.

        “The data doesn’t matter. We’re not basing our recommendations on the data. We’re basing them on the climate models.”

        ~ Prof. Chris Folland ~ (Hadley Centre for Climate Prediction and Research)

        What “scientist” in their right mind would bother with the readings of $10 thermometers when they can use $100,000,000 computer games climate models to just make stuff up?

      • “It’s not how long the cities been there but what those cities have accomplished. European superiority is so quaint.”

        And there’s me thinking CET was a short record :-)

      • Climate scientists model the global temperature to compare to the models.

        They don’t say that is what they do, but they do it anyway.

    • Yes (though I am a bit more lukewarm than that).

      Yes (but I bore that in mind and tried to be good).

      Yes (but — if properly applied — it is arguably better crap).

      Yes (metadata: crap). But if it must be inferred (ick), at least do it right, and include new factors as they arise. It couldn’t be any mushier than it is, anyway. Welcome to homogland, where every datapoint is an outlier (but some being more equal than others).

      We shouldn’t be in a position where we need homog to record temperature going forward (yet we are). But we can’t redo the past. Missing metadata is missing metadata. Raw data won’t do. Just won’t. Some sort of inference is necessitated.

    • Modelling the 3D Temperature Field from the point sampled data in order to compare it to the models allows errors at both ends of the exercise.

  31. There’s also cooling from the output of heat pumps.

  32. The problem of people using straight line ‘trends’ of any climate series is that I want to utter the rather dry observation

    “The data capture window available does not support the bandwidth required to get to that frequency.”

    • Linear trends are climate porn. Like Shaw, we are merely haggling over a price. And the going rate is 30 years. But there’s a rumor Madame ENSO is taking about upping it to 60. And love may grow, for all we know.

  33. And that was in general, not directly related to this paper!

    • The whole idea of ‘homogenization’ is not to actually remove errantly introduced warming bias (UHI effect). Rather, it is to indelibly redistribute the amount of the error throughout the record. The end result of any consequence is at best, illumination of a trend. But, when it comes to global warming, we already are aware of the many trends that exist and they all depend on the start point.

      • My point was that for whatever reason you wish, you can’t draw a straight line on a time series and have it carry any meaningful information.

        The best you can do with any series is probably to limit what you claim as a ‘trend’ to something bounded by a single sine wave over the series, as a lower frequency limit, if you are going to be ‘accurate’.

        Sure the line you draw MAY be right. But you can’t call it a scientific fact. You can’t tell the future and you don’t know the past beyond your series. Outside of the capture window the calculations are blind.
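        That capture-window limit is easy to demonstrate: fit straight lines to 30-year windows of a pure 60-year sine oscillation, which has no long-term trend at all, and the recovered “trend” flips sign depending on where the window falls. (A toy series, not any actual temperature record.)

```python
import numpy as np

t = np.arange(120)                     # 120 years, annual steps
cycle = np.sin(2 * np.pi * t / 60.0)   # a pure 60-year oscillation, zero long-term trend

def window_trend(start, length=30):
    """Least-squares slope over a window, in units per year."""
    seg = slice(start, start + length)
    slope, _ = np.polyfit(t[seg], cycle[seg], 1)
    return slope

rising = window_trend(45)    # window spanning trough to peak: strong positive "trend"
falling = window_trend(15)   # window spanning peak to trough: equal and opposite "trend"
print(round(rising, 3), round(falling, 3))
```

        Neither slope says anything about where the series goes outside its window; the calculation is, as the comment puts it, blind beyond the capture window.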

      • True, climatists pretend they can tease out the eerie solitude of a lonesome flugelhorn amidst the cacophony of an orchestral warm-up by turning a deaf ear to everything other than what they want to hear.

      • The whole idea of ‘homogenization’ is not to actually remove errantly introduced warming bias (UHI effect). Rather, it is to indelibly redistribute the amount of the error throughout the record.

        That’s what I thought, at first. Would that it were! That would hide the divergence, but at least it would not make it worse.

  34. It really is interesting how so much of climate science effort is spent defending questionable methodology instead of looking for better methods.

  35. Pingback: Quote of the Week – Watts at AGU edition | Watts Up With That?

  36. NASA has admitted that the surface station measurements are inferior to the satellite measurements by removing most surface stations in the 1980s.

  37. Let’s see how this fares:

    Here, in my opinion as a 30-year TV/radio/web media reporter on science, is what should be in any professionally produced science press release:

    [1] The name of the paper/project being referenced

    [2] The name of the journal it is published in (if applicable)

    [3] The name of the author(s) or principal researcher(s)

    [4] Contact information for the author(s) or principal researcher(s)

    [5] Contact information for the press release writer/agent

    [6] The digital object identifier (DOI) (if one exists)

    [7] The name of the sponsoring organization (if any)

    [8] The source of the funding for the paper/project

    [9] If possible, at the minimum, one or two full sized (640×480 or larger) graphics/images from the paper/project that illustrate the investigation and/or results

    http://wattsupwiththat.com/2012/09/24/science-by-press-release-where-i-find-myself-in-agreement-with-dr-gavin-schmidt-over-pr-entropy/

    No mention of the title.

    No mention that the paper is unpublished.

    No contact information.

    No mention of the writer of the press release (notice the “Lead author Anthony Watts said of the study”).

    No mention that there can’t be a DOI.

    No mention of the sponsors.

    All we got is [3] and [9].

    Take note, Willard Tony.

    • We will certainly be paid on Tuesday, if you can just get your hamburger today.

    • There isn’t any journal paper, wee willy. The PR is about a little poster presentation that Tony made to three or four people in a hallway at the AGU meeting. It’s an AGU press release promoting a little poster presentation. The AGU will surely be interested in your hyperventilating over it. They will probably retract and profusely apologize.

      You are deteriorating, willy. Get checked.

      • Willard Tony’s science-by-press-release doesn’t seem to meet Willard Tony’s guidelines for “science by press release,” Don Don. There’s no need to minimize this. It’s no big deal. You’re a fun chap, mostly non-violent when you don’t drink.

        Relax. ‘Tis the season.

        NG got us covered.

      • Did Tony write the press release, wee willy? Do you know whose web site published the press release, wee willy?

      • > Did Tony write the press release, wee willy? Do you know whose web site published the press release, wee willy?

        Thank you for making my point, Don Don. Which was also Willard Tony’s point, you know. Seems the spirits of Christmas Present make us all agree.

        Please don’t minimize this agreement too!

      • I don’t have any more time for you, willy. I hope kenny doesn’t stay mad. At least he is coherent.

      • NG got us covered.

        You can say that, again. And again. And again.

    • I think you are suffering from some sort of premature … something … disease.

  38. Note to all regarding Steve M’s comments (many above):

    Anthony Watts has responded on his own blog here:

    http://wattsupwiththat.com/2015/12/18/quote-of-the-week-watts-at-agu-edition/

    with this:

    “I’ve been reading the comments about my press release at WUWT, Bishop Hill, and at Dr. Judith Curry’s place and most have been positive. There is the usual sniping, but these aren’t getting much traction as one would expect, mainly due to the fact that there’s really not much to snipe about other than Steve Mosher’s usual whining that he wants the data, and he wants it now.

    Sorry Mosh, no can do until publication. After trusting people with our data prior to publication and being usurped, not once but twice, I’m just not going to make that mistake a third time.”

    It is very difficult to regain someone’s trust once lost.

    When Anthony et al. publish, they will include all the data and the code and the methods.

    But, ONLY after publication (and the expiry of the publication embargo).

    • Anthony is a straight up guy. I believe he will give us the whole banana. I just wish UAH would do the same. I don’t believe they have, but if so, I apologize.

      • Apologize. Each version has had two forms of documentation. Spencer blogs about the changes. His post on V5.6 to V6 beta was very detailed as to what why and how. Then they publish a peer reviewed paper on what was done. They also provide data set version control.

        OTOH, take for example the NCEI switch from homogenization 1 to homogenization 2 (IIRC about 2008-9). Yes, they published papers on the changes at the time. But very provably there have been multiple additional changes since that are ‘dark’ (no information anywhere). All you have to do is compare successive years’ versions of past years to see what changed, but not why or how. Worse, NASA’s website explains its correction for UHI using Tokyo as the example, and then provably does just the opposite for major urban GHCN stations around the world. Essay When Data Isn’t has multiple examples for both NCEI and GISS.

      • I want the actual code, not papers.

      • Note to Steven Mosher ==> Can it be possible?

        You HAVE written Anthony Watts a letter or email asking politely for the data, right?

        Right?……

        Is it possible, somehow, that you forgot that step? ASKING….?

    • Kip I am shocked! I believe Gavin Schmidt posted, “If he didn’t want his data stolen he shouldn’t have posted it on line.” Nothing quite like scientific warmth and fair play.

    • ““I’ve been reading the comments about my press release at WUWT, Bishop Hill, and at Dr. Judith Curry’s place and most have been positive. There is the usual sniping, but these aren’t getting much traction as one would expect, mainly due to the fact that there’s really not much to snipe about other than Steve Mosher’s usual whining that he wants the data, and he wants it now.”

      Actually I wanted it back in 2012, and I predicted that people would resort to special pleading… like claiming someone will “steal” the data and publish. So I promise, I will sign a document and even put MONEY ON IT that I won’t use the data to publish a paper or blog post or anything. And faced with that, Anthony gives “Mann-like” responses.

      Now here is what is going to happen. The reviewers will ask them to address the criticisms that Victor raised. They are SOLID criticisms… maybe not paper killers, but they are SOLID criticisms, issues that MUST be addressed, like taking the analysis through 2015… like addressing ALL of the siting criteria (like shading). Anthony and company will refuse to address these criticisms, the paper will languish, and the data will remain… unpublished.

    • Kip

      “Note to all regarding Steve M’s comments (many above):”

      Which of Steve McIntyre’s criticisms did Anthony Address?

      • If I had meant Steve Mc, I would have said so…..of course, it is perfectly clear if one reads the Anthony Watts quote.

      • Kip.

        Since Anthony hasn’t addressed my criticisms, I had to assume you meant Steve McIntyre.

      • Reply to Steven Mosher ==> I don’t think Anthony is going to play your little game.

        Neither am I.

      • It’s simple, Kip.

        #1. Skeptics have been rightly suspicious of peer-reviewed science. Me too.
        #2. When someone makes a scientific claim, regardless of the venue, the scientific method requires that other people be able to reproduce your work. That requires data and method, typically code.
        #3. These principles hold REGARDLESS of where the claim is made: in a journal, on a blog, wherever.
        #4. When someone makes a scientific claim, we get to ask for the data and methods. Even in science fairs for 5th graders.
        #5. If they don’t produce the data and methods, they are not doing science. They are doing advertising.
        #6. In 2012 Anthony published a paper on his blog with McIntyre and Christy as co-authors. Data was not released. This is not science; there are no scientific findings in that paper. No data, no code, no science. There was a promise of science.
        #7. In 2015 a poster was presented. Not peer reviewed. No data, no code. Not science. It’s an advertisement about something that might be science.

        The bottom line is that Anthony and Evan are not doing science. They are talking about science they might do someday. They could publish the paper and data today. Then they would be doing science. They could post it in any number of open journals. That would be science, provided they gave the code and data.

        They give one reason for not releasing data.

        In the past, someone used data that they freely posted on the web to write a paper. Imagine that!! Someone used data to do science.

        Their sole reason for refusing to release data is that somebody might take data that they have been sitting on for three years plus and write a paper. But if the data contain the truth, what kind of paper could be written? Are folks afraid that we will take the data and do what we did before? And what did we do? We showed that Anthony’s first paper was correct!! Imagine that. If the data contain the truth… shouldn’t we get that truth out ASAP before our economy is ruined?

        Where is snowden when you need him?

      • Stevem Mosher: “The bottom line is that Anthony and evan are not doing science.”

        Anthony and his colleagues have really thrown a scare into you and your friends Venemous and that silly woman with a fantasy about Hot Whoppers haven’t they Mosher?

        There is a school of thought that believes you would not recognise science if it bit you on the rump.

        But “science” now, that’s a different thing altogether.

      • Reply to Steven Mosher ==> It’s simple, Steven.

        Anthony DOES NOT TRUST YOU ANYMORE. It is not just you — he doesn’t trust the guys at the NCDC either — but your BEST team did blindside him once.

        He promises will share everything in the proper order at the proper time.

        (eGads! You and Willis — I demand this, I demand that, as if you were the royal princes of Climate Science.)

        Just to clear the air —

        Please post here, below,

        1. the collegial letter or email you have sent to Anthony and his Team asking for the data on which they based their AGU presentation

        *and*

        2. their reply, if any.

        I would like to see *exactly* what it is that you say they have refused.


    • Kip: I think I understand perfectly the situation that you and the rest of the team are in. In this Internet age you can get to the point where the mere placing of a proposed thesis may prompt a race. The fact that your thinking may well have leaked out over multiple blogs, comments, etc. ahead of time is now real.

      As to the data: well, that’s a race between paper and the Internet. It is easy to see how that will come out.

  39. Good timing for this to come out while NOAA is desperately trying to avoid complying with Congressional oversight into whether politics is driving ever-higher surface data adjustments. This farce has gone beyond mere confirmation bias and entered the realm of Lysenkoism. Do they really not understand that billions of tax dollars come with certain legal strings attached? “Shut up and go away” is not an acceptable response to a Congressional subpoena. I hope these savages end up in jail for obstruction, their behavior is horribly corrosive to both scientific integrity and the rule of law.

    It’s gotten so ridiculous, conspiracy theorists like Scott K Johnson at Ars Technica seem to think ordinary government oversight is some beyond the pale Inquisition, even as prominent Democrats are vocally trying to outlaw skepticism.

    The degree and speed with which inconvenient facts are now memory-holed by AGW proponents is breathtaking; it was not that long ago that the respective accuracies of satellites and surface stations were uncontroversial.

    • NOAA released the code and data and email from non-scientists.

    • NOAA released only what was nonresponsive to the inquiry into what role motivated reasoning and political bias may have played in the adjustments, questions that are totally reasonable given graphs like this and this even before the Watts study.

      The taxpayers paid for the scientists as well as the nonscientists. No oversight? Fine, no funding. Shut NOAA down until they comply.

    • Ooh, you mustn’t go there. Today we try them? Tomorrow they try us. Let’s just do science.

  40. That siting matters greatly in setting the “trend” observed in station records has been known by professionals for many decades. Conrad and Pollack in “Methods of Climatology” called attention, in particular, to the problem of diminished correlation between urban and nearby non-urban records. The UHI evidence from megacities that have developed subsequently is unmistakable. While Watts et al. add further evidence, the stark effects of UHI are manifest not in 30-yr “trends,” which are particularly responsive to 60-yr oscillations, but in much longer sojourns of mean temperature resembling a logistic curve in various stages of saturation.

  41. Very many people now have personal weather stations online.

    Absolute temperatures are not the same as temperature anomalies over decades, but you can get an idea of siting problems by looking at the range of different temperatures, over a small area, on WunderMap.

    • Winter road closures can tell us something –e.g., the Tioga Road is currently closed due to snow. When the Tioga Road is closed it is not possible to drive to Tuolumne Meadows or enter Yosemite National Park from the east. Usually, that’s something that occurs sometime in November. When the Tioga Road is closed, there is no global warming. We’re doomed!

    • They should really stop reporting anomalies and just report absolute temperature, the anomalies make it too easy to game the numbers by cooling the baseline.

      A cynic might suspect anomalies are preferred because the absolute temps would tend to make people laugh.

      • I feel your pain. I never did like turning an item of data from what it is to what it is not. something I have done a million times (est.).

        But anomalies are necessary. If there is a gap in the data or station dropout, you can throw a mondo offset in even if the trends are the same. If you anomalize, then you wash away that error and apply an adequate band-aid.

        When we were cranking up for the final version, for our unperturbed 1\2s, I kept getting around 0.205C/d, while another on our team was getting 0.151. That was because of a dropout in Region 9 that threw the trend from +0.040C/d (correct) to around -6.95. If we do not anomalize, our butts will be hanging out, soon to be in a sling.

        If you have complete data and no dropout, then you have it so golden, the issue never comes up. There is no need to anomalize no-dropout, infilled data to do your trends. (But it doesn’t hurt if you do.)
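        The dropout effect described above can be shown with toy numbers (all values below are invented for illustration; this is not the study’s data or code):

```python
# Two stations with the same 0.02 C/yr trend but a 5 C offset between them.
# Station B drops out halfway through the record.
years = range(10)
station_a = [10.0 + 0.02 * y for y in years]    # warm, low-elevation site
station_b = [5.0 + 0.02 * y for y in range(5)]  # cool site; no data after y=4

def mean(xs):
    return sum(xs) / len(xs)

# Averaging absolute temperatures: the mean jumps ~2.5 C the year B vanishes,
# a "mondo offset" that has nothing to do with climate.
absolute = [mean([station_a[y]] + ([station_b[y]] if y < 5 else []))
            for y in years]

# Averaging anomalies: subtract each station's own baseline first, and the
# dropout barely matters because both stations share the same trend.
base_a, base_b = mean(station_a[:5]), mean(station_b)
anomaly = [mean([station_a[y] - base_a] + ([station_b[y] - base_b] if y < 5 else []))
           for y in years]

print(round(absolute[5] - absolute[4], 2))  # 2.52  (spurious jump)
print(round(anomaly[5] - anomaly[4], 2))    # 0.02  (just the real trend)
```

        With complete data the two averages give identical trends, which is the no-dropout case described above.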

      • David Springer

        Like he said, anomalies make it too easy to game the system.

    • BTW the best proxy might be Great Lakes ice — record extents and record late ice were reported during what non-adjusted weather stations reported as record cold, but NOAA reported the Great Lakes temps as about average.

      Utterly ridiculous.

  42. There is a perfect 1:1 relationship between the upward bias on the temperature trend and the upward bias on the size of government, suggesting that the upward bias on the temperature trend will be corrected when the size of government is reduced.

  43. Year   Tioga Rd Closed
      2016   –
      2015   1-Nov
      2014   13-Nov
      2013   18-Nov
      2012   8-Nov
      2011   17-Jan
      2010   19-Nov
      2009   12-Nov
      2008   30-Oct
      2007   6-Dec
      2006   27-Nov
      2005   25-Nov
      2004   17-Oct
      2003   31-Oct
      2002   5-Nov
      2001   11-Nov
      2000   9-Nov
      1999   23-Nov
      1998   12-Nov
      1997   12-Nov
      1996   5-Nov
      1995   11-Dec
      1994   10-Nov
      1993   24-Nov
      1992   10-Nov
      1991   14-Nov
      1990   19-Nov
      1989   24-Nov
      1988   14-Nov
      1987   13-Nov
      1986   29-Nov
      1985   12-Nov
      1984   8-Nov
      1983   11-Nov
      1982   15-Nov
      1981   12-Nov
      1980   2-Dec

      Average (’80-’11)   5-Nov
      Median  (’80-’11)   12-Nov

  44. Brian G Valentine

    Where are all the HOTTEST YEAR EVER EVER HOT HOT HOTTEST 2015 HOTTEST EVER IN HISTORY EVER! people?

    I’m guessing they will be our Christmas present from NOAA. I can’t wait!

    • 2015 is likely to be the hottest year in the instrumental period according to existing data and methods.

      Pffft.. not very important factoid.. like grape records in england.. a small piece of a larger picture.

      • Brian G Valentine

        yuh, but where are they? Where is the mass apoplectic fit over it? It is like waiting for an explosion

      • Steven Mosher,

        You wrote –

        “Pffft.. not very important factoid.. ”

        As is the futility of believing that by detailed examination of chaotic data, one can divine the future. The practice of arithromancy used by Warmists is no better or worse than reading the Tarot.

        The “instrumental period” you now claim as important, is yet another Warmist attempt to deny, divert and obscure.

        Deny. The Earth was obviously far hotter before the instrumental record. Molten surface, the boiling seas, and all that.

        Divert. Pretend that any evidence contradicting silly Warmist assertions is merely an unimportant “factoid”.

        Obscure. Studiously avoid acknowledging the Warmist lack of physical knowledge, by claiming the surface temperature is being measured. Claim that actual temperatures are meaningless, so anomalies must be calculated and used. Use a lot of made up sciencey words, that not even Warmists can really explain.

        Delusional foolishness, all of it. CO2 does not “assist the Sun to make things hotter”, as I saw recently.

        You might be better off learning to cast runes. I can predict the future better than you, just by casting the runes. If you want a few pointers, please let me know.

        Cheers.

      • “Arithromancy”…I do hope that’s not copyrighted. Because I am gunna take it. Mine now!

      • I had to look it up, very usable!

        Per Harry Potter: Arithmancy is an elective subject offered from the third year on at Hogwarts School of Witchcraft and Wizardry. Little is known about the class, but the study of Arithmancy has been described as “predicting the future using numbers,” with “bit of numerology” as well.

      • But not according to 1999 methods, or satellites, or other proxies.

      • David L. Hagen

        Mosher – “instruments” are also aboard the satellites, recording atmospheric temperatures at various altitudes by microwaves. Those show 1998 as the warmest year in “the instrumental period”!

      • Steven, what about the nonexistent data and what was the method used to ‘dump’ it?

    • Probably Nunavut, Canada… -31°C about 196 minutes ago (20:00 UTC). According to NASA that probably is considered unseasonably warm for this time of year and needs to be cranked into the homogenization machine.

  45. Congratulations to Watts et al for an important addition to the climate jigsaw puzzle. Well done. One does wonder why NOAA etc have not undertaken this properly before, including and especially BEST! Very frustrating.

  46. I visited the nearest reporting station this afternoon just to look.

    I think things have gotten worse since it was last surveyed.

    Big pond of drainage water 12 meters away ( on two sides of the station ).
    Some concrete and gravel pavement within 10 meters.
    Asphalt more than 20 meters away but on all sides.

    • I visited the nearest reporting station this afternoon just to look.

      (Grin. You just did science.) Okay, let’s “look”.

      For Leroy (1999), it’s simple: Concrete within 10m? Class 4.

      For Leroy (2010), if 10+% of the area within 30m is heat sink, then it’s Class 3 at best. Your asphalt and pond sound like they would easily do that alone. As for Class 4, 10+% of the area within 10m must be sink. That’s ~31.4 m^2 (ignore the false precision).

      So, unless you can fine down how much is paved within 10m., all we can say for sure is that it’s a Class 3 or Class 4.

      That’s one of the questions I have about Leroy. The whole area outside of 10m. could be an inferno, but the site would still be a Class 3. I have others.

      I might take a different tack and try to count every sink area and weight it by distance/area (and possibly by type) and get a bottomline number.

      It is possible that approach would be completely invalid. But if not, then you have a nice, easy, unified top-down system; you have “coverage” and you could just drop meso/macrosite right in, or anything else you wanted to. (When the howls about circular logic start rolling in from the boyz, then I’ll know for sure I’m on the right track.)

      Leroy’s system is completely effective for his purposes — initial [sic] siting. But it is too Byzantine for what I want to do (esp. the 2010 version). I am guessing he is not a game designer. Sometimes when I’m rating a station I feel like I’m doing multiplication using Roman numerals. There may be a better way.
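      For concreteness, the sink-fraction thresholds described above can be sketched as a tiny classifier. This is only a sketch of the rules quoted in this thread: the real Leroy (2010) scheme also weighs shading, vegetation, slope, and distance bands, and the function name and its Class 1\2 fallthrough are my own simplification.

```python
import math

# Minimal sketch of the heat-sink rules discussed above (Leroy 2010 style).
# frac_10m / frac_30m: fraction of ground covered by artificial heat sinks
# within 10 m and 30 m of the sensor.
def sink_class(frac_10m, frac_30m):
    if frac_10m > 0.10:    # 10+% sink inside the 10 m circle -> Class 4
        return "4"
    if frac_30m > 0.10:    # 10+% sink inside the 30 m circle -> Class 3 at best
        return "3"
    return "1\\2"          # passes the sink criterion; 1 vs 2 needs more rules

# The "~31.4 m^2" figure is 10% of the area of a 10 m circle:
threshold_area = 0.10 * math.pi * 10**2   # ~31.4 m^2
```

      So a station with concrete over 12% of the 10 m circle rates Class 4 no matter how clean the 30 m circle is, which is the Byzantine-bookkeeping point being made here.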

      I think things have gotten worse since it was last surveyed.

      I know a couple of notable examples, myself. But I looked at as much GE wayback as I could on these, and I was surprised how little the microsite of an unmoved station changed over the years.

      We started out thinking that there was a spurious warming because of continually encroaching microsite. Instead, what we found was that spurious trend amplification (warming or cooling) will occur even if the microsite is unchanging.

      Mesosite no doubt encroaches, but, for whatever reason, when we removed the well sited urban stations from the mix, it didn’t take trends down even a jot. (Yet our urban Class 1\2 sample is too small to be definitive.)

      Of course if direct, heavy urbanization rapidly encroaches (as in some parts of the world), then that would cause a continually increasing offset which would spuriously jump the trend. But I don’t think a few paved sidewalk additions a hundred meters down the road are going to make a dime’s worth of difference. [+/- 1.2421 dimes?]

      • David Springer

        “Instead, what we found was that spurious trend amplification (warming or cooling) will occur even if the microsite is unchanging.”

        That is expected if working off the hypothesis that the cotton region shelters get darker as they age.

  47. What correct physics is telling us is explained here where you are invited to make a submission for a reward of several thousand dollars if you can prove the thermodynamics wrong and produce a study showing opposite results to mine which showed that more moist regions have both lower daily maximum and minimum temperatures than drier regions at similar latitude and altitude.

    Q.1: What is the sensitivity for each 1% of water vapor in the atmosphere?

    Q.2: Based on your answer to Q.1, how much warming does a mean of 1.25% of water vapor produce?

    Q.3: Also based on the above, how much hotter should be a rain forest with 4% WV compared with a dry region with 1% WV?

    Q.4: Taking into account that solar radiation reaching Earth’s surface ranges between zero and about 1,000W/m^2 (with a mean between 160 and 170W/m^2), and that radiation from the colder atmosphere is known not to penetrate water more than a few nanometers (and is thus unable to “warm” it), explain, using the Stefan-Boltzmann equation and a typical range of flux between 0 and 1,000W/m^2, how the ocean surface reaches observed temperatures.

    For answers, study the new 21st century paradigm shift in climate change science which will be widely publicized in 2017 and common knowledge by 2025 whilst the current hiatus continues until about 2028 to 2030. Long-term (500 year) natural cooling will start before 2100 and mean temperatures will not rise more than about 0.4 to 0.6 degree before the cooling starts, as shown here.

    Who’s next to take me on?

  48. Evan Jones 2014/09/11: “We will, of course, be hitting it from the physics angle, as well. So it won’t be a statistics-only study. It will be backed by a mechanism that explains why and how (and to what extent) this occurs.

    Neither the AGU poster nor the press release hint at finding an actual physical mechanism.

    There *is* a known physical mechanism that produces similar results and has already been written up in the scientific literature – Hubbard & Lin (2004), Air Temperature Comparison between the MMTS and the USCRN Temperature Systems.

      • Your “Gossip Girl” writing style screams physics. Note to Willard: This is another one like you and Ken whom “Do Science” apparently behind the “green” door.

        Welcome to my world, guys. I will go on a bit.

        The heat-sink hypothesis is an unphysical one. This was pointed out to Evan Jones over a year ago in discussion at Stoat’s. The press release makes no mention of having found a physical explanation. “Heat-sink” in this context is merely a euphemism for: We haven’t found a physical explanation.

        And there I was, thinking it was a euphemism for, “Gosh, those trends sure average a heck of a lot higher when those houses and cementy things are near the sensor. Wow, look at those Tmin numbers. Well it seems pretty obvious why that is.”

        As Dr. Leroy put it: the quality of observations cannot be ensured only by the use of high-quality instrumentation, but relies at least as much on the proper siting and maintenance of the instruments.

        He refers to “heat sources”, writ large. We refine the observation to distinguish that which generates heat (“heat source”) from that which does not generate heat, but absorbs and re-radiates it (“heat sink”).

        Well, anyway, you don’t seem to think much of the term, that’s obvious. Or we wouldn’t still be going on about it after all this time. Is it possible that what you find bothersome about all this is that the words “heat sink” sit so well on the tongue?

        Dr. Leroy wasn’t looking at the trends when a station is exposed to “heat source” (which, by his definition, includes sources and sinks), but at offset. What we do is use his rating system and then look at the trends of the stations thus rated. In your haste to remind me to stick with the trends, I fear you have strayed into the land of offsets a bit, yourself. Besides, being colder does not mean you are not warming faster, as the Arctic guys like to say.

        Anyone that reflects on what a heat-sink does

        What a heat sink does is reflect.

        and how they’re used

        Well, in greenhouses, they’re used to take the edge off Tmin and bump up Tmax. That’s the offset effect, anyway. You wouldn’t know how that would affect trend during a warming interval until you measure it, of course. You guys remind me of the story of the dude who got tossed out of the Aristotellian tribe for the crime of instigation to commit empiricism.

        quickly realizes this is bass ackwards.

        I recommend realizing a little slower.

        Heat-sinks reduce trends, not exaggerate them. We don’t put heat-sinks around CPUs in our computers because we want them to run hotter.

        You are talking offset. You need to be thinking trend. I could just leave it at that.

        A CPU is a heat source. It is generating its own heat. It is the hottest thing in the room. A CPU is generally located in an enclosed space, and is likely not exposed to get much sun. So the heat sink is taking up energy generated from the computer — a closed and trendless system.

        Placing a heat sink next to a computer when sitting outside on a sunny lawn is not going to cool it down. Both the sink and the computer are receiving radiation from both the sun and the surrounding atmosphere. The heat sink is absorbing more energy from the sun than it is from the CPU, then re-radiating some of it back towards the CPU, recorded only at Tmax and Tmin. Not to mention the general lack of nocturnal/diurnal variation of a room in a building. When is Tmin inside a closed, artificially controlled environment?

        So if anything, the heat sink will be marginally increasing the heat of the CPU at either Tmax or Tmin, which are the only times the temperatures are recorded by USHCN. Not that this is much of a practical issue outside a closed room.

        I find this whole explanation – or lack of one – especially disappointing because Evan assured us this was easily figured out by their co-author physicist.

        I have no doubt that you do. I think I can feel your disappointment radiating off you at Tmin. We never managed to land him, unfortunately. We’ll have to get back to it.

        Please note that I was being starkly open about our process, far more than any other paper I’ve seen. Perhaps too open. But the idea is to operate as much as possible in the open. That’s what we do.

        First he said, “Our physicist co-author thinks this factor is easy to nail and he does know about the Hubbard paper.”

        Well, that work hasn’t been done yet. It will have to wait for followup.

        Later he said, “We will, of course, be hitting it from the physics angle, as well. So it won’t be a statistics-only study. It will be backed by a mechanism that explains why and how (and to what extent) this occurs.”

        The best laid schemes of mice and men gang aft agley. We can (and do) describe the mechanism, but we are going to need someone to add in the formulas. We’ll address this in followup.

        OTOH, there is a known component of the measuring system that *does* exaggerate highs *and* exaggerate lows – the Dale/Vishay 1140 thermistor used in the MMTS stations. This was documented by Hubbard and Lin, Air Temperature Comparison between the MMTS and the USCRN Temperature Systems (2004).

        Groovy. We already add an MMTS adjustment offset. When we publish, I will supply a tool that will allow you to drop in whatever MMTS numbers you like better than ours. Either by formula or by swapping in a new MMTS-adj dataset.
        Let us know when you do. We would find the results interesting.

        But in any event, it won’t be enough of a bump to change things much over what we already did. Maybe 0.01C/decade on the outside.
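        In spirit, the “drop in whatever MMTS numbers you like” idea is just a step adjustment at the changeover date. A minimal sketch (the function, the toy series, and the 0.3 offset are my own invention for illustration, not the authors’ tool or anyone’s published values):

```python
# Apply a chosen MMTS bias offset to the post-conversion part of one
# station's series. 'offset' could come from a pairwise estimate, a
# side-by-side field comparison, or any other source you prefer.
def apply_mmts_offset(series, changeover_index, offset):
    return [t + offset if i >= changeover_index else t
            for i, t in enumerate(series)]

# Toy series: CRS readings, then MMTS from index 2 onward.
raw = [10.0, 10.1, 9.8, 9.9]
adjusted = apply_mmts_offset(raw, 2, 0.3)   # approx [10.0, 10.1, 10.1, 10.2]
```

        Swapping in a different offset dataset then just means calling the same step with different numbers, station by station.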

        And speaking of gluteal direction, all you guys think about is how to horsewhip the MMTSs in line with the CRSs. It never seems to occur to you that it’s the CRS units that are the actual problem in the first place — carrying your own personal heat sink around on your shoulders wherever you go will do that. Especially as the paint fades (net).

        It’s the CRS units that are giving the spurious results. And, as the MMTS units were calibrated to the CRS units, I see little real justification even for adding in the offset jumps. Either that or the calibrators have some ‘splaining to do. But, being a swell guy, I’ll go along. For now.

        It is possible that the offsets should remain — and don’t think I won’t be looking at pairwise to check. But it is glaringly obvious that the CRS trends, esp. Tmax are going to have to be adjusted down. Way down. And that has implications that are going to shake the chain all the way back to 1880.

        I think it’s youse guys, not me that have things reversed.

        Since the Menne MMTS Bias adjustments were based on all stations, regardless of microsite, it’s easy to envisage that Menne’s MMTS adjustment isn’t entirely applicable to a subset of the stations. The Hubbard MMTS Bias adjustment is instrument specific – regardless of location or microsite – since it’s just a description of the physical response curve of the sensor itself. But Menne relies on pairwise homogenization while Hubbard & Lin did a year-long side-by-side field study comparison.

        Just plug in Menne’s data. MMTS adjustment only data is available from NOAA if you care to do that. Or H&L. Besides, a little bigger or little smaller offset isn’t going to matter here. What’s going to matter is the bad CRS bias. You are the ones looking at this backwards.

        While there is nothing wrong with homogenization per se, using the average result from a large group of stations and expecting it to be applicable to all subsets is a leap of faith. It is also unnecessary considering the Hubbard MMTS Bias I adjustment is available. If nothing else, obtaining the same results also using Hubbard would make the results more robust and eliminate the MMTS sensor as a potential physical explanation.

        There is nothing wrong with homogenization per se, if there is no systematic error in the data. Then it is kindly Uncle H. But when a systematic error is introduced into the data series, kindly Uncle H goes postal. This is a known thing.
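        The “goes postal” point can be seen with a toy neighbor-blend (an invented averaging scheme for illustration, not NOAA’s actual pairwise algorithm): pulling each station toward its neighbors repairs isolated breaks, but it also propagates any bias the majority of neighbors share.

```python
# Invented toy: blend a station's trend with its neighbors' mean trend.
def homogenize(target_trend, neighbor_trends, weight=0.5):
    neighbor_mean = sum(neighbor_trends) / len(neighbor_trends)
    return (1 - weight) * target_trend + weight * neighbor_mean

good = 0.10                        # C/decade at a well-sited station
biased = [0.20, 0.22, 0.18, 0.21]  # neighbors sharing a systematic warm bias
result = homogenize(good, biased)  # ~0.15: the good station inherits half the bias
```

        With random, uncorrelated neighbor errors the blend helps; with a shared systematic error it drags the well-sited station toward the biased majority.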

        Yet I see no reason you can’t sub in Hubbard’s data. You could even do it station by station. You will be provided with Excel sheets that enable this process when we publish. But even if the bump in trend is double ours, it’s not going to affect our results much.

      • > This is another one like you and Ken whom “Do Science” apparently behind the “green” door.

        TL;DR.

        I thought it was a curtain, Kriging King. Was it green? Hard to notice when you’re behind it.

        I did not notice I “Do Science” either. That curtain is too opaque.

  49. Nice plot: http://www.climate-change-theory.com/planetcycles.jpg
    When do you expect the next deep freeze that empties 125m of water from the seas to ice in the polar and N/S latitudes per the Ice Core 120,000y cycling pattern? And what will drive things below the LIAs as you have presented?

    http://pages.swcp.com/~jmw-mcw/Curry_120000y_warm_cycles.jpg

    • Dynamical excitation of the tropical Pacific Ocean and ENSO variability by Little Ice Age cooling

      ABSTRACT
      Tropical Pacific Ocean dynamics during the Medieval Climate Anomaly (MCA) and Little Ice Age (LIA) are poorly characterized due to lack of evidence from the eastern equatorial Pacific. We reconstructed sea surface temperature, El Niño–Southern Oscillation (ENSO) activity, and the tropical Pacific zonal gradient for the past millennium from Galápagos ocean sediments. We document a “Mid-Millennium Shift” (MMS) in ocean-atmosphere circulation ~1500-1650 CE, from a state with strong zonal gradient and dampened ENSO to one with weak gradient and amplified ENSO. The MMS coincided with deepest LIA cooling and was likely caused by southward shift of the Intertropical Convergence Zone. Peak MCA (900-1150 CE) was a warm period in the eastern Pacific, contradicting the paradigm of a persistent La Niña pattern. …

      • JCH, a ref for this abstract?
        Also, how do you propose the 120,000y cycle occurs and when is the deep freeze likely to start?

      • Thanks.
        Nothing about the 120,000y cycle? Surely, the last 3 such cycles would indicate that one is imminent. So, you must have the reasons why and a good idea when the global temp will drop.

      • Joel – your graphic mentions the MWP and LIA, which is what the paper I linked is about. When the Eastern Pacific chills, the GMST chills. Well, it used to be that way – until 1985. Then natural variation found the ACO2 knob twisted hard to the right, and it had no answer… man = 100% plus.

        So, JCH, per your implicit pronouncement above, there will be no more deep freezes? Man has found the knob that will allow the earth to stay above the LIA cool level, forever. I doubt it, as I believe cosmic factors are BIGGER than man and his doings, and depositing 125m of sea water up on the poles and land as ice was NOT a trivial matter. But at least I know where you stand regarding the controlling factors – only the Pacific Ocean and CO2 matter, nothing else – with man, per your expressed opinion, 100% in control of all that will happen. Again, I doubt man has that much control of the situation.

        BTW, got a copy of the sciencemag paper you quoted the abstract of?

      • It’s behind a pay wall.

        http://mashable.com/2015/04/09/rapid-global-warming/#QBRUe5stCSqI

        “The results suggest that when a cycle known as the Pacific Decadal Oscillation, or PDO, switches to a ‘positive mode,’ the world will see faster temperature increases than it has since about 1999. The PDO, as it happens, has just switched into strongly positive territory.”

      • JCH: Sounds like you just discovered PDO, which is a +/- 30 year cycle. Last positive Phase was ~1975-2005. In negative phase now. PDO does go positive for a bit during negative phases and negative a bit during positive phases.

        You are getting excited over this year’s El Niño weather event, which could extend for a few years before the PDO goes negative again. When you feel the urge to compete with WUWT, fighting ignorance with ignorance just makes them look not so idiotic.

  50. Y’all are being too hard on Mosher. He is an integral part of BEST. BEST’s sole function was to pre-empt Anthony Watts’s surface stations project and to debunk UHI and station siting in general as even a factor in the mythical “Global Average Temperature”.

    Muller succeeded in pre-empting Watts in the only arena that matters to warmists like him and Mosher – the political/media arena. That is what BEST is all about, no matter what they say.

    That is why Muller reneged on his promise to Watts to keep the data Watts shared with him private. That is why Muller ran to the press as soon as he could with his take on Watts’s data.

    What more do you expect from Muller’s mini-me Steve Mosher? He has been carrying on for years about the sacrosanct purity of the “Global Average Temperature” product produced by BEST. It doesn’t matter which stations you pick, or what you do to the data. BEST always comes up with the same answer as far as trend goes, and is proud of that fact.

    If, however, properly sited stations show a significantly lower warming trend than the remainder, everything Mosher has been proclaiming as holy writ for the last several years is garbage.

    It doesn’t matter that there is no such thing as a “Global Average Temperature”. It doesn’t matter that neither BEST, nor NOAA, nor Anthony Watts for that matter, can come up with a GAT accurate to within a tenth of a degree.

    What matters is the press releases. Muller understood that, which is why he conned Watts’s data out of him. Mosher understands that, which is why he wants Watts’s data NOW NOW NOW.

    BEST and Mosher don’t need to publish Watts’s data, or release it, or anything of the kind. What they need is a competing press release, and they need it now.

    Muller has no chance of getting it. So Mosher is trying to trade on the former good will he used to have with the skeptical community, back when he pretended (more convincingly) to be a lukewarmer.

    If Watts gives Mosher his data, any part of it, Mosher will come out within a week or so with an explanation of why Watts’s conclusions are absolute nonsense. The publicity is all that matters.

    If this were about science, time wouldn’t matter. This is about politics, so time is everything.

    • ==> “BEST’s sole function was to pre-empt Anthony Watts’s surface stations project and to debunk UHI and station siting in general”

      That’s beautiful, Gary. Never let obvious explanations suffice when you can dream up a conspiracy theory.

    • GaryM, love you man! You nailed it.

      Muller got his tit in the wringer with his Berkeley/APS physics buddies when he criticized the Mann hockey stick in his online video. He has been trying to dig his sorry butt out of the ditch since then, and Mosher is his attack dog.

    • Do you need any help writing the screenplay, Gary? The plot is a little far-fetched, but it has possibilities. Jack Warden could play Muller, if he is still alive. Strother Martin looks just like Mosher, but I am pretty sure he has gone to that silver screen in the sky. Anyway, have your people call my people.

      • Here you go, Gary:

        https://www.youtube.com/watch?v=lj60OAh7O5U

        We could use this footage and pay the Strother Martin estate a few hundred bucks. There is some even better stuff with Strother in “Hannie Caulder”. And Raquel Welch is in that one, wearing a short pancho, and nothing else.

      • Brian G Valentine

        Richard Boone as Anthony Watts, but Boone’s gone

      • Don Monfort,

        The thought of Raquel Welch wearing nothing but a short Pancho is titillating. Even more if said Pancho was, in turn, wearing nothing more than his birthday suit!

        Cheers.

      • Boone would be OK, Brian. Not too many people know he’s dead. But I am thinking Frank Sinatra. We could make it a semi-musical. I guess we could use Pat Boone. But live guys cost more money.

        Slow down, mike. We are going for a PG rating.

      • Brian G Valentine

        This whole concept does not give the flavor of the story of Redemption of the troubled and doubting Skeptic, Richard Muller, who, witnessing the downward spiral of possible doubters into the abyss of Denial, came to see the One Truth and became a hero to all of those who live by the faith.

      • This is what Mosher said at the end of his stint as a WUWT hero:

      • The thought of Raquel Welch wearing nothing but a short Pancho is titillating. Even more if said Pancho was, in turn, wearing nothing more than his birthday suit!

        We are more concerned with heat sinks than heat sources.

    • So skeptics are keeping the data secret to stop anyone finding anything wrong with it?
      Hmmmm

      • Just to emphasise, you’ve pretty much admitted that you think withholding scientific data is a good idea to prevent criticisms of it.

        And you don’t even seem to be aware of what you’ve said…

      • Just to emphasise, you’ve pretty much admitted that you think withholding scientific data is a good idea to prevent criticisms of it.

        There’ll be plenty of opportunity to manufacture “criticisms of it” after the paper is published.

      • That would make sense if Gary had voiced a concern that Mosher would run with the data and release a paper of his own saying the SAME thing, thereby stealing the work.

        But Gary wasn’t voicing that concern, he was instead worried that Mosher might find something WRONG with the data that undermined the paper’s conclusion.

        That alone was enough for Gary to support hiding the data.

        I am reminded of an email by Phil Jones:

        “Why should I make the data available to you, when your aim is to try and find something wrong with it?”

        I think it has subsequently been agreed by all, even by Phil Jones himself, that you can’t pick and choose who gets the data just because you don’t think someone else is acting in good faith.

      • But Gary wasn’t voicing that concern, he was instead worried that Mosher might find something WRONG with the data that undermined the paper’s conclusion.

        That wasn’t how I read his comment. My understanding was that he expects Mosher (or BEST, whom Mosher supposedly represents) to issue a press release (if they get the opportunity) using some rationalization based on the data to assert that the whole study didn’t matter.

        Here’s what he said, just to remind you [all bolds mine]:

        What matters is the press releases. Muller understood that, which is why he conned Watts’s data out of him. Mosher understands that, which is why he wants Watts’s data NOW NOW NOW.

        […]

        If Watts gives Mosher his data, any part of it, Mosher will come out within a week or so with an explanation of why Watts’s conclusions are absolute nonsense. The publicity is all that matters.

        […]

        If this were about science, time wouldn’t matter. This is about politics, so time is everything.

      • So skeptics are keeping the data secret to stop anyone finding anything wrong with it?
        Hmmmm

        Hmm. Anyone who knows me would know better than to ask such a question.

        If you do not look at it and find at least one thing wrong with it I shall feel like an absolute wallflower.

      • I think I need to add a further word on the data (non)release. You all know it will be released. When we publish, wild horses couldn’t prevent me from releasing it.

        We are not withholding the data because we think someone is going to find something wrong with it. I fully expect that there will be folks picking around every edge, and I fully expect they will find at least something that is incorrect or can be interpreted otherwise.

        I am looking at methods to include partial unperturbed records, and that will change the results, too. I don’t think by much, but I won’t know till I do it.

        As for “review without data”, that has been what I have been after. And if you haven’t figured it out by now, that means review of method. It also means I know what kinds of questions to expect and will have considered them when peer-review time rolls around. That is what I got from the review. What I paid for it was an explanation of our own methods (which ought to be pretty well known by now).

        It was a good bargain for all sides in this. I paid good coin and received good value.

        When the data is released, we can have a whole new nice argument over it. I look forward to it.

      • evan

        “And if you haven’t figured it out by now, that means review of method.”

        You haven’t revealed your method either.

    • GaryM:

      Spot on! While human motives are often difficult to decipher very accurately, the fact that BEST’s efforts are concentrated on producing press releases, rather than scientific advances, is difficult to miss. The daily dose of Mosherisms only reinforces the conviction that their interest in understanding geophysical variables is entirely incidental.

      • Or maybe we are all just doing our science the best way we know how. Speaking personally, I have enjoyed the ride.

        C’mon, y’all. The politicking isn’t what’s going to endure, anyway. When the dust clears, it’s the work that counts.

        Science is the dog. Politics is the big fluffy tail. No one sees the dog for the tail. But the tail ultimately goes where the dog goes.

      • Maybe Mosher, who’s but an amateur programmer, is doing “science the best way” he knows. Muller’s M.O., however, cannot be explained that way; surely, he must know what he’s producing is numerology, not science!

      • From 2005 going forward, there is no real net trend, and therefore no net divergence between COOP and CRN. This supports our hypothesis. A divergence would challenge it.

        And note how the CRS cool a bit faster during the cooling interval and warm faster during the warming. Note that these are all classes though, and we are primarily concerned with Class 1\2. The divergence is larger for CRM/MMTS Class 1\2 than for Class 3\4\5.

        Our MMTS adjustments currently add only the offsets. I think it incorrect for Menne to adjust the trends using seven years backward and forward. I think it is an attempt to put the MMTS units in line with CRS, when they should be doing it the other way around.

        In any event, we are looking at a Microsite bias of >0.1C/decade and MMTS adjustments are on a much smaller scale. But you’ll be able to drop Menne’s USHCN MMTS-adjusted data for comparison if you like (when we release our data) and compare his adjustment and ours.
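        The offset-versus-trend distinction drawn above can be sketched in a few lines. This is a hedged illustration only: the series, the 0.3 C changeover step, and the repair function are all invented for the example; none of it is the team’s actual adjustment code, nor Menne’s.

        ```python
        # Hypothetical sketch: an instrument changeover introduces a constant
        # offset; an offset-only correction removes the step without touching
        # the underlying trend. All numbers are invented.
        def add_offset_after(series, k, offset):
            """Offset-only correction: shift every value from index k onward."""
            return series[:k] + [v + offset for v in series[k:]]

        raw = [10.0 + 0.01 * i for i in range(30)]    # invented 0.1 C/decade series
        raw = raw[:15] + [v - 0.3 for v in raw[15:]]  # -0.3 C step at year 15 (the swap)

        fixed = add_offset_after(raw, 15, 0.3)        # repair with the offset alone
        print(round(fixed[-1] - fixed[0], 2))         # 0.29: step gone, trend intact
        ```

        By contrast, an adjustment that varies with time (for example, one fitted over a window of years before and after the changeover) can alter the slope itself, which is the objection raised above.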

      • Actually, as a historical modeler and wargame designer and with much hands-on experience in that sort of “numerology”, I find the BEST approach quite intriguing. I also have acquired a (somewhat horrid) fascination with the concept of homogenization that I will never be able to cure myself of.

        They are both dangerous tools. They are fire. But, if used with discretion and proper direction, they have great potential.

        Look at what we are doing. It’s the flip side of what Mosh is doing. The net results of both could well turn out to be similar to ours (or vice-versa) in the end — which is not nigh.

        I am not encouraging VeeV to abandon Uncle H nor am I telling Mosh not to beat his splits into ploughshares. I am trying to figure out how to harness the advantage of both approaches. And insinuate Microsite considerations into them both.

        In terms of the science writ large, by dropping the perturbed stations, we have created a “check sum”. But we have the advantage of doing a mere 30-year stretch out of the data-metadata rich USHCN. So we can afford to drop perturbed stations.

        Mosh and VeeV have to cover the entire 140-year patch and have the entire GHCN to deal with. You cannot know how bad the data/metadata/coverage problems are (outside the USHCN, I have had but a few horrifying glances).

        They cannot possibly afford to blithely drop, as we have done. So to have a shot at redeeming the GHCN, one is going to have to rely on said “numerology”. So rather than discard it, we must work to improve it.

      • evanmjones:

        Being “intrigued” with BEST’s methods is a far cry from any contextual legitimization of that methodology as a scientific proposition. There’s a vital difference between designing fantastic war-games and establishing the realities of geophysical processes.

      • ” I am trying to figure out how to harness the advantage of both approaches.”

        Simple. Our code is on the web.

        Approach 1: You release the data and we redo the station quality paper.

        Approach 2: You take the code for station quality and use your data.

        It’s been there for 3 years!!!!!

  51. “What matters is the press releases. Muller understood that, which is why he conned Watts’ data out of him. Mosher understands that, which is why he wants Watts’ data NOW NOW NOW.”

    Thank you, Gary M. That is it.

  52. What will be done about all of this? Probably nothing. Wood for Trees would have to start all over.

  53. So in layman’s terms the ‘professional climos’ corrupted the good data to make it match the bad data.

    They aren’t supposed to do that.

    Is ‘intellectual integrity’ a phrase any climo would recognise or understand?

    Or is the shysterism of Climategate still rife in this shoddy apology for a ‘science’?

    • That’s it in a nutshell. It is inherent in any homogenization that uses any form of regional expectation: GISS, NCEI, BEST, BOM, and so on. The microsite issues do not necessarily crop up all at once to be detected by some breakpoint; they accumulate over time with population growth and economic development.
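      A toy numerical sketch of that point: a naive breakpoint statistic (the largest jump in mean across any split of a station-minus-neighbor difference series) readily flags an abrupt change, but a gradual microsite drift of the same total size yields a much weaker, split-independent signal. Both series and the statistic itself are invented for illustration; this is not the GISS, NCEI, or BEST code.

      ```python
      # Invented example: breakpoint tests see steps, not slow drift.
      def max_step_stat(series):
          """Largest |mean(after) - mean(before)| over all split points."""
          best = 0.0
          for k in range(2, len(series) - 2):
              before = sum(series[:k]) / k
              after = sum(series[k:]) / (len(series) - k)
              best = max(best, abs(after - before))
          return best

      n = 30  # years of station-minus-neighbor differences
      step = [0.0] * 15 + [0.5] * 15                 # abrupt 0.5 C shift (station move)
      drift = [0.5 * i / (n - 1) for i in range(n)]  # gradual 0.5 C encroachment

      print(max_step_stat(step), round(max_step_stat(drift), 2))  # 0.5 0.26
      ```

      The step is roughly twice as detectable as the drift even though both series end 0.5 C warmer, which is the sense in which slowly accumulating microsite bias can slip past breakpoint-based homogenization.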

      • If I ever have the misfortune to shake hands with a ‘climate scientist’ I will be very careful to count my fingers afterwards.

        And should anyone be foolish enough to invite one into their home I advise that they lock up their daughters, put any spare cash in the bank and dine with very long spoons.

        In any walk of life other than academe, climos would be getting struck off for gross misconduct and/or bringing the ‘profession’ into disrepute.

        It is an irredeemably dishonest trade.

      • Easily testable.
        Homogenization does not change the values of CRN stations:
        in over 90% of the cases, CRN values are unchanged.

      • In business, ten percent of the sales force will typically make ninety percent of the sales. Now what?

    • ‘Homogenize,’ verb (used with object/)

      ‘to form by blending unlike elements.’

      aka …
      acclimatize,
      accommodate
      acculturate …

      • Kinda makes you think of dendroclimatology,
        Yamalising.

      • I thought, no, that can’t be a word:
        ac·cul·tur·ate
        [əˈkəlCHəˌrāt]
        VERB
        1. assimilate or cause to assimilate a different culture, typically the dominant one:
        “those who have acculturated to the US” … became a liberal green spouting for the arrest of denialists.

    • So in layman’s terms the ‘professional climos’ corrupted the good data to make it match the bad data.

      They aren’t supposed to do that.

      I don’t think they meant to.

      • David Springer

        You are too kind.

      • I have been eyeball-deep in the data, both raw and adjusted, and that is my honest opinion from the trenches.

      • Danny Thomas

        Evanmjones,

        Thank you for this above: “In terms of the science writ large, by dropping the perturbed stations, we have created a “check sum”.”

        I’m late to this discussion, but in a nutshell this describes what I perceive as the value of this offering, and I can in no way find an issue with this approach. After all, it’s the ‘trends’ which are important. Having a “check sum” or ‘control’ (unadulterated) seems like it should have value to all sides. It seems to me that those whose results (predictions) lie further away from the ‘check sum’ should ask more questions as to why.

        Mosher himself has stated that the GAT (as it’s currently manifested via numerous means and sources) is no more than a prediction and certainly is not an observation. I cannot see how your method is any worse than the other offerings. Thank you for the effort and the sharing.

      • The problem, Danny, is that they are not unadulterated.

      • Danny Thomas

        Steven,

        Okay. How have they been manipulated? My understanding is this is a subset with a long history and has been selected based on criteria not involving changes. Where did the manipulation occur, how, and how can it be stated if the data set you desire from the authors has not been reviewed by you? The impression I’m working under leads me to believe that changes (external and instrumentation) led to stations being removed leaving the balance of 410 (+/-) ‘pristine’ sites.

      • Remember, Phil Jones said that he is sure that the original old stuff he ‘dumped’ would have looked just about the same to him today as if it were yesterday. Was it a long list? I still can’t find it on the net.

      • Until this ‘dump’ thing is cleared up with the facts…

        https://startthinkingright.wordpress.com/2009/11/30/global-warming-scientists-admit-purging-their-raw-data/

        we have all been wasting electrons. The servers are heating the TOA.

      • “Okay. How have they been manipulated? My understanding is this is a subset with a long history and has been selected based on criteria not involving changes. Where did the manipulation occur, how, and how can it be stated if the data set you desire from the authors has not been reviewed by you? The impression I’m working under leads me to believe that changes (external and instrumentation) led to stations being removed leaving the balance of 410 (+/-) ‘pristine’ sites.”

        1. Your claim is that they are unadulterated. THAT is the claim that requires proof.
        2. Evan did not use the entire Leroy classification system, which would have included “shading”.
        3. The only evidence you have is what the site looks like today, or at the date of the last photo. So a site that was shaded by trees 30 years ago, with the trees chopped down today, will count as “undisturbed” using Evan’s criteria.

        In short, you can’t claim they are unadulterated. Extraordinary claims require extraordinary proof. All that can be proved is that, given a belief in the metadata, and given a belief that some of the Leroy criteria DON’T matter, the stations show no signs in that metadata of being changed. And check your numbers again.

        Further, the first time Anthony and Evan published this, they published maps of the stations. Guess what you can do?

      • Danny Thomas

        Steven,

        Try turning over a new leaf in the new year. My “impression” was that the recorded temperatures were ‘unmanipulated’. As the homogenization process is what gives Mr. Jones so much angst, the presumption follows that the data has not been modified. I addressed the comment to him, and would also presume that if I’m inaccurate a correction will come from him; I invite that.

        As you’ve not been supplied the data you seem to desire, is it not an ‘extraordinary claim’ for you to assume that somehow the data has been manipulated? After reading some 800-odd comments posted, my choice is to take the advice that Mr. Jones suggested, and via this response I’ll just ask him if it was.

        As suggested originally, the GAT is nothing but a prediction, and this approach should be as valid as the approaches of others. I look forward to the full presentation, while expressing appreciation for allowing us (me) to participate in its evolution.
        Happy New Year!

  54. “We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.”

    What hope for Africa, one fifth of the world’s land mass, which has been in such turmoil, wars, etc., over the last 50 years: truly a basket case for temperature data. The few stations that give out any data, which is mostly less than 50% of the time, are based around airports, in cities, or by the road.

    The WMO flags up that Africa needs 9,000 temperature stations.

    And they estimate to tenths of a degree!!

  55. I love how nothing changes about the surface temperature debate. “Scientifically,” land surface temperature and US surface temperature are about as close as they can get, and if there hadn’t been any media hype by NOAA that 2014 was the warmest year EVAH, the stray, more likely negative, 0.05 C impact wouldn’t be meaningful.

    0.05 to 0.10 C is a pretty good ballpark estimate of UHI or suburban heat island effect (land use), which isn’t CO2-related or accounted for completely in the NOAA land temperature product. That tiny amount is about 10% of the total warming to date, which has a value of roughly 500 billion dollars, and could change the order of policy priorities.

    This is the price that is paid for over-hyping and over-confidence. Since the over-hyping has also caused issues in commodities prices, which have been linked to civil unrest and death, using the EPA’s own methods for assigning a value to human life there could be around 200 billion in loss-of-life cost. Human life is considered a bit more precious than money by a few.

    So this little handbag fight is a great illustration of how a “wicked problem” gets wicked.

    • Pretty much a nail on the head statement of the issue, capt.

    • capt,

      You’re in great form on this. The endless surface temperature debate is absurd.

      Richard Lindzen wrote it off as such years ago, and he now refrains from engaging.

      Kudos to Anthony and others who go above and beyond to try to quantify some of the more egregious issues. But unfortunately it’s like tilting at windmills in the CAGW crazed environment we live in.

      • It is a continuing process, and one which has been built on the work of others, most definitely including the NOAA. If they had not made a decision to oversample, we’d never have had enough unperturbed, compliant stations for coverage, much less for statistical significance. If they had not hugely improved their metadata, we would have far less basis to go on.

      • NOAA didn’t make a decision to oversample. Stop making stuff up.

    • Captain

      The US monthly weather review was the journal for the US weather service as it evolved. Here is a sample from January 1895

      “Monthly weather review Jan 1895 edited by Prof Cleveland Abbe
      Jan 1895 (data) based on 2762 responses from stations occupied by regular and voluntary observers classified as follows;
      162 weather bureau stations
      Numerous special river stations (162)
      32 from Army post surgeons received through the surgeon general us army
      2385 from voluntary observers (of the weather bureau?)
      96 through the southern pacific railway co
      29 from life-saving stations
      31 from Canadian stations
      10 from Mexican stations
      7 from Jamaica
      International simultaneous observations are received from a few stations and used, together with trustworthy newspaper extracts and special reports

      Jan 1899 midsummer weather was being experienced in California-midday temperature from 70 to 80 f were observed in the great valley and southern California. At San Francisco a max temp of 78f was registered on the 26th the highest Jan maximum recorded during the past 27 years.

      The Richards self-regulating thermometer and the Draper were accurate to 2 degrees F.”

      tonyb

  56. When you look into the records of well-sited stations, the lack of warming is obvious, as is the effect of adjustments. My study of USHCN stations meeting the CRN#1 standard is here, with supporting Excel workbooks:

    https://rclutz.wordpress.com/2015/04/26/temperature-data-review-project-my-submission/

    • Bravo for taking the look.

    • David L. Hagen

      Ron well done.

      it is clear that adjustments at these stations increased the trend over the last 100 years from flat to +0.68 C/Century. This was achieved by reducing the cooling mid-century and accelerating the warming prior to 1998.

      The warming is thus very strongly “anthropogenic” – due to improper “adjustments”!
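      The comparison described above reduces to fitting a least-squares slope to the raw and the adjusted series. The sketch below uses invented stand-in series (flat raw, +0.68 C/century adjusted), chosen only to mirror the numbers quoted; it is not the actual USHCN data or the workbook code.

      ```python
      # Hypothetical illustration: compare OLS trends of raw vs. adjusted data.
      def ols_slope_per_century(years, temps):
          """Ordinary least-squares slope, scaled to degrees per century."""
          n = len(years)
          mx, my = sum(years) / n, sum(temps) / n
          num = sum((x - mx) * (y - my) for x, y in zip(years, temps))
          den = sum((x - mx) ** 2 for x in years)
          return 100.0 * num / den

      years = list(range(1900, 2000))
      raw = [10.0] * 100                                  # invented flat record
      adj = [10.0 + 0.0068 * (y - 1900) for y in years]   # invented adjusted record

      print(round(ols_slope_per_century(years, raw), 2))  # 0.0
      print(round(ols_slope_per_century(years, adj), 2))  # 0.68
      ```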

      • And some of them may have been proper. TOBS-bias, for one. I do not accept (nor do I reject outright) the adjustments made for that; I will look at them myself.

        But raw data won’t do. Just is.

        Maybe that means we are “just haggling over the price”. But it is a price over which we must haggle. Not even our own data is raw; it’s only as raw as it can be.
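        The TOBS bias conceded above has a simple mechanical cause that can be simulated. The hourly temperatures below are an invented triangular diurnal cycle, not observations: with a max/min thermometer reset at 17:00, the hot day’s warm evening falls into the next 24-hour window, so a mild day following a hot one is recorded several degrees too warm relative to a midnight reset.

        ```python
        # Invented illustration of time-of-observation (TOBS) carryover bias.
        def daily_maxima(hourly, reset_hour):
            """Max for each full 24 h window beginning at reset_hour."""
            return [max(hourly[s:s + 24])
                    for s in range(reset_hour, len(hourly) - 23, 24)]

        # Triangular diurnal cycles peaking at 15:00 (values are made up).
        day_hot = [15 + 20 * max(0.0, 1 - abs(h - 15) / 6) for h in range(24)]  # 35 C peak
        day_mild = [12 + 8 * max(0.0, 1 - abs(h - 15) / 6) for h in range(24)]  # 20 C peak
        hourly = day_hot + day_mild + day_mild

        midnight = daily_maxima(hourly, reset_hour=0)    # [35.0, 20.0, 20.0]
        afternoon = daily_maxima(hourly, reset_hour=17)
        # The window opening at 17:00 on the hot day still holds ~28.3 C,
        # so the following mild day is logged far above its true 20 C max.
        print(midnight, [round(v, 1) for v in afternoon])
        ```

        This is why the sign and size of a TOBS adjustment depend on the station’s documented observation hour, and why it is worth checking independently rather than accepting or rejecting wholesale.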

  57. It strikes me that the siting ratings are dynamic.

    At our local station, parking lots, concrete slabs, gravel covering, drainage ponds have all encroached, some within 10 meters, some within 30 meters, all within 100 meters, over the course of only two decades.

    Even getting the highest rating today doesn’t preclude the trend of degradation.

    • Yep, the general degradation would most likely be a small warming bias that should skew uncertainty to the negative side slightly. I believe digital sensors also have a small warming bias as they age. You don’t really have to fix the problem, just recognize there could be a small problem.

    • It is dynamic, but once urbanization is complete around a site, there is no additional warming trend from urban heat sources; that is, the warming is now baked in. Guess what that means? Another explanation for the plateau in temperatures this century.
      https://rclutz.wordpress.com/2015/06/22/when-is-it-warming-the-real-reason-for-the-pause/

    • David Springer

      Surface stations have NEVER been adequate for the task of detecting global average temperature trend to tenths of a degree per decade.

      You can’t make a silk purse out of a sow’s ear by adjusting the ear. You can put lipstick on a pig but it’s still a pig.

      The ONLY instrumentation we have that is adequate to the task are the globe spanning orbital microwave sounding units.

      Interestingly, the usual suspects used to point to the satellite data as confirmation of the sparse ground data and the adjustments thereto, but as soon as the satellites stopped producing the data needed to confirm that the globe was warming, suddenly the sparse adjusted ground data became the gold standard.

      This is so transparent it’s sickening. Global warming is a product of ideology not science.

      • The satellites have always been the worse measure. It was the satellites that FAILED to detect the late 20th century warming, and the error wasn’t discovered until the early 2000s.

        “The satellites have always been the worse measure. It was the satellites that FAILED to detect the late 20th century warming, and the error wasn’t discovered until the early 2000s.”

        All of the global temperature data sets have issues.
        But the satellite observations do have the benefits of:
        1. having the greatest coverage and
        2. having a check with another sampling means, namely the RAOB data

        Looks like this:
        http://climatewatcher.webs.com/HotSpot.png

        Note the differences of all with the model ( Upper Left ).

        And note the latitudinal similarity with the surface obs ( Middle Left ).

      • “The satellites have always been the worse measure.”

        In your dreams.

        Stop making stuff up.