Watts et al.: Temperature station siting matters

by Judith Curry

30-year temperature trends are shown to be lower when computed from well-sited, high-quality NOAA weather stations that do not require adjustments to the data.

Anthony Watts has presented an important analysis of U.S. surface temperatures, in a presentation co-authored by John Nielsen-Gammon and John Christy.  Here is the link to the AGU press release.  Watts has a more extensive post [here].  Excerpts:

SAN FRANCISCO, CA – A new study about the surface temperature record, presented at the 2015 Fall Meeting of the American Geophysical Union, suggests that 30-year temperature trends for the Continental United States (CONUS) since 1979 are about two thirds as strong as official NOAA temperature trends.

Using NOAA’s U.S. Historical Climatology Network, which comprises 1218 weather stations in the CONUS, the researchers were able to identify a 410-station subset of “unperturbed” stations that have not been moved, had equipment changes, or had changes in time of observation, and thus require no “adjustments” to their temperature record to account for these problems. The study focuses on finding trend differences between well sited and poorly sited weather stations, using a WMO-approved metric that classifies and assesses measurement quality by proximity to artificial heat sources and heat sinks which affect temperature measurement.

The study follows up on a paper published by the authors in 2010, Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends, which concluded:

Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends.

A 410-station subset of U.S. Historical Climatology Network (version 2.5) stations is identified that experienced no changes in time of observation or station moves during the 1979-2008 period. These stations are classified based on proximity to artificial surfaces, buildings, and other such objects with unnatural thermal mass using guidelines established by Leroy (2010). The United States temperature trends estimated from the relatively few stations in the classes with minimal artificial impact are found to be collectively about 2/3 as large as US trends estimated in the classes with greater expected artificial impact. The trend differences are largest for minimum temperatures and are statistically significant even at the regional scale and across different types of instrumentation and degrees of urbanization. The homogeneity adjustments applied by the National Centers for Environmental Information (formerly the National Climatic Data Center) greatly reduce those differences but produce trends that are more consistent with the stations with greater expected artificial impact. Trend differences are not found during the 1999-2008 sub-period of relatively stable temperatures, suggesting that the observed differences are caused by a physical mechanism that is directly or indirectly caused by changing temperatures.
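
To make the comparison concrete, here is a minimal sketch of the class-by-class trend calculation the abstract describes. This is not the authors' code; the file name, column names, and preprocessing are hypothetical stand-ins, and the real analysis involves TOBS screening and MMTS handling that a sketch like this omits.

```python
import numpy as np
import pandas as pd

def decadal_trend(temps: pd.Series) -> float:
    """OLS slope of annual mean temperature vs. year, in degC per decade."""
    years = temps.index.values.astype(float)
    return np.polyfit(years, temps.values, 1)[0] * 10.0

# Hypothetical input: one row per station-year, with a Leroy (2010) class.
df = pd.read_csv("ushcn_annual_tmean.csv")  # station_id, year, tmean, leroy_class

window = df[(df["year"] >= 1979) & (df["year"] <= 2008)]
per_station = window.groupby("station_id").apply(
    lambda g: pd.Series({
        "trend": decadal_trend(g.set_index("year")["tmean"]),
        "well_sited": g["leroy_class"].iloc[0] <= 2,  # Classes 1 and 2 pooled
    })
)

# Compare mean trends for well sited vs. poorly sited stations.
print(per_station.groupby("well_sited")["trend"].mean())
```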

Key findings:

1. Comprehensive and detailed evaluation of station metadata, on-site station photography, satellite and aerial imaging, street level Google Earth imagery, and curator interviews have yielded a well-distributed 410-station subset of the 1218-station USHCN network that is unperturbed by Time of Observation changes, station moves, or rating changes, and that has a complete or mostly complete 30-year dataset. It must be emphasized that the perturbed stations dropped from the USHCN set show significantly lower trends than those retained in the sample, both for well and poorly sited station sets.

2. Bias at the microsite level (the immediate environment of the sensor) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend. Well sited stations show significantly less warming from 1979 – 2008. These differences are significant in Tmean, and most pronounced in the minimum temperature data (Tmin). (Figure 3 and Table 1)

3. Equipment bias (CRS v. MMTS stations) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend when CRS stations are compared with MMTS stations. MMTS stations show significantly less warming than CRS stations from 1979 – 2008. (Table 1) These differences are significant in Tmean (even after upward adjustment for MMTS conversion) and most pronounced in the maximum temperature data (Tmax).

4. The 30-year Tmean temperature trend of unperturbed, well sited stations is significantly lower than the Tmean temperature trend of the NOAA/NCDC official adjusted homogenized surface temperature record for all 1218 USHCN stations.

5. We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.

6. The data suggests that the divergence between well and poorly sited stations is gradual, not a result of spurious step change due to poor metadata.

Lead author Anthony Watts said of the study: “The majority of weather stations used by NOAA to detect climate change temperature signal have been compromised by encroachment of artificial surfaces like concrete, asphalt, and heat sources like air conditioner exhausts. This study demonstrates conclusively that this issue affects temperature trend and that NOAA’s methods are not correcting for this problem, resulting in an inflated temperature trend. It suggests that the trend for U.S. temperature will need to be corrected.” He added: “We also see evidence of this same sort of siting problem around the world at many other official weather stations, suggesting that the same upward bias on trend also manifests itself in the global temperature record”.

The full AGU presentation can be downloaded [here]. 

JC reflections

This looks like a solid study.  The participation of John Nielsen-Gammon in this study is particularly noteworthy; Watts writes:

Dr. John Nielsen-Gammon, the state climatologist of Texas, has done all the statistical significance analysis, and his opinion is reflected in this statement from the introduction.

Dr. Nielsen-Gammon has been our worst critic from the get-go; he’s independently reproduced the station ratings with the help of his students, and created his own series of tests on the data and methods. It is worth noting that this is his statement:

The trend differences are largest for minimum temperatures and are statistically significant even at the regional scale and across different types of instrumentation and degrees of urbanization.

The p-values from Dr. Nielsen-Gammon’s statistical significance analysis are well below 0.05 (the 95% confidence level), and many comparisons are below 0.01 (the 99% confidence level). He’s on-board with the findings after satisfying himself that we indeed have found a ground truth. If anyone doubts his input to this study, you should view his publication record.

This paper has been a long process for Anthony, but it appears to have produced a robust and important analysis.

The extension of this analysis globally is important to build confidence in the land surface temperature records.

It will certainly be interesting to see how the various groups producing global surface temperature analyses respond to the study.

885 responses to “Watts et al.: Temperature station siting matters”

  1. Pingback: Watts et al.: Temperature station siting matters | Enjeux énergies et environnement

  2. They will embrace it like they would a porcupine.

  3. I want to see whether BEST can replicate it.

    • I asked for the data back in July of 2012

      Steve McIntyre commented

      “Steve: I agree that there is little point circulating a paper without replicable data – even though this unfortunately remains a common practice in climate science. It’s not what I would have done. I’ve expressed my view on this to Anthony and am hopeful that this gets sorted out. Making the data set publicly available for statistically oriented analysts seems far more consistent with the crowdsourcing philosophy that Anthony’s successfully employed in getting the surveys done than hoarding the data like Lonnie Thompson or a real_climate_scientist.

      It would have been nice if you’d spoken out on any of the occasions in which I’ve been refused data. You are entitled to criticize Anthony on this point, but it does seem opportunistic if you don’t also criticize Lonnie Thompson or David Karoly etc.”

      • From the beginning of the SurfaceStations project all of the individual station surveys with documenting photographs were publicly available online at the project website (although they’re currently offline because of server issues). Temperature series were available from other sources. Anybody at any time could have analyzed them too while this project was in process. It wasn’t a secret.

      • Gary, telling other people that they can do their own research is no longer considered sufficient. The right way to do it is a turnkey R script (or the equivalent) so that anyone can immediately duplicate one’s results exactly. There is really no conceivable excuse for not doing that. I spent too much time watching McIntyre reverse-engineer Mann’s results – for years! – because whenever McIntyre would produce a result different from Mann’s, Mann would respond that he had probably done it wrong…

      • I asked for the data back in July of 2012

        And it still isn’t available.

        That was kinda my point, Steven. Until they publish their code and data, and somebody who can be trusted to make a good-faith effort to replicate their results has done so, I’m just as skeptical as I am of Mann’s papers.

        But I’m also interested to see what BEST will do with their (Watts et al.’s) list using their own methods, once they’ve replicated their results, so we all know BEST is starting with exactly the same thing they started with.

      • AK
        The problem is a bit deeper since they did use adjustments. And the whole station rating system has never been properly field tested. That is why I pushed for a data paper first. Publish the ratings first so we can assess that interpretation of the metadata.
        But people have taken the Leroy stuff at face value.
        When we asked for backup from Leroy the answer came back that there was no solid objective field test data. There was some small amount of testing done, reported at Lucia years ago.

      • David L. Hagen

        Steven Mosher
        Watts responds:

        Sorry Mosh, no can do until publication. After trusting people with our data prior to publication and being usurped, not once but twice, I’m just not going to make that mistake a third time.

        Take the data access issue up with Richard Muller, who breached his confidentiality agreement with Anthony Watts. Twice burnt, Watts is thrice shy.

      • Michael Aarrh,
        You miss the point. While the study was IN PROGRESS the station reports were available for anyone to examine any way they liked. There was NOTHING ELSE to release until Anthony et al. finished the analysis. That will be forthcoming when official publication is assured. He’s learned from experience that early release only causes harm, not progress.

      • Sorry David that was not the promise. Every promise made was kept.

      • David Springer

        The weasels weaseled out of their promises by rationalizing. Weasels is what weasels does… depends on what the definition of “is” is. Mosher knows the drill.

      • David Springer: “Weasels is what weasels does…

        Are you sure you don’t mean stoats?

      • Until they publish their code and data, and somebody who can be trusted to make a good-faith effort to replicate their results has done so, I’m just as skeptical as I am of Mann’s papers.

        I rootin’-tootin’ agree.

        (Key word: “Until”.)

        You can look at what we measure and discuss our basic methods here and now (QED). But you cannot run the numbers to see if we did our sums right (or check our ratings to see if they were done right) until we release the data itself.

      • Sorry David that was not the promise. Every promise made was kept.

        Mosh, let us both grant each other a little leeway in this. And let us understand each other. For in many ways, we are much alike, you and I. Or so I’d like to think.

        We are both insiders who made it in from the outside; we have both faced the hazing of peer review, and stood it well. Both of us vary between loquacious and two-word terse. We both regard scientific method in a more childlike, sincere way than many of the old, cynical hands; we look at it almost in awe, more seriously than most hardened veterans.

        And we both suffer the attitude directed at British army officers raised from the ranks. We didn’t purchase our commissions like proper gentlemen, so we are not considered proper officers by our peers, or even by those merely serving in the ranks. You could substitute either of our names in the taunts we both so often hear. When I hear them doing it to you, it is as if they are doing it to me. Just switch names, add water, rinse, repeat.

        Yet we are hard and we are proud; we got where we are by dint of merit, direct thought, and much expended elbow grease. We both owe (and have) loyalty to specific others.

        Point here, really, is that Anthony released data in two cases (the first in a disastrous round with NOAA). Both times, those who got it said they abided by what was agreed, and maybe they did, too; I won’t presume to judge that negatively (having neither the data nor the inclination).

        Yet the end result was, regardless of any fault or lack of fault, that we greatly regretted it both times. So we want to wait until publication. We will not take long. From my end, at least, there is nothing personal in this.

        You will get your data, and you will get it in a flexible, malleable format that can be tested down to the last detail and replicated in any way you see fit. We are nearing the end of a long, hard slog. I have spent thousands of hours on this. So have others. There is just a little more time to wait. Please don’t believe you won’t get this material. I fully agree that no one can pass even intermediate judgment until we release the Full and Complete data.

    • I want to see whether BEST can replicate it.

      They will have to alter their method to account for siting bias, but that should not be an insurmountable problem.

      P.S., Mosh, please be just a little patient with us. We are in the final stages of completion and the data will be available sooner rather than later.

    • When we asked for backup from Leroy the answer came back that there was no solid objective field test data. There was some small amount of testing done

      Well, Leroy is only looking at offsets. The only gold speck in there is that he puts both Class 1 and 2 offsets at zero, so we can effectively combine the two.

      But to put it bluntly, Leroy (2010) is a wonderful tool but it is also a bit of a meataxe. It is a bit of a work in progress itself. It’s the best practical tool out there, but I assume there will be improvements. I could suggest a few.

      But it does enable us to demonstrate what happens when there is a nearby heat sink by allowing us to rate and filter.

      More study needed all around.
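
      To illustrate the rate-and-filter idea in code, a toy classifier might look like the sketch below. The distance thresholds are invented placeholders for illustration, not Leroy's published values; only the pooling of Classes 1 and 2 (both zero offset) comes from the discussion above.

      ```python
      def siting_class(dist_to_heat_sink_m: float) -> int:
          """Class 1 (pristine) through 5 (sensor effectively on the heat sink).
          Thresholds are illustrative placeholders, NOT Leroy's published values."""
          if dist_to_heat_sink_m >= 100: return 1
          if dist_to_heat_sink_m >= 30:  return 2
          if dist_to_heat_sink_m >= 10:  return 3
          if dist_to_heat_sink_m >= 1:   return 4
          return 5

      # Leroy puts the Class 1 and Class 2 offsets at zero, so the two can be
      # pooled as "compliant" and everything else filtered out.
      def is_compliant(dist_to_heat_sink_m: float) -> bool:
          return siting_class(dist_to_heat_sink_m) <= 2

      print([is_compliant(d) for d in (150, 25, 5)])  # [True, False, False]
      ```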

  4. The National Weather Service COOP network has been underfunded for many decades and this trend shows no signs of change. Of the 410 “unperturbed” stations it may “perturb” many researchers in 10 to 20 years when there are only about half that number providing data.

    Congress, are you listening? We must invest in our climate observation infrastructure or we will end up using models for observations.

  5. How much impact would this have on the various global temperature estimates? It seems that a large number of land-based temperature measurements are sited in the US, but I don’t know if the weighting adjustments would tend to ‘wash out’ this error from impacting the global estimate.

    • AW et al say : “We also see evidence of this same sort of siting problem around the world at many other official weather stations, suggesting that the same upward bias on trend also manifests itself in the global temperature record”.

      • Did they publish their evidence? A journal will require them to.

      • They will work that out with the journal that publishes the paper, nicky. But you are welcome to engage in pre-publication sniping, if it floats your boat.

      • Nick, not sure that a competent journal or reviewer would disallow a comment that they had observed similar issues at other stations around the world. Especially if they have a few photos to back it up. You think someone has to pre-publish a separate paper to justify every single sentence in a paper they are trying to publish? And this was a comment on a blog. Will they have to publish a paper proving everything they ever said on a blog in order to get a different paper published even if that blog comment is not in the paper? Grow up.

    • Note that “pre-publication sniping” has provided valid criticism that is indispensable to our work. And we are both appreciative and grateful for it. That is why Anthony (most wisely) pre-released in 2012.

      • evan

        In hindsight it would be useful if climate science generally had many more ‘pre-releases’ so constructive criticism could take place. Otherwise material that might be highly contentious is presented as fact.

        Judith’s ‘uncertainty’ monster is a small creature compared to its very big brother, the ‘speculative’ monster.

        tonyb

      • That is NOT why he pre-released in 2012.

      • Mosher is correct. Here’s what Watts says now:

        I admit that the draft paper published in July 2012 was rushed, mainly because I believed that Dr. Richard Muller of BEST was going before congress again the next week using data I provided which he agreed to use only for publications, as a political tool. Fortunately, he didn’t appear on that panel. But, the feedback we got from that effort was invaluable.

      • Learn your words, evan. Sniping is not the same as constructive criticism.

      • My track record in getting data from climate scientists.. Nearly perfect.

        Getting data from skeptics.. Even sub samples… Even with a promise of nda???
        ZERO DATA

      • Your little anecdote on your personal experiences is supposed to tell us what about the general willingness of skeptics to publish their data, Steven?

      • Geoff Sherrington

        Steve osher,
        My problem is not that I cannot get data.
        My problem is that I have heaps of Australian data, but nobody wants it – though I am corresponding finally with WUWT moderators.
        For one example, I selected 44 of the most pristine sites I could find here, did daily data on Tmax and Tmin, established LLS trends in degC per century. Study period was similar to AW’s 1972 to 2006.
        One conclusion is that some sites have intuitively improbable trends, like in excess of 4 deg C/century extrapolated.
        Another conclusion is that the more pristine the site, to use the word broadly, the poorer is the data quality. These screens with thermometers were introduced for standardisation and noise reduction but in hindsight they are pretty useless at those tasks for the present purpose. They need frequent manual babysitting to keep the lipstick on the pigs.
        http://WWW.Geoff stuff.com/pristine_feb_2015.xls
        This is unpublished because it does not significantly add to knowledge. Many researchers already know that this type of data has huge errors.
        Not much point in taking it forward unless you are comparing Australian land records with those of other countries.
        For fun and to illustrate noise, I regressed each temperature trend against each station WMO number. There was an effect.
        So, why not have a look at it and modify your assertion that you are starved for timely data. This study has been around since 2009 IIRC.

      • Geoff Sherrington

        Bloody automatic intervention with my accurate typing.
        That is Steven Mosher not as appears above, nor the Kosher that it changes to.
        Also link is without the capital G inserted.
        http://www.geoffstuff.com/pristine_feb_2015.xls

      • Mosh is only partly correct. The main (and by far the most important) reason we pre-released was the same as BEST: to obtain badly needed hostile review. I don’t think we would be here without it.

      • You are not interpreting Anthony’s words correctly. We had already decided to pre-release. It was only the timing that was rushed; that was not when the decision was made. I was there, so I know.

        I read so much speculation about these things. You can just ask. We made the decision to release to get feedback to brush things up before review. The timing was done as Anthony described. The proof is in the pudding — we went to great efforts to address the criticisms.

        There was more brushing than any of us thought. That accounts for the delay.

      • Geoff, regarding that xls of “pristine sites”, I just looked at one, Willis Island, and it seems to have undergone significant changes over the period in your spreadsheet.
        http://www.austehc.unimelb.edu.au/fam/0616.html
        Note change to AWS just as the temperature takes off.
        http://www.austehc.unimelb.edu.au/fam/0612.html points out that the island had trees in 1947.
        The meteorology buildings have undergone significant expansion across the latter half of last century; note the earlier accounts pointing to a lack of freezers, fresh meat etc. No longer true. Plus the island is part of a P&O liner drive-by for the purposes of duty-free shopping – no details on what this means.

      • Mosher : Getting data from skeptics.. Even sub samples… Even with a promise of nda??? ZERO DATA

        How much data do they actually have to give though? Or to put it another way, how much government funding do they actually have at their disposal?

    • How much impact would this have on the various global temperature estimates?

      Good question. Well, the way I figure it, if the problem is typical throughout the GHCN, then I’d say it would reduce “official” global warming by maybe ~10% to 15% on the low side. And if we are on target regarding the CRS units, which I think likely, it could go up to 15% to 20%. Esp. since a lot of land stations project their radii out to sea.

      So that is the scale we are talking, here.
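
      For what it is worth, here is a back-of-envelope version of that arithmetic. Every input is an assumption for illustration, not a study result, and it ignores that land warms faster than its area share alone would suggest:

      ```python
      # Rough check on the percentages quoted above; all inputs are assumptions.
      land_fraction = 0.29      # approximate share of the globe that is land
      conus_reduction = 1 / 3   # well-sited CONUS trend ~2/3 of the official one

      for scale in (1.0, 1.5):  # 1.0: bias typical of all land; 1.5: worse (CRS etc.)
          cut = land_fraction * conus_reduction * scale
          print(f"global trend reduced by roughly {cut:.0%}")
      # -> roughly 10% and 14%, the same ballpark as the percentages above
      ```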

  6. “It will certainly be interesting to see how the various groups producing global surface temperature analyses respond to the study.”

    The warmists will apologize profusely for decades of fear-mongering, apologize for shameful ad hominem attacks, return fraudulently obtained grant monies, and initiate a movement to overturn the Paris accord.

    NOT!

  7. “The extension of this analysis globally is important to build confidence in the land surface temperature records.”

    It would appear that this should DESTROY confidence in the adjusted/homogenized/algorithmed land temperature record.

    Or, am I missing something?

    • You had confidence in the adjusted/homogenized/algorithmed land temperature record?

      Once they abandoned backward sensor compatibility the whole thing became a joke.

      Besides, the temperature record is stew, not milk. They shouldn’t be homogenizing it.

  8. “events conspire to set a fire with the methods we employ”

    Pointman

  9. Thanks for posting this information. I salute the authors for this analysis.

  10. Can it get more insidious? “We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.”

    • They gave you a hint and wrote “we believe”. They present no evidence for this claim. After homogenization the trends Watts et al. (2015) computed are nearly the same for all five siting categories, just as they were for Watts et al. (2012) and the published study Fall et al. Just like before, they did not study homogenization algorithms and thus cannot draw this conclusion.

      That the trend after homogenization is larger in the USA was known (and is frequently shown as evidence that climatology has an agenda, ignoring that the net adjustment for the global temperature makes the warming smaller). This US increase is due to the time of observation bias and the transition to the MMTS. Watts et al. (2015) replicate this and also find that “raw” series with a “perturbation” show a smaller trend than the series where Watts et al. did not find evidence of a perturbation. Thus the perturbations cause a cooling bias.

      In this light, it is normal that the raw data from the category with the largest trend shows a trend that is nearer to the one of the homogenized data. Also the current version of the manuscript does not check whether there were really no perturbations in the stations put in the category “no perturbation” by comparing the series with the observations at neighboring stations. Thus there are likely still perturbations in this subset.

      http://variable-variability.blogspot.com/2015/12/anthony-watts-agu2015-surface-stations.html

      • Victor Venema,

        I believe the Earth’s surface has cooled from its initial molten state, to that of the present. Just a belief – or an assumption, if you prefer. I have no personal knowledge of the situation four and a half billion years ago. Pontificating about anything else in between demonstrates a reliance on faith, rather than fact.

        Maybe the Earth was created a millisecond ago, as it is. You have no proof to the contrary.

        Your opinions are worth precisely as much as mine. Would you not agree?

        Cheers,

      • Mike Flynn, in the light of your example, I unfortunately have to politely disagree.

      • Victor Venema,

        I see you believe your opinion is worth less than mine. I respect your beliefs, although I cannot understand why you feel that my opinion, based on the same facts, is superior to yours.

        Maybe you suffer from a lack of self esteem. You have my sympathy, if this be the case.

        Cheers.

      • by ‘homogenization’ you mean the processing of raw data to remove some part (hopefully, in large part) of the warming bias that is introduced by the majority of poorly sited stations due to the UHI effect?

      • Actually, Victor, I did a little sub-analysis just on CRN1, just for GISS (because it is easier to access and display than NCEI). What they say about homogenization is generally correct. See my comment below for reference details. The published paper will have the stats my sample was too small to produce.

      • Victor Venema: “After homogenization…”

        You mean Mannipulation, right?

        In the cause of proving Mann-made Catastrophic Anthropogenic Global Warming?

      • After homogenization the trends Watts et al. (2015) computed are nearly the same for all five siting categories, just as they were for Watts et al. (2012)

        Ah, my dear VeeV, indeed it is. Fancy that.

        And the way homogenization did that was to, on average, adjust the trends of the 22% minority of well sited stations upward to match the trends of the 78% majority of poorly sited stations.

        Unperturbed Class 1\2 (1979-2008): 0.204C/decade.
        Class 3\4\5: 0.318C/decade.
        Homogenized Class 1\2: 0.336C/decade.
        Entire USHCN (all 1218), homogenized: 0.324C/decade.

        And that is how homogenization bombs: a systematic data error. This is a known thing — it’s right there on the bottle in fine print, between the disclaimer and the skull-and-crossbones.

        And that’s what has occurred. Sticks out like a fish in a tree.

        You could salvage the mess, VeeV. But you will have to either a.) use Class 1\2s as your homog-baseline or b.) apply a whopping downward adjustment to the non-compliant stations before you homogenize.

        That will remove the systematic error, and you can then proceed. All you are doing now is making a badly needed adjustment — in exactly the wrong direction.

        And the one who taught me most about how that all works is you.
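
        To see how a majority-biased network can produce exactly that pattern, here is a toy simulation of the failure mode. It is a caricature of neighbor-based adjustment, not NOAA's actual pairwise homogenization algorithm, and every number other than the quoted trends is invented:

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        n, n_good = 1000, 220             # ~22% well sited, ~78% poorly sited
        trends = np.full(n, 0.204)        # degC/decade, well-sited trend above
        trends[n_good:] += 0.114          # poor siting adds ~0.114 degC/decade
        trends += rng.normal(0, 0.03, n)  # station-level noise

        # "Homogenize": pull every station toward the network median trend.
        homogenized = 0.3 * trends + 0.7 * np.median(trends)

        print(f"raw well-sited mean trend:         {trends[:n_good].mean():.3f}")
        print(f"homogenized well-sited mean trend: {homogenized[:n_good].mean():.3f}")
        # The well-sited mean is dragged most of the way up toward the
        # majority's ~0.318, mirroring the 0.204 -> 0.336 shift quoted above.
        ```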

      • But Mr. Jones, you showed yourself that the raw data in the USA has a cooling bias. Thus when this bias is removed the trend becomes larger.

        In your “raw” data, the “unperturbed” subset has a trend in the mean temperature of 0.204°C per decade. In the “perturbed” subset the trend is only 0.126°C per decade. That is a whopping difference of 0.2°C over this period. This confirms that in the USA the inhomogeneities (“perturbations”) cause a cooling bias.

        You are not seriously arguing that you showed homogenization to be wrong without studying how homogenization methods work, but only on the basis of two numbers looking similar?

      • David Springer

        With an as yet undetermined appendage Venema writes:

        “they did not study homogenization algorithms”

        The arrogance. It burns.

        Homogenization isn’t rocket science. It’s middle school science fair level work. It requires very little study. The institutional failure in academia that allowed you to somehow make a career out of such a simplistic area of study is the $64,000 question.

      • Why?

        Hmm. Let’s see … Because they are a stereotypical result of homogenization applied to a dataset containing a systematic error? Because there is an identified systematic error evident?

      • Prof. Morel, a French climatologist and former head of LMD (Laboratoire de météorologie dynamique, the Dynamic Meteorology Laboratory), used to say that two thirds of temperature anomalies actually result from data correction.

        Well, we are finding it to be a third. Maybe that will become a half, once I have deconstructed the CRS mess.

        Our CONUS Class 1\2 stations with most of the interval on MMTS equipment show 0.163C/decade. And if they had been MMTS for the whole stretch, it’d almost certainly be even lower than that.

      • David Springer

        I’m beginning to wonder if Venema has a problem reading and writing in English. He’s patently ignoring these findings:

        Unperturbed Class 1\2 (1979-2008): 0.204C/decade.
        Class 3\4\5: 0.318C/decade.
        Homogenized Class 1\2: 0.336C/decade.
        Entire USHCN (all 1218), homogenized: 0.324C/decade.

        This is clear, indisputable evidence that class 1 & 2 stations which have been “homogenized” to “correct” for perturbations show a greatly increased trend (>50% trend increase) compared to unperturbed class 1 & 2 (well-sited) stations needing no adjustment except MMTS correction.

        This is a smoking gun. Venema, Mosher, et al. are exposed as charlatans. There is no way they could have competently worked so long and hard on surface station temperature series adjustments without having noticed that unperturbed stations showed a greatly reduced warming trend in the US.

      • There is no way they could have competently worked so long and hard on surface station temperature series adjustments without having noticed that unperturbed stations showed a greatly reduced warming trend in the US.

        As they were not considering microsite and had no easy way of determining it anyway, it was a very easy and natural mistake to make. I make that sort of mistake all the time. Mistakes are allowed.

      • Victor,

        Your observations are thoughtful and likely correct. My question to you is whether you think the basic finding is incorrect. The common purpose for everyone here is to reconcile land records with satellite records for the purpose of looking back to the pre-satellite era. (I assume that in the future the land record will be deprecated in favor of the satellite record.)

        Regards,

        Will Kernkamp

      • Will Kernkamp, the trend differences could be interesting. It depends on the reason, and what the reason is will be very hard to determine given that we only know the siting at the end of the period, while we would need to know it throughout.

        There are a decent number of studies on cooling bias due to the time of observation bias and the influence of the transition to MMTS. And on the warming bias due to urbanization. For most other non-climatic changes we do not have many studies. For example, for the likely cooling bias due to relocations.

        http://variable-variability.blogspot.com/2015/01/temperature-bias-from-village-heat.html

        The changes in observational practices can be studied by making side-by-side measurements, also called parallel measurements. A group in the International Surface Temperature Initiative is gathering such parallel measurements. If we are able to find and get access to enough datasets, this would give a quite direct estimate of the size of the various biases. If anyone here knows of such datasets please contact me.

        http://www.surfacetemperatures.org/databank/parallel_measurements

        As far as I know the long-term trend of the satellite estimates fits the station estimates over the USA. The differences are in the tropics. The satellites and some radiosonde datasets do not see the tropical hotspot. Given the various lines of evidence, I feel it is more likely that we will discover additional problems with the satellite estimates in the tropics. See a discussion I had earlier this month on this same topic:

        Given that the difference is mainly due to the missing tropical hotspot in the satellite temperature trend, it seems more likely than not that there is some problem with the satellite trends.

        The tropical hotspot 1) is seen in some radiosonde datasets, 2) it is seen in radiosonde winds, 3) it is expected from basic physics (we know that the moist adiabatic temperature profile should be a good approximation in the tropics due to a lot of convection), 4) you see the strong response of the troposphere compared to the surface at shorter time scales and 5) it is seen in climate models.

        But we will only know this with confidence when we find the reason for the problem with the satellite trends or when we find problems with all of the other 5 pieces of evidence against it.

        For the following discussion see:
        https://judithcurry.com/2015/11/28/week-in-review-science-and-technology-edition-3/#comment-747013

      • The tropical hotspot 1) is seen in some radiosonde datasets
        The IUK analysis does indicate somewhat of a hot spot, but that process uses ‘kriging’ over the huge spaces devoid of RAOB stations, which may not be valid, because kriging assumes a homogeneous distribution:

        The majority of analyses, both RAOB and MSU, not only show no Hot Spot, but indicate less warming with height, not more:

        3) it is expected from basic physics (we know that the moist adiabatic temperature profile should be a good approximation in the tropics due to a lot of convection)

        No. The HotSpot is modeled, but there’s no physical law being violated if the model fails. In fact, much of the Eastern Pacific has cooled over the MSU era, meaning, if the same amount of convective exchange occurs, physically, one would expect less warming aloft because less warming is occurring at the sea surface:

        But we will only know this with confidence when we find the reason for the problem with the satellite trends or when we find problems with all of the other 5 pieces of evidence against it.

        MSU and RAOB corroborate the models in these ways:
        1.) Stratospheric cooling 2.) Arctic maxima 3.) LT land-only trend which matches GISTEMP land-only trend 4.) co-located sonde correlation.

        But MSU and RAOB both falsify the Hot Spot, at least for the MSU era, and there’s no compelling reason to believe that the measurements are correct in all other regions but not in the region of the Hot Spot.

        Does it matter? Maybe not. There is still warming. But that warming may be more than we would expect if the Hot Spot was occurring because the Hot Spot is what provides the negative Lapse Rate feedback. So if the Hot Spot appeared, perhaps surface warming would decrease.

        It’s possible that the Pacific cooling is part of some long term fluctuation that reverses and the Hot Spot does occur – only time will tell, and even then, may tell in abstract messiness.

      • Eddy, you may be interested in two abstracts presented at AGU.

        Trends in atmospheric temperature and winds since 1959
        Steven C Sherwood, Nidhi Nishant and Paul O’Gorman

        Sherwood and colleagues have generated a new radiosonde dataset, removing artificial instrumental changes as well as they could. They find that the tropical hotspot does exist, and that the models’ predictions of this hotspot in the tropical tropospheric trends thus fit. They find that the recent tropospheric trend is not smaller than before.

        Extract from the abstract: We present an updated version of the radiosonde dataset homogenized by Iterative Universal Kriging (IUKv2), now extended through February 2013, following the method used in the original version (Sherwood et al 2008 Robust tropospheric warming revealed by iteratively homogenized radiosonde data J. Clim. 21 5336–52). …

        Temperature trends in the updated data show three noteworthy features. First, tropical warming is equally strong over both the 1959–2012 and 1979–2012 periods, increasing smoothly and almost moist-adiabatically from the surface (where it is roughly 0.14 K/decade) to 300 hPa (where it is about 0.25 K/decade over both periods), a pattern very close to that in climate model predictions. This contradicts suggestions that atmospheric warming has slowed in recent decades or that it has not kept up with that at the surface.

        Wind trends over the period 1979–2012 confirm a strengthening, lifting and poleward shift of both subtropical westerly jets; the Northern one shows more displacement and the southern more intensification, but these details appear sensitive to the time period analysed. Winds over the Southern Ocean have intensified with a downward extension from the stratosphere to troposphere visible from austral summer through autumn. There is also a trend toward more easterly winds in the middle and upper troposphere of the deep tropics, which may be associated with tropical expansion.

        Uncertainty in Long-Term Atmospheric Data Records from MSU and AMSU
        In session: Methodologies and Resulting Uncertainties in Long-Term Records of Ozone and Other Atmospheric Essential Climate Variables Constructed from Multiple Data Sources
        Carl Mears

        This talk presents an uncertainty analysis of known errors in tropospheric satellite temperature changes and an ensemble of possible estimates that makes computing uncertainties for a specific application easier.

        The temperature of the Earth’s atmosphere has been continuously observed by satellite-borne microwave sounders since late 1978. These measurements, made by the Microwave Sounding Units (MSUs) and the Advanced Microwave Sounding Units (AMSUs), yield one of the longest truly global records of Earth’s climate. To be useful for climate studies, measurements made by different satellites and satellite systems need to be merged into a single long-term dataset. Before and during the merging process, a number of adjustments are made to the satellite measurements. These adjustments are intended to account for issues such as calibration drifts or changes in local measurement time. Because the adjustments are made with imperfect knowledge, they are not likely to reduce errors to zero, and thus introduce uncertainty into the resulting long-term data record. In this presentation, we will discuss a Monte-Carlo-based approach to calculating and describing the effects of these uncertainty sources on the final merged dataset. The result of our uncertainty analysis is an ensemble of possible datasets, with the applied adjustments varied within reasonable bounds, and other error sources such as sampling noise taken into account. The ensemble approach makes it easy for the user community to assess the effects of uncertainty on their work by simply repeating their analysis for each ensemble member.
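
        As a rough illustration of the Monte-Carlo idea in this abstract (not Mears's actual code; every numeric value is invented), one can perturb the merge adjustments within assumed bounds and re-derive the trend each time, yielding an ensemble of possible datasets:

        ```python
        import numpy as np

        rng = np.random.default_rng(42)
        years = np.arange(1979, 2016)
        base = 0.012 * (years - years[0])  # a nominal 0.12 K/decade series

        def merged_series(offset_err, drift_err):
            """Re-merge the record with perturbed adjustment parameters."""
            return base + offset_err + drift_err * (years - years[0]) / 10.0

        ensemble = [
            np.polyfit(years, merged_series(rng.normal(0, 0.05),    # offsets
                                            rng.normal(0, 0.005)),  # drift
                       1)[0] * 10.0
            for _ in range(1000)
        ]
        print(f"trend: {np.mean(ensemble):.3f} +/- {2 * np.std(ensemble):.3f} K/decade")
        ```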

      • Victor Venema: “Sherwood and colleagues have generated a new radiosonde dataset, removing artificial instrumental changes as well as they could.”

        Did they Mannipulate the data using AlGore-ithms running on computer game climate models, Victor?

        One day you and your colleagues will be held to account.

        Think on that.

      • @Victor Venema | December 23, 2015 at 5:53 pm |

        Steven C Sherwood, Nidhi Nishant and Paul O’Gorman

        Did they make data and code available? If so, where is it?

        Thanks.

      • So and so “have generated a new radiosonde dataset, removing artificial instrumental changes as well as they could. They find that the tropical hotspot does exist, that the models predictions of this tropic hotspot in the tropical tropospheric trends thus fit.”

        Climate science at its best. When all else fails, gather a few nondescript Team members ’round the ole caldron and conjure up a new dataset. Anything goes, when the planet needs saving. We have to wonder what took them so long to think of juggling the radiosonde data. How many climate scientists would we need, if not for the CAGW story? I will help you: about 9.

      • Losing to some really nice, exceptionally smart people is going to be hard for you.

      • JCH, “Losing to some really nice, exceptionally smart people is going to be hard for you.”

        Losing what? The trends in the tropics from 1959 and from 1979 to 2012 are about the same, 0.14 C/dec, which is lower than modeled. The shift in the northern tropical temperature is larger than the southern, and the rate of cooling in the tropical stratosphere has slowed “possibly” due to the “beginning” of stratospheric ozone recovery. Also the altitude of the “hot spot” is lower than modeled.

        They have a trend that’s weaker, lower and more NH-shifted than modeled. We have greater-than-modeled land amplification in the 30-60 NH band that happens to correlate with a peaking AMO. We also have “possible” ozone recovery during a weaker solar cycle which happens to have greater than expected UV.

        Unless I am missing something I am not very impressed with the word salad.

      • You talking to me, putz? A win for me would be effective mitigation if it is necessary, and none if it ain’t.

        What really concerns me is that the climate alarmists may be right, but they are too freaking weak, incompetent and dishonest to convince about 7 billion people that global warming is a big problem. And their little anonymous blog troll minions aren’t helping.

      • JCH, btw, if that unexpected 30-60 N warming is smeared, I mean kriged, into the tropics, that could be a bit problematic for fans of the kriging method.

      • Don,

        What’s the difference between a putz and a schlonge?

      • How many climate scientists would we need, if not for the CAGW story? I will help you: about 9.

        hmm. Looks like you’re implying it’s all made up and a hoax. They are trying to pull the wool over our eyes. It’s not honest disagreement about science at all. Are you saying that, Don?

      • Joseph, “Looks like you’re implying it’s all made up and a hoax. They are trying to pull the wool over our eyes.”

        Some really great hoaxes and conspiracy theories start with undeniable truths which are wonderfully embellished and exaggerated.

      • Joseph,

        You wrote –

        “hmm. Looks like you’re implying it’s all made up and a hoax. They are trying to pull the wool over our eyes. It’s not honest disagreement about science at all. Are you saying that, Don?”

        I don’t know what Don is saying, apart from what he writes, but you’re employing the Warmist deny, divert, and obscure ploy.

        Warmists suffer from mass delusional psychosis. They cannot distinguish fact from fantasy, obviously. Whether this makes them dishonest, or merely stupid, deluded, or both, is a matter of definition.

        Warmists deny normal science, and try to create their own fantasy version, with an invented language to suit. I suppose you are silly enough to agree that after cooling for some four and a half billion years, the Earth started to warm up, at the behest of the Warmist cultists. It matters not, really. Fact is fact. Fantasy is fantasy. No amount of measurebation is going to create a non existent greenhouse effect. Try as hard as you like. I don’t think it will make you go blind, although it may reinforce your blindness to reality.

        Cheers.

      • Victor,

        Eddy, you may be interested in two abstracts presented at AGU.

        Yes, you obviously didn’t look at the data I plotted – the IUK is the middle column, top row.

        The error with IUK (Yuk?) is in the name – kriging.
        Kriging assumes a homogeneous distribution.
        If you have large unsampled areas, which the RAOB data does,
        and the peripheral observations are high, the data will be skewed high.

        Other analyses don’t assume that stations reflect what’s going on many thousands of kilometers away.

        If you believe that the upper air should reflect the surface, then you would want to reject such assumptions also, because the surface is most certainly not homogeneous at these spacings and the Eastern Pacific waters indicate cooling, not warming.

      • I didn’t say or imply that it’s a hoax, yoey. If I thought it a hoax, I wouldn’t have said I am concerned about it being real and the climate scientist chumps not being competent or credible enough to convince the folks of the seriousness of the alleged situation. You don’t help, yoey. Dishonest know-nothings who blindly support the cause do more harm than good.

        Mark: Pretty much interchangeable, I think. Mr. Trump knows the nuances and when to use one instead of the other. Didn’t you learn this stuff in school, Mark?

      • David Springer

        Mistakes of this magnitude, supported by a “consensus” of scientists, for 25 years running, are definitely NOT allowed.

  11. NASA doesn’t need no stinkin’ adjustments. Doesn’t even need thermometers!

    From the NASA site –

    “Q. If SATs cannot be measured, how are SAT maps created ?
    A. This can only be done with the help of computer models, the same models that are used to create the daily weather forecasts.”

    Surface Air Temperatures can’t be measured? No problem – create some with a model. BEST can no doubt assist. Steven Mosher can explain why algorithms and endless analysis are preferable to recorded temperatures.

    Nobody bothers to measure the actual surface temperature. The surface is generally buried under something, up to and including 10 kms of sea water. Some bits are 20 odd km closer to the Sun than others, with 9 km less atmosphere to get in the way.

    Completely pointless. What a waste of time and money! When you’re hot, you’re hot, and when you’re not, you’re not. At the very least, Anthony’s efforts show the silliness of believing that official temperatures are useful for anything serious.

    Cheers.

    • Brian G Valentine

      Hansen used to make life easier by simply assuming that lines of constant latitude are isotherms

    • Mike Flynn: “NASA doesn’t need no stinkin’ adjustments. Doesn’t even need thermometers!”

      Astonishingly, climate “scientists” admit it.

      “The data doesn’t matter. We’re not basing our recommendations on the data. We’re basing them on the climate models.”

      ~ Prof. Chris Folland ~ (Hadley Centre for Climate Prediction and Research)

      Now come on Mike!

      If you were a climate “scientist” and your living depended on it which would you believe – the reading on a $10 thermometer or the output of a $100,000,000 computer game climate model?

      Come on now, be honest!

    • Actually, the idea is to improve their usefulness. This is the start of a longer process.

  12. This looks like a very important work by Anthony et al. I find this quote particularly interesting:

    “We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.”

    It makes me think back to Karl et al. Hmmm.

    • Sciguy (and others):
      The individual surface station documentation (photos, ratings) has been available online for years, also in tabular form in a large Excel spreadsheet I separately archived. It is not online at present because there have been many attempts at hacking/tampering.
      On 3 Aug 2015 I posted a little analysis of the CRN1 stations at WUWT. The title is roughly How Good is GISS? The WUWT search tool takes you to it from that snippet. Enough to see patterns, not enough for statistical validity. I did not include CRN2 to run stats because my Koch check never came.

      What the analysis showed is that GISS homogenization appears to do a reasonable job of removing urban CRN1 UHI. But it contaminates almost all the CRN1 suburban and rural stations, changing pristine ‘no trend’ to homogenized ‘increasing trend’ in all but one case. For whatever reason, Apalachicola Fl escaped unscathed. This issue is logically inherent in the published homogenization methodology, and makes GISS unfit for purpose unless only the high quality stations (CRN1 and 2) are used for homogenization. And NASA does not; it obviously uses the whole sorry lot.

      • David L. Hagen

        AKA Noble Cause Corruption – whether conscious or not.

      • Rud

        Thanks for the pointer to your August post at WUWT. I believe I was out of pocket that week so I had missed it.

        You noted:
        “One could either cool the present to remove UHI or warm the past (inserting artificial UHI for trend comparison purposes). Warming the past is less discordant with the reported present (the UHI correction less noticeable), so preferred by GISS.”

        Whenever I get clever and do something backwards for the sake of convenience, it comes back to haunt me. Maybe I am unlucky or just not so smart as Karl or the climate crew at Goddard. In any case, warning flags start waving in my mind when something is done backwards.

        In this case I am not sure why correcting for UHI is “discordant”. I see temperatures reported with “wind chill” all the time, resulting in huge deltas from the actual measured temperature. The public seems to understand the premise of “wind chill” and are comfortable with such reportage, so I would be surprised if folks had difficulty with temperatures reported “as measured” and also “as corrected” for siting issues.

      • Geoff Sherrington

        Rud,
        When you claim that GISS homogenisation does a reasonable job in removing certain UHI, you must have data or understanding that I cannot get.
        You have to know the “true” temperature at a site to judge if corrections do a good job reconstructing it.
        If you already have the true value, why bother to homogenise?
        In concept, this is logically similar to the synthetic attribution of climate changes to natural or man made. Sorry, cannot be done yet.
        Now, about the brainwashing that made nuclear electricity disliked by you – the good data do not support your dislike. But not here …..

      • Geoff, the data was for four large urban areas with CRN1 stations in my WUWT post.
        As for nuclear, you misunderstand. I am very much in favor. But I think building as little gen 3 nuclear as possible, and investing a lot to really sort out and improve better gen 4 options (passive safety, refueling, radwaste), is a wiser course, since there is no CAGW crisis to be resolved.

  13. I predict that NOAA will put out a paper called “Artifacts in the…” in the next month or so that will purport to “disprove” the Watts et al study. Of course that assumes that the Climate Nomenklatura will be unsuccessful in quashing the paper before it can get published in a journal.

  14. It would be interesting to see how the global trend changes if temperature series that use other stations for corrections used only the 410 BEST ones.

    • they had 410 that they believe were un perturbed… not best…. un perturbed… read it again

      That means… there is no record of being changed or moved.
      That is different than actually being unperturbed.

      • Mosher, the description above of the 410 stations states they have not changed site status – meaning their siting hasn’t deteriorated. So, using the anomaly method would still yield good data.

      • David Springer

        If unperturbed stations may actually be perturbed due to inaccurate records then perturbed stations may be unperturbed due to inaccurate records.

        All bets are off when records cannot be trusted. Thanks for bringing up the #1 reason why skeptics don’t trust the so-called “consensus”. With friends like you warmunists don’t need enemies.

      • “unperturbed stations may actually be perturbed”

        Like when she says ‘no’ she really means ‘yes’.

        Andrew

      • Steven Mosher: “they had 410 that they believe were un perturbed… not best…. un perturbed… read it again”

        Wriggle…wriggle…wriggle…

        Somebody else playing in your sandpit, Mosher?

        Worried, are you?

        Perhaps you should be.

      • “If unperturbed stations may actually be perturbed due to inaccurate records then perturbed stations may be unperturbed due to inaccurate records.”

        The Victor link below has an interesting idea. Since only well-positioned stations are used, the set will include stations that used to be in bad locations, were moved, and for which there is no record of the move in the metadata. This would cause a spurious cooling trend.

        Watts also states curators were interviewed, and that presumably diminishes this possibility. The link didn’t consider the interviews, for some reason.

      • Since only well positioned stations are used, that will include those stations that used to be in bad locations, were moved, and for which there is no record of the move in the meta-data.

        That is not impossible. But USHCN metadata is the best there is, and it has vastly better historical notation than it did when we started out on this. Much more and better info covering both before and after. Microsite rating is fairly constant. There were only a handful of localized moves that changed the rating.

        For the metadata-poor GHCN, as a whole, the problem is far worse than for the seemly, sleek USHCN. And even in the US, the metadata gets spottier going back before the satellite era.

        “Much more and better info covering both before and after. Microsite rating is fairly constant. There were only a handful of localized moves that changed the rating.”

        So, how did you assess the shading for a site in 1979?
        And how did you assess the shading at the “current time”?

        And how did you assess that the metadata was “better”?

        Example: the records indicate a TOBS change. Do you trust it?

        One reason we check both the data series for breaks and the metadata for changes is that neither record can be assumed to be pristine. And because sometimes TOBS changes require no adjustment.

      • “If unperturbed stations may actually be perturbed due to inaccurate records then perturbed stations may be unperturbed due to inaccurate records.

        All bets are off when records cannot be trusted.”

        ####################

        Wrong. All bets are not off.
        With historical data it is always possible that reports and records are wrong. But you have tests for some of these.

        Example: metadata says the site moved from 0 meters ASL to 1000
        meters ASL.

        Data shows no cooling.

        So you have a choice:
        A) the laws of physics have been broken
        B) the metadata is wrong.

        Example: 10 sites all located within a few km of each other.
        In one month, 9 stations show a TOB change in the metadata.
        They all show a jump of 0.25 °C.
        The 10th station also shows a jump of 0.25 °C,
        BUT its metadata shows no TOB change.

        Again you have a choice.
        Each choice is a bet. Not all bets are off.
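
        A minimal sketch of what such a bet can look like in practice, with everything invented (the noise levels, the 0.25 °C jump, and the crude break test itself are all assumptions, not anyone’s actual method): flag a step in a station’s difference series against its neighbors, then check the flag date against the metadata.

        import numpy as np

        rng = np.random.default_rng(0)
        months = 360  # 30 years of monthly anomalies

        # Synthetic neighbor composite and target station; the target gets an
        # undocumented +0.25 C step halfway through, mimicking a TOB change.
        neighbors = rng.normal(0.0, 0.3, months)
        target = neighbors + rng.normal(0.0, 0.1, months)
        target[months // 2:] += 0.25

        # Differencing cancels the shared climate signal, leaving
        # station-specific breaks behind.
        diff = target - neighbors

        def step_size(series, k):
            # Mean after candidate month k minus mean before it.
            return series[k:].mean() - series[:k].mean()

        sizes = [abs(step_size(diff, k)) for k in range(24, months - 24)]
        k_best = int(np.argmax(sizes)) + 24
        print(f"break near month {k_best}, size {step_size(diff, k_best):+.2f} C")
        # If the metadata shows no TOB change near that date, you bet that the
        # metadata is wrong, not the laws of physics.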

      • I’m just interested in the surprisingly good match between BEST and UT1 back into history. Looks like you may have nailed the early days better than the others. :-) (Only if this is a valid treatment of course).

        https://wordpress.com/post/climatedatablog.wordpress.com/378

      • David Springer

        If a TOBS change sometimes requires no adjustment, then the theory behind TOBS having a warming effect is bullschit.

        That’s probably part of the reason why stations without perturbations show a drastically different trend. Keep talking Mosher. You dig your hole deeper with every word.

      • And how did you assess that the metadata was “better”?

        By the huge amount and detail added between the time when we started looking at this and now. Someone at NCDC made a good hire.

      • Mosh is correct. The term “unperturbed” means a station with (as far as we can determine) clean metadata. It does not mean clean microsite.

        “Compliant” means Class 1\2 (Leroy puts the microsite offset effect at zero for both).
        “Non-compliant” means Class 3\4\5.

        If we know the location of a station after a recorded move (HOMR seems quite good with this, and so they should be), but do not know what the microsite was prior to the move (a very large number), we drop the station. That removes most of the Before-after issue (though I am sure it is not 100%).

        If the HOMR metadata indicates a TOBS flip (AM to PM or PM to AM 10% of the way within the series interval), we drop the station. If there is a blip in the middle, but it goes back to what it was, and it is not badly skewed, we retain the station, because such a blip will not materially affect trend. (Note that a centered blip in a longer time series may not be centered in a shorter series, and we’d have to drop it. Etc. It’s all relative.)

        HOMR is very good on TOBS. After all, all they have to do is transcribe it from the B-91s (which are archived as PDFs online).

        J N-G is looking for major discrepancies in TOBS-adjusted vs. raw data for some stations, and we may prune our set slightly. So far we’ve lost a couple, but no Class 1\2s, so no material effect. Some we may include but flag (so’s you-all can remove them if you like).

        That is our basic method.

        The advantage of NOAA is that it is organizationally stable. No regime change, you know. Inter alia. So, at least during our study period, we have good, consistent records, among the best, if not the best that the world has to offer.

        ————————————————————–

        Anyhoo. We got the sweet spot in terms of distribution, data, and metadata. The further back you go, of course, the worse it gets.

        Poor Mosh! What a tangle he has to deal with that I do not. Not only does he have the older USHCN’s ubiquitous “-9999s” to deal with, and all those “Quien Sabe” notations in the metadata boxes, but he has the whore RoW’s problems on his shoulders.

        He does it the way he does it because there is no other way to do it. We can afford to (and do) drop our known perturbed stations. Mosh (and the VeeV) cannot. They cannot. The RoW distribution sucks, so they can’t afford to drop the perturbed stations. Just can’t.

        So he must adjust them. And since metadata is severely lacking, he is compelled to infer that from the data. It’s the tail wagging the dog, but he has no other option.

        And, besides, that is what I am doing, in effect, inre. homogenization, anyway — inferring from our findings.

        I also infer, in much the same manner, that the HOMR metadata is relatively clean: the data (unperturbed, compliant v. non-compliant) shows a relatively gradual divergence, not the series of discordant jumps which would occur if our results were an artifact of bad or missing metadata. So in addition to the HOMR USHCN metadata looking good, it acts good, too, when we crack the whip a little.

        All that is inference; very good inference, I think. And, given the circumstances, unavoidable. Now maybe the body on the floor with a knife sticking out of his back is actually a clever suicide and not a murder. Or maybe he was cleverly poisoned and then stabbed to cover up the needle hole. Until the forensic team (VeeV, Mosh, Zeke, et al.) gives it a much hairier eyeball, we cannot know for sure. But for whatever reason, there it is, dead on the floor. I think it’s horses, not zebras this time.

        I am not against inference when it cannot be avoided. A missing datapoint is a missing datapoint. You might say that one of the goals of our project is to improve current methods of inference.

      • Okay, a Freudian slip there. But my subconscious meant it.

    • 92 out of the 410 are well sited. The 318 remaining are poorly sited.

      • But if I’m reading correctly, the siting hasn’t changed during the period of interest. Therefore, the anomaly method should yield good data.

      • That’s what NOAA thinks. It isn’t so. Of the 410 unperturbed stations, the trends of the 318 poorly sited stations averaged over 50% higher than those of the well sited. Adjusted was worse.

        You would be right if the offset of the heat sink were the same at the start of the series as at the end. But there is a delta. Therefore, the poorly sited unperturbed stations’ trends are increased, anomalized or not, and the trend is invalid. (A toy illustration follows below.)

        Therefore the anomaly method can’t and won’t work.
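
        That toy illustration, with all numbers invented: anomalizing subtracts a constant, so a heat-sink offset that grows over the record passes straight through into the trend.

        import numpy as np

        years = np.arange(1979, 2009)
        true_trend = 0.02                        # assumed true warming, C/yr
        true_temp = true_trend * (years - years[0])

        # Poorly sited station: heat-sink offset grows from 0.0 to 0.3 C.
        sink_offset = np.linspace(0.0, 0.3, years.size)
        poor = true_temp + sink_offset

        # Anomaly method: subtract the 1981-2000 mean. A constant offset
        # would vanish here; a drifting one does not.
        base = (years >= 1981) & (years <= 2000)
        anom = poor - poor[base].mean()

        fit = np.polyfit(years, anom, 1)[0]
        print(f"true trend {true_trend:.3f} C/yr, poor-site anomaly trend {fit:.3f} C/yr")
        # The drift adds roughly 0.3 C over 30 yr (about 0.01 C/yr), anomalies or not.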

  15. This is the global temperature change from 2000-2010 relative to 1970-1980. Does this look like a map you get from siting problems? No. Case closed. The trend is dominated by regions that have less annual snow cover than they used to, but elsewhere it is also equally warming in populated and unpopulated areas.
    http://data.giss.nasa.gov/cgi-bin/gistemp/nmaps.cgi?sat=4&sst=6&type=anoms&mean_gen=0112&year1=2000&year2=2010&base1=1970&base2=1980&radius=1200&pol=rob

    • Your proclamations are always very persuasive, yimmy. We would have preferred a huffpo link, but what the heck. We all give up now. You win. You can stop the incessant preaching.

    • Case closed.

      I think not.

      • I would like to see Watts explain that with his station siting issues. He just misses the big picture.

        From what little I’ve seen of it, Arctic siting appears scandalously wretched, so it could easily be explained. Perhaps it is you who are missing the big picture.

    • Jim D:

      The trend is dominated by regions that have less annual snow cover than they used to, but elsewhere it is also equally warming in populated and unpopulated areas.

      Yes to the first part. Not so certain about the second part.

      http://data.giss.nasa.gov/tmp/gistemp/NMAPS/tmp_GHCN_GISS_ERSSTv4_250km_Anom0112_2000_2010_1970_1980/nmaps_zonal.pdf

      • Warming even more, if anything, in unpopulated areas. Not what Watts would want to see, for sure.

      • Jim D: “Warming even more, if anything, in unpopulated areas.”

        You mean areas where the temperatures are Krigged/made up because there are no measuring stations?

        You just don’t get it, do you?

      • Some people just don’t believe that the Arctic is warming the fastest of all areas. I would like to see Watts explain that with his station siting issues. He just misses the big picture.

      • Jim D: “Some people just don’t believe that the Arctic is warming the fastest of all areas.”

        Please point out where I or anyone else has indicated that we disbelieve that the Arctic is warming the fastest of all areas.

        You’re making stuff up again, aren’t you?

        In any case, what if it is?

        Does it prove AGW, and if so, how and why?

      • Looked like you were disputing Cowtan and Way when they added warming for the missed Arctic regions to HADCRUT4, but maybe you agree that it needs this correction. Hard to make sense of what you say sometimes.

      • Jim D: ” Hard to make sense of what you say sometimes.”

        Ummmm….Yesss…..

        I can see how someone with a mindset like yours would find that…

      • Warming even more, if anything, in unpopulated areas. Not what Watts would want to see, for sure.

        The greatest warming seems to be in unmeasured areas.

      • You mean like the Arctic Ocean? Have you heard of polar amplification?

      • Jim D:

        The warming shown by zonal maps is concentrated in the NH. That’s where most people live. As you pointed out, the higher NH trend is associated with something other than atmospheric CO2 concentrations.

        Hansen’s 1200 km radius had an average correlation coefficient for temperature variation of 0.5 over the poles and 0.33 across much of the globe. It was chosen so Hansen could claim “global” coverage despite large gaps in station data. Merely dropping the radius to 800 km reduced global “coverage” to 65% (from 80% @ 1200 km) in 1987.

        Using 1200 km infill, you only need two stations to cover most of the continental US. They really shouldn’t use the 1200 km range any longer.
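
        A rough, back-of-envelope way to test that coverage claim, with everything about it assumed (two invented station locations, a crude CONUS bounding box, flat-earth distances): count the grid points within 1200 km of either station.

        import numpy as np

        # Two hypothetical stations, one central and one in the southwest.
        stations = np.array([[39.0, -90.0], [36.0, -112.0]])  # lat, lon

        # Crude CONUS bounding box on a 1-degree grid (ignores the border shape).
        lats, lons = np.meshgrid(np.arange(25, 50), np.arange(-125, -66))
        pts = np.column_stack([lats.ravel(), lons.ravel()])

        def approx_km(grid, stn):
            # Small-angle flat-earth distance; fine for a rough check.
            dlat = (grid[:, 0] - stn[0]) * 111.0
            dlon = (grid[:, 1] - stn[1]) * 111.0 * np.cos(np.radians(stn[0]))
            return np.hypot(dlat, dlon)

        covered = np.zeros(len(pts), dtype=bool)
        for s in stations:
            covered |= approx_km(pts, s) <= 1200.0
        print(f"{covered.mean():.0%} of grid points are within 1200 km of a station")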

      • The higher NH trend is simply because the NH has larger continental areas, and the CO2 effect is largest over land points because of the lower thermal inertia. The land is responding to the forcing change at twice the rate of the ocean, as a global average, which is what would be expected from a steady rate of external forcing increase, the main agent being GHG changes. Once again, to be clear, this is not what siting issues would look like. Watts is playing his followers on the noise and hoping they won’t notice this global signal.
        http://data.giss.nasa.gov/cgi-bin/gistemp/nmaps.cgi?sat=4&sst=6&type=anoms&mean_gen=0112&year1=2000&year2=2010&base1=1970&base2=1980&radius=1200&pol=rob

      • Jim D:

        We’ll have to wait for the details but from what I’ve seen thus far, Mr. Watts’ surface station work is more defensible than GISS’ 1200 km correlations.

      • Using only properly sited stations and the tried and true 1200km Kriging Kluge will give us the coverage and accuracy we are looking for. Right, yimmy?

      • Warming even more, if anything, in unpopulated areas. Not what Watts would want to see, for sure.

        Why not?

        We are not into not seeing things. We actually want to figure out what is and is not going on. Besides, it’s entirely compatible with our USHCN standard. If we remove urban data from our Class 1\2 mix, we see no change in trend.

        UHI and population density are not the issue for trend. For offset, no doubt, but not for trend. Poorly sited non-urban stations warm faster than well sited urban stations. And urban siting is, on average, superior to non-urban siting.

    • David L. Hagen

      Jim D
      Sounds persuasive until you examine the data in light of Watts’s findings – then it falls apart. Try again with ONLY Class 1 and Class 2 stations.
      Then try only those that have NOT been perturbed, adjusted, manipulated, spindled, etc. Then for a reality check, compare that with the satellite data. Then it might be worth looking at seriously.

      • Then for a reality check, compare that with the satellite data. Then it might be worth looking at seriously.

        Done that. Details in the paper. Our CONUS trend runs ~10% below RSS2 and UAH6. Considering that the LT trend is supposed to be 10% to 40% over LST during a warming interval, our result splits the uprights and supports the work of Klotzbach and Christy. So it is not only Anthony that is being vindicated, but those LT v. surface papers as well.

      • David Springer

        You guys finally found the missing hotspot! Masked by false surface warming. Perfect.

    • Wow. Thanks for that, JimD. It shows that the changes are mostly Northern Hemisphere, just like the Minoan, Roman, and Medieval warm periods (OK, I have only seen this said of the Medieval, so sue me). So one cannot simply dismiss higher temps in those times compared to now by saying “they only show up in the NH, not globally.” Well, all we have now is a number we call the global avg. temp., but there are many steps to get to that number and it has a large error bar (or should). And we don’t have that same metric for past times like the MWP. Both data and (I believe) theory say that the temp. increases will be larger at night and in the NH (i.e., cold places). So it is great that the data you link to confirms that the changes now are also NH-centered, just like the MWP.

    • There has already been an effort by skeptics to do the temperature series from scratch. It was joined by Watts and Curry, and Mosher, who was more skeptical back then, and a few well respected statisticians, Rohde and Hausfather, and it was sparked by Climategate and a general mistrust of Jones and his CRUTEM datasets. Anyway, they ended up confirming that Jones was basically right, Watts and Curry bailed and prefer not to talk about their involvement with BEST, and we are here now with Watts trying something again to see if he can get a better answer this time.

      • Brian G Valentine

        Jim, you get worse by the day; you sound like a conspiracy theorist.

      • To my knowledge, nobody has done a complete site survey like this before. Watts deserves a lot of credit for the idea and follow through.

        Now, the details and results remain to be seen.

        But it appears that all the analyses include crap stations, kinda similar to the sub-prime mess when bad loans were homogenized into the Collateralized Debt Obligations.

      • You can have the best site in the world, but if you change your thermometer or time of ob without accounting for it, you’re screwed. BEST had a way of detecting these using neighbors. Watts? I am not sure what he does?

      • Geoff Sherrington

        Sorry Jim D,
        This scientist does not accept that Jones was basically right. In 2007 I sent him emails with questions about data quality and got evasive answers.
        I sent the emails because there were plausible strong signs of cherry picking in the UHI papers re Australia and China.
        Those strong signs did not go away with the efflux of time.
        Please stop making generalisations about people you cannot represent.
        Geoff.

      • they ended up confirming that Jones was basically right, Watts and Curry bailed and prefer not to talk about their involvement with BEST, and we are here now with Watts trying something again to see if he can get a better answer this time.

        No need to speculate. You can always just ask. Both GHCN and BEST are currently subject to the same systematic error, i.e., failure to consider microsite. When you adjust via pairwise comparison, the well sited stations tend to be identified as outliers and adjusted upwards to match the poorly sited majority.

        Homogenization (or any other pairwise) has two faces. One is where the majority of the data is correct, and we see the beaming visage of Kindly Uncle H, who cures all ills of man, beast, and missing metadata. The other face is when the majority of the data has the same error, and that is when kindly Uncle H goes postal and becomes the H-bomb.

        Systematic data error is what has happened here.

      • You can have the best site in the world, but if you change your thermometer or time of ob without accounting for it, you’re screwed. BEST had a way of detecting these using neighbors. Watts? I am not sure what he does?

        We use metadata (TOBS listed for each station). All BEST does is detect jumps, and then assumes it is a TOBS flip. But sometimes jumps just happen. Sometimes it gets warmer in a particular spot because it got cooler nearby. (That is one reason I am leery of pairwise even if I have to use it.)

      • David Springer

        If you’re Jim D you make generalizations about people you cannot represent. It’s what you do.

    • Does this look like a map you get from siting problems?

      Yup that is exactly what it looks like.

      You see quantization effects (areas have sawtooth edges).

      The level of detail is so low in that illustration (it doesn’t qualify as a chart) that it doesn’t tell you much.

      There is no way the real planet has warming that is that consistent – particularly with the square edges on circular boundaries.

      • So you think that the largest warming which is in the high latitude continental areas of Canada and Siberia is not due to regional snow-cover changes for example? Interesting. Continue.

      • Well…

        Canada gets to the adjustment issue – but I have looked at some Canadian stations and some of them are warmer. I’m not sure if they are adjusted or not.

        Michigan stations I’ve looked at are just weird. The 90s were cool, the 00s were warm, and 2014 was almost a record low. And this is presumably the adjusted data. At some point I will dump the USHCN data, diff the raw vs. adjusted, and form an opinion.

        Russia is a special case. The northern areas in the USSR got more fuel allowance if they claimed it was colder.

        The Arctic ice has been increasing for about 4 years.

        Hard to say what is going on. Climate looks like weather in the 21st century.

        The albedo (cloud cover) seems to be driving the temperature.

        I expect it to get up to 0.5 °C warmer as the 20th century warming gets fully incorporated and that happens on a century time scale.

        Bottom line: in 2014 only 35% of CO2 emissions stayed in the atmosphere (CDIAC Dec 2015 Global Carbon Budget data). So even if I were worried that a GINORMOUS CO2 increase could doom us, I wouldn’t be worried. CDIAC dialed back their estimates of CO2 emissions despite China admitting they cheated, which is a bit odd.

  16. I made a lot of short independent comments at the end of the WUWT thread. I’d like to consolidate them and put them where they’ll get more attention.
    ——————

    A question: Has this wobbled N-G’s warmism any?

    This is almost as exciting as reading the first Climategate thread here. (I was the first commenter on it.) Fortunately, the warmists won’t be able to whitewash this one away. AW has put a spoke in the wheels of the bandwagon.
    And to think that AW had to pass the hat to pay for his way down–and had to drive to cut costs. While money was no object for the 40,000 attendees in Paris. They ought to hold the next COP in Chico.

    Maybe you’ll be called to testify by Lamar Smith! (Or one of your team.)

    This ought to make the satellite data sets the gold standard, and relegate the land-based records to the lumber room.

    !!!!!!
    Ev’ry valley shall be exalted,
    and ev’ry mountain and hill made low;
    the crooked straight
    and the rough places plain.
    !!!!!!
    (Isaiah 40:4)

    THEY could have done this study. THEY could have told their grad students that this would make a great dissertation. THEY should have wanted to ensure they had a firm foundation. For example, NOAA could have told its stations to send in photos of their sites. But NOAA didn’t. In fact it refused to ask them, when the suggestion was made to it.
    But they didn’t look. Because they didn’t want to see.

    The necessary (by inference) revision of the global temperature record puts it below the lower bound of the models’ projections. So now we can say, “The consensus is 97% wrong.” How pleasant to turn the tables! And how deserved!

  17. My reaction is the same as it was in 2012

    http://climateaudit.org/2012/07/31/surface-stations/#comment-345345

    “Posted Jul 31, 2012 at 1:19 PM | Permalink
    ‘but the idea behind this was to put it out into the blogosphere for trial by fire.”

    Precisely.

    You will note that data for this paper is absent. That effectively means that we cannot do a proper review. We can’t audit it.

    Prediction: special pleading will commence.

    Latimer Alder
    Posted Jul 31, 2012 at 2:30 PM | Permalink
    Re: Steven Mosher (Jul 31 13:19),

    Mosh is right. You have to publish the data as well as the press release.

    You cannot even begin to claim the high ground without doing so. Leave such nonsense to the stuffy academics.”

    #####################################

    Zeke wrote

    http://climateaudit.org/2012/07/31/surface-stations/#comment-345355

    I wrote… and McIntyre commented
    http://climateaudit.org/2012/07/31/surface-stations/#comment-345389
    Posted Jul 31, 2012 at 3:40 PM | Permalink
    But Hu.

    1. Anthony has put it out for blog review and cited muller as a precedent for this practice. that practice included providing blog reviewers with data.

    2. Anthony brought Steve on board at the last minute even though hes been working on this paper for a year. Steve has a practice as a reviewer of asking for data. Since we bloggers are asked to review this, we would like the data.

    3. If they want to release the data with limitations, that is fine too. I will sign an NDA to not retransmit the data, and to not publish any results in a journal.

    4. You have to consider the possibility that Anthony and Steve could now stall for as long as they like, never release the data, and many people would consider this published paper to be an accepted fact.

    Steve: Mosh, calm down. this is being dealt with.

    ################################################

    My reaction again.

    1. I would like to see the data.
    2. In 2012 I thought the classification of stations was publishable on its own.
    3. I was willing then and am willing now to sign an NDA, basically promising
    not to copy the data, transmit the data, or publish anything based on the data, or even talk about it.

    • Well, if the data isn’t forthcoming after a few years, or if it is lost, hidden, etc., then you can really have something to complain about.

      Meanwhile, I’m wondering if any public funds were used to compile the data. If not, it’s really up to the owner (presumably Anthony) to do as he wants with it.

      • Anthony Watts stated that data compilation was done with private funds, not public funds. Main benefit from releasing data is to establish credibility.

      • Much of the siting data was collected by volunteers who weren’t paid for the service – other than to put yet another nail into the warmist coffin.

      • Meanwhile, I’m wondering if any public funds were used to compile the data.

        This paper is entirely unfunded.

    • David Springer

      Steven Mosher | December 18, 2015 at 12:50 am | Reply

      “My reaction is the same as it was in 2012”

      —————————————————————–

      Obviously. Knee jerk. And we could leave out the knee part.

    • David L. Hagen

      Mosher: See Watts above. After Muller breached the NDA: twice burnt, thrice shy.

      • 1. There was no NDA
        2. We did exactly what we promised to do.
        3. Anthony was pissed over other matters… perhaps I should pull out some emails…

      • Pull out the emails, Steven. Do you got the ones from 2012? That seems to be the key year. If you got enough emails, I bet Tony will relent and give you the data. If that’s really what this is all about.

    • > http://climateaudit.org/2012/07/31/surface-stations/#comment-345345

      I don’t think Ron’s and Nick’s questions have been answered on that thread.

  18. Since the trend for the “unperturbed” Class 1 or Class 2 stations is derived from only 92 stations across the country, it would be interesting to see the error bars/uncertainty associated with this trend, compared to the uncertainty for the full USHCN.

    • Heck, I would like to see a random sample of the 92 stations…

      He doesn’t have to release them all… just half.

      • Why are you talking about 2012, Steven? It’s almost 2016. Evan is answering questions over on WUWT. You seriously think they aren’t going to release the data?

        evanmjones
        December 17, 2015 at 4:45 pm

        It’s not perfect, but it’s as good as it can reasonably be. We define our terms and what we think is going on in the paper, itself.

        We will also be archiving the data and formulas in Excel, which will put it in a format where anyone can dicker with it or change the parameters: add or drop stations, change ratings, add categories (i.e., subsets), add whatever other version of MMTS adjustment you like, that sort of thing. (And I have some iconoclastic notions of how MMTS should really be addressed.)

        But the thing is, we welcome review. Some station ratings are obvious at a glance, but there are a few close calls. So it will all be open for review, complete with tools to test and vary. This paper is not intended as an inalterable doctrine. It is just part of a process of knowledge, in a format that is easy to alter and expand.

        If anyone has any questions, I’ll be glad to answer.

      • Based on this comment and others below, it seems Mosher will get all that he is asking for. What’s with all the whining?

      • Don.

        People asked for my comments. That comment hasn’t changed:
        show me the data.

        Last time Zeke and I made comments… Do you think either one of us
        was asked to be a co-author? Let’s take Zeke, because he is nice and I am not.

        Anthony and Company work for a year. They give it to Steve Mc. and Christy. Those two paste their names to garbage!!! Zeke finds the problem in TWO FRICKING SECONDS… and he gets a polite
        thank you. He should have been invited to co-author.

        So: show me the data and you’ll get my comments.

      • Comment hasn’t changed.

      • “Those two paste their names to garbage!!! Zeke finds the problem in TWO FRICKING SECONDS…. and he gets a polite
        thank you. he should have been invited to co author.”

        Then you really ought to feel bad for McIntyre, who finds really deep-rooted problems that take, I would guess, many days or months to find, carefully explains them, and, as I recall, on a couple of occasions doesn’t even get the credit for finding them.

        It’s not as if he finds some on-the-surface problem in two fricken seconds and expects to be a co-author, or something.

      • David Springer

        “he [Zeke Hausfather] should have been invited to co author”

        John Nielsen-Gammon was invited instead. Zeke is a piker. No PhD, and his master’s is in the wrong field. Of course Zeke’s inadequate qualifications exceed yours by a step or two, as evidenced by his mostly not lowering himself to the role of perpetual climate-science blog comment warrior.

        Gammon on the other hand has world-class qualifications exceeded by essentially zero others.

        Professor John Nielsen-Gammon is an American meteorologist and climatologist. He is a Professor of Meteorology at Texas A&M University, and the Texas State Climatologist, holding both appointments since 2000. Born: 1962 Education: Massachusetts Institute of Technology

        MIT, doctorate in meteorology, full professor, state climatologist for Texas.

        If Texas was a country it would have the 11th largest economy in the world.

        And you suggest Zeke as co-author. Unphuckingbelievable.

      • mosher, “Those two paste their names to garbage!!! Zeke finds the problem in TWO FRICKING SECONDS…. and he gets a polite
        thank you.”

        Didn’t Zeke find that if you use a crappy method, you have to include TOBS adjustments to fix the choice of a crappy methodology? TOBS has virtually zero impact on Tmax and can have a significant impact on Tmin, but the adjustment is based on Tmean. If you are trying to isolate issues with Tmax and Tmin, you would look at the more pristine sites that have the least need of any adjustment, including TOBS. This is a bit like the slicing method with less cutting.

      • It’s unfortunate and annoying that the release of data is going to be after the press release for the paper.

        Mosher’s right to be impatient.

      • Scott, “It’s unfortunate and annoying that the release of data is going to be after the press release for the paper.”

        I believe that if someone had not stolen portions of a unique data collection, all of that would be online now. Then the first stage would have been the station rating recommendations, and the second stage the methodology. However, once a preemptive paper is published, it complicates the “uniqueness” requirement for most peer-reviewed journals.

      • It was a press release from the AGU promoting a AGU poster presentation, Scott. It ain’t a freaking paper, yet. Try to pay attention. Don’t let Mosher’s shiny objects distract you.

      • Somebody here is very bitter… and should withhold comments until a calmer moment of rationality is regained.

      • Most of the time, when data is released, it is upon publication, not before.

      • Remember what I wrote about the SAME PAPER, and the SAME DATA, in 2012:

        “But Hu.

        1. Anthony has put it out for blog review and cited muller as a precedent for this practice. that practice included providing blog reviewers with data.

        2. Anthony brought Steve on board at the last minute even though hes been working on this paper for a year. Steve has a practice as a reviewer of asking for data. Since we bloggers are asked to review this, we would like the data.

        3. If they want to release the data with limitations, that is fine too. I will sign an NDA to not retransmit the data, and to not publish any results in a journal.

        4. You have to consider the possibility that Anthony and Steve could now stall for as long as they like, never release the data, and many people would consider this published paper to be an accepted fact.

        Steve: Mosh, calm down. this is being dealt with.”

        ######################################

        How many times, including today, have people considered this to be a published fact?

        Hint 1: I think you will find something in microsite. BUT

        Hint 2: they use a method that Ross McKitrick has roundly criticized.

        Go figure.

    • And they are not looking at the trend only for these 92 stations. They are using them, presumably via gridding (and interpolation?), to calculate the temperature anomaly for the entire CONUS each month.
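
      For readers wondering what gridding means here, a minimal sketch with fabricated numbers (92 random station locations, made-up anomalies, a 5-degree grid — none of it from the actual paper): average the stations inside each cell so clusters don’t dominate, then area-weight the occupied cells into a CONUS mean.

      import numpy as np

      rng = np.random.default_rng(1)

      # Fabricated monthly anomalies for 92 stations scattered over the CONUS.
      lat = rng.uniform(25, 49, 92)
      lon = rng.uniform(-124, -67, 92)
      anom = rng.normal(0.5, 0.4, 92)

      # Bin stations into 5-degree cells.
      cells = {}
      for la, lo, a in zip(lat, lon, anom):
          key = (int(la // 5), int(lo // 5))
          cells.setdefault(key, []).append(a)

      # Cell means, weighted by cos(latitude) of the cell center.
      means = np.array([np.mean(v) for v in cells.values()])
      weights = np.array([np.cos(np.radians((k[0] + 0.5) * 5.0)) for k in cells])
      print(f"CONUS mean anomaly: {np.average(means, weights=weights):+.2f} C")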

      • That wasn’t a criticism Evan, just a clarification.

      • Criticism is fine.

        Thing is that the poorly sited stations outnumber the well sited by almost 5:1. So the pairwise (between both types) is primarily with poorly sited stations. Homogenization does not take an average; it identifies a majority-mean and adjusts the minority to conform (see the sketch after this comment).

        So which set of stations do you suppose are getting adjusted? And in which direction do you imagine they are adjusted?

        That’s what’s going wrong.
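
        A stripped-down illustration of that majority-mean mechanic, with made-up numbers (0.03 C/yr for five poorly sited neighbors, 0.01 C/yr for one well sited station, and an arbitrary 50% conformity factor standing in for a real pairwise algorithm):

        import numpy as np

        years = np.arange(30)
        rng = np.random.default_rng(2)

        # Five poorly sited neighbors warming fast, one good station warming slowly.
        poor = [0.03 * years + rng.normal(0, 0.05, 30) for _ in range(5)]
        good = 0.01 * years

        # Caricature of pairwise homogenization: a station that diverges from
        # the majority mean of its neighbors is pulled toward that mean.
        neighbor_mean = np.mean(poor, axis=0)
        adjusted_good = good + 0.5 * (neighbor_mean - good)

        def slope(series):
            return np.polyfit(years, series, 1)[0]

        print(f"good raw {slope(good):.3f}, good adjusted {slope(adjusted_good):.3f}, "
              f"poor mean {slope(neighbor_mean):.3f} C/yr")
        # The minority trend is dragged toward the majority's, not vice versa.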

    • There is a statistically significant difference. In spades.

      • I thought so. My post using just the much smaller CRN1 subset showed this, though without statistical significance. Cannot wait to read the paper when it gets published.

    • I put them in the bar charts. J N-G is doing the stats, and a full analysis will be included in the paper. (For the full set of Class 1\2 vs. 3\4\5, he gets over 99% confidence, FWIW.)

  19. Anthony Watts is on the record with this:
    ‘When the journal article publishes, we’ll make all
    of the data, code, and methods available so that
    the study is entirely replicable.’

    • Of course, he first published his conclusions back in 2012, so there’s no knowing how long it might be before he makes anything available for people to check his work. I would think that’d be a reason not to publish a press release, but, you know, apparently publishing press releases when people have nothing they can examine is cool by him.

      • And those were faulty. The errors identified then appear to have been corrected now. A press release about a poster presentation giving the poster conclusions is fair and ordinary. Warmunists and universities do it all the time. The data and code will be released with the paper. Cool your jets.

      • McIntyre versus Rud.

        Who makes more sense? McIntyre in 2012,
        or Rud, who tells Brandon to cool his jets… over three years later?

        “Steve McIntyre commented:

        “Steve: I agree that there is little point circulating a paper without replicable data – even though this unfortunately remains a common practice in climate science. It’s not what I would have done. I’ve expressed my view on this to Anthony and am hopeful that this gets sorted out. Making the data set publicly available for statistically oriented analysts seems far more consistent with the crowdsourcing philosophy that Anthony’s successfully employed in getting the surveys done than hoarding the data like Lonnie Thompson or a real_climate_scientist.

        It would have been nice if you’d spoken out on any of the occasions in which I’ve been refused data. You are entitled to criticize Anthony on this point, but it does seem opportunistic if you don’t also criticize Lonnie Thompson or David Karoly etc.”

      • Mosh,

        How does what Steve Mc said help your case? He said it was a common practice in climate science and that you should complain just as strongly about others if you are going to complain about Anthony. Also consider that a non-climate scientist (yes, he is a meteorologist) running one of the world’s most popular skeptic blogs is going to have a harder time getting published and will be criticized for minor choices in data analysis in ways that those in the CLUB don’t have to deal with, so I understand why he is reluctant to release it all too soon. Maybe he should trust you if you sign an NDA. I really don’t know how to judge that.

      • Vintage McIntyre (2012) has very little relevance to a poster presented at the 2015 AGU meeting. Do you have any quote from Steve Mc on the current controversy, which is mostly about you whining about Tony not giving you data that he is not giving to anyone else? You are making a spectacle of yourself, Steven.

      • ==> “Vintage McIntyre (2012) has very little relevance to a poster presented at the 2015 AGU meeting.”

        Don makes an excellent point. Just because someone offers a set of standards for one situation doesn’t mean that there should be an expectation that they would be applied in very similar situations three years later.

      • “He said it was a common practice in climate science and that you should complain just as strongly about others if you are going to complain about Anthony. ”

        CLUE FOR YOU EINSTEIN

        1. I did complain about others. Remember who coined the phrase
        “free the code, free the data”.
        2. NOT A SINGLE SKEPTIC, least of all Anthony, complained
        when I went after the data of Jones and the code of Hansen.
        NOT A ONE. No skeptic ever called my demands for data “whining”.

        On two occasions now Anthony has posted stuff asking for help and criticism to do “open science” of sorts… And as I pointed out,
        if he wants good criticism, he has to supply the data.

        On many occasions Anthony has complained about science by press release… I AGREE! But now he wants to do his own science by press release.

        Imagine I did a study that proved CO2 was the cause… of all the evil.
        And I did a press release… but I refused to give you all the data…
        and YEAR after year I said… wait for the data… I need to publish.

        At some point folks are within their rights to say: shut up until you do publish, or publish outside the standard “Science” and “Nature” collection of journals. If the data is true and the method sound, folks like me couldn’t give a rat’s ass about the journal name or “impact factor”.

      • Mosher: Don’t you love newbies who weren’t online when you had WUWT shilling your Climategate book to deniers? Don Don is right, this temperature stuff is a dead horst.

      • Horse, “Don Don is right, this temperature stuff is a dead horst.”

        Most of it is a dead horse, not all of it. It seems like a dead horse because the defenders stick to the dead parts. The majority of the temperature record is LIG max/min, which limits accuracy. Once the MMTS was introduced there was a different set of problems. No matter which you pick as your “standard”, you will get different uncertainty ranges and variance. To me, ideal would be a method that maintains a consistent uncertainty for whatever length you want the record to be, so you don’t have “almost unbelievable” accuracy at one end and +/- a degree at the other. Otherwise you just end up confusing what types of error you are messing with.

        There is the same problem with paleo.

      • OMG! It’s just like 2012. Wheeeeere’s McIntyre when you need him? Tony WUWT is at it again. Poster in the hall at AGU in front of three or four people. Poster science by press release! Oh, the freaking humanity! Somebody please report former Mosher mentor Tony WUWT to the freaking AGU!

        I will spell it out for you, Steven. Tony won’t show you the steenking data because you have been Svengalied by Muller and you can’t be trusted. You will just have to live with your choices.

      • > remember who coined the phrase “free the code, free the data”

        The third bit is missing. First it was “free the debate.” Then a book got published. Then it became “open the debate.”

      • “I will spell it out for you, Steven. Tony won’t show you the steenking data because you have been Svengalied by Muller and you can’t be trusted. You will just have to live with your choices.”

        I remember when Jones didn’t trust Hughes, and said: why should I give you my data when you are just going to find mistakes?

        So, what is Anthony afraid of?

        1. That I will take his data and publish before him? Not gunna happen;
        we took the data he gave us before and didn’t publish before him.
        In fact we SUPPORTED HIS CONCLUSIONS IN OUR PAPER!!

        2. Take his data and find errors? Wouldn’t that make his submission BETTER?

        So what is he afraid of? That I will share it? Nope, he can sue me if I do.
        That I will publish before him? Nope, he can sue me if I do.
        Plus I only ask for half the data…

        But thanks for arguing that scientists DON’T have to share data with people they don’t trust… wait… Jones didn’t trust McIntyre or Willis or me…

        You’ve set a fine standard for science.

      • Steven Mosher: “So, what is Anthony afraid of?”

        Judging by your extraordinary level of agitation and entirely unprovoked attempts to discredit AW et al. even before they publish their paper, Mosher, I think the question should be “so, what is Steven Mosher afraid of?”

      • I haven’t set a standard for science, Steven. I am just an outside, objective observer telling it like it is. I don’t know why you scientists, quasi-scientists and wannabes can’t just get along and share your little data things. Especially you and Tony. You used to be tight.

        Anyway, it’s PR about a poster. It may never be a paper. We have more important things to think about. Our brainpower is wasted on this, Steven.

      • It appears Steve McIntyre is no longer an author. If true, why is Steve McIntyre no longer an author of the paper?

    • I would even take a RANDOM SUB SAMPLE of the data.

      At one point I asked for a sub sample of the 92 stations.

      For an entirely un related project.

      Answer.

      No.

      • You’d think he might not like you much or something.

      • Anthony says you will get your data. I can see why he wants to keep it to himself given the history. Plus, he goes to all the trouble to gather it, why shouldn’t he get first shot at analysis? That’s only fair.

      • AW has now posted links to the two prior times he got burned on this early provision of data thingy. The second was by BEST, and in express contravention of the data agreement. You will just have to be patient for the paper, as your organization PROVED itself untrustworthy in this regard.
        You all made your bed. Now lie in it and stop complaining here. Why not apologize for your organization’s previous bad behavior over there?

      • jim2.

        I have promised him that I will NOT publish anything using the data.
        I will sign an NDA.

        Further, I ask for only a SUB SAMPLE of the 92 stations, or a subsample
        of the BAD stations.

        And NOT for the reason you think. Basically, I want to use the station
        data to build a classifier that can work on worldwide stations.

        Basically, take a subsample and see how well I can predict which group other stations in his collection belong to.

        I don’t need all 410 stations. For this project I DON’T WANT all 410,
        just a subsample. With just a subsample I can prove out the classifier.
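
        For what it’s worth, a minimal sketch of that proof-of-concept, with every detail invented (synthetic features standing in for imagery-derived ones, a plain logistic regression as a placeholder classifier, a made-up labeling rule): train on a subsample, score on the held-out stations.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(7)

        # Synthetic stand-ins for siting features (e.g. distance to the nearest
        # building, paved fraction within 30 m); label 1 = well sited.
        n = 410
        dist_building = rng.exponential(40, n)
        paved_frac = rng.beta(2, 5, n)
        X = np.column_stack([dist_building, paved_frac])
        y = ((dist_building > 30) & (paved_frac < 0.2)).astype(int)

        # Train on a subsample, verify on the rest (the point of asking
        # for only half the stations).
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                                  random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")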

      • “…the two prior times he got burned on this early provision of data thingy.”

        Can someone explain the harm caused to Anthony when he provided data prior to publication? Is the problem that, when he made the data available and his errors were criticized, his feelings were hurt?

        Or were his losses financial? Reputational? In what way was he a victim, such that so many bleeding-heart “skeptics” feel compelled to defend his refusal to make his data available along with the publication of the conclusions he drew from that data?

      • Can someone explain the harm caused to Anthony when he provided data prior to publication?

        They used a preliminary, unedited version of his data to try to discredit his work in advance.

        There’s also claims of this paper being a “death blow” to the surfacestations project. I’m sure in some circles, they believe that to be true. However, it is very important to point out that the Menne et al 2010 paper was based on an early version of the surfacestations.org data, at 43% of the network surveyed. The dataset that Dr. Menne used was not quality controlled, and contained errors both in station identification and rating, and was never intended for analysis. I had posted it to direct volunteers to so they could keep track of what stations had been surveyed to eliminate repetitive efforts. When I discovered people were doing ad hoc analysis with it, I stopped updating it.

        I know your game is just to try to waste people’s time with your dirty insinuations. We all do. It’s clear from the way you make these demands without even bothering to follow links and find out for yourself that you’re operating in very bad faith.

        Your comments here are a prime example of dishonest rhetoric.

      • Mosher, I truly believe your motives are pure in this case. But if I had had to organize volunteers, collate all the data, double- and triple-check it, get burned by letting others have it, etc., etc. … I’m just saying I can understand AW’s position. Besides, you will get it anyway; a little later perhaps, but you’ll still get it.

        I wish I had time to dig into BEST’s code. Maybe someday. What I’m really curious about is how to generate error bars using the sparse, but somewhat global, temperature measurements. If I run an experiment in the lab, I can set up 10 runs of it as similar as possible; take measurements at predetermined intervals, etc. Easy to calculate SD and other stats.

        But in this case, a given measurement may be unique in space – ship measurements, for example. Even though you can use relationships, like lat/lon (and maybe altitude someday), to extrapolate the data, the proper statistical technique to get valid error bars, say daily, is not obvious to me. And you have (among others) at least the two cases of a daily temperature where a stationary station exists and where the temp for that location is calculated. Is there a name for the technique BEST uses to create error bars? Is it standard, in the sense that others have developed and proved out the method on synthetic data?

      • jim2

        Jackknife and Monte Carlo.

        Pretty standard stuff.
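
        For anyone unfamiliar with the first term, a minimal jackknife sketch on toy data (20 invented stations; this is not BEST’s actual code): recompute the trend leaving out one station at a time and turn the spread into an error bar.

        import numpy as np

        rng = np.random.default_rng(3)
        years = np.arange(30)

        # Toy network: 20 stations sharing a 0.02 C/yr trend plus noise.
        stations = 0.02 * years + rng.normal(0, 0.2, (20, years.size))

        def trend(block):
            # Slope of the network-mean series.
            return np.polyfit(years, block.mean(axis=0), 1)[0]

        full = trend(stations)
        loo = np.array([trend(np.delete(stations, i, axis=0))
                        for i in range(len(stations))])
        n = len(stations)
        se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
        print(f"trend {full:.4f} +/- {se:.4f} C/yr (jackknife SE)")

        Monte Carlo does the same job by re-running the estimate many times with perturbed inputs instead of dropped stations.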

      • ==> “It’s clear from the way you make these demands without even bothering to follow links and find out for yourself that you’re operating in very bad faith.”

        Lol! Demands? Too funny.

        They used a preliminary, unedited version of his data to try to discredit his work in advance.

        You seem to be confused in thinking that you’ve answered my questions:

        Here, I’ll link them so you can read them again:

        https://judithcurry.com/2015/12/17/watts-et-al-temperature-station-siting-matters/#comment-752424

        What was the material harm? Try to answer again.

      • Was your point that he was materially harmed because people tried to discredit his work but failed to do so?

        Presumably his work should be able to stand scrutiny. If someone invalidly attempts to discredit it, then it causes him no harm.

        I don’t see how that is material harm.

        Try again. You might consider leaving off the insults when doing so.

      • What was the material harm? Try to answer again.

        More dishonest rhetoric. Question for the class: why is that sentence dishonest?

      • AK –

        Substantial harm. Significant harm.

        Use your own freaking definition. What was the material, substantial, significant harm caused to Anthony in the past? From what you wrote, it seems that someone used the data to try to discredit his work, and according to your excerpt, they failed to do so.

        Do you consider that to be material/substantial/significant harm?

      • David Springer

        Mosher you’re a scumbag and Watts doesn’t trust you. Deal with it.

      • Do you consider that to be material/substantial/significant harm?

        More dishonest rhetoric.

      • Be a little patient, Mosh. We are on the final lap. Sorry I took so long. We are tweaking it for submission. You’ll get it, all of it. In easily alterable Excel so critical reviewers can play around with it. I want people to see our work. I’m proud of it. I didn’t put in thousands of hours to have my hard work chucked into some inaccessible archive.

    • So we got two little soreheads whining about something that happened in 2012. That effort blew up on them and they have worked several more years to get it right. It’s not like your tax money is supporting the gig.

      You are like two little eager Star Wars fanboys who want to see the big show on the first day, but don’t have the patience to stand in the freaking line.

      • What happened to “show me the money” Don? Must be an impostor.

      • Some willy is starting to rub off on you, horse grabber. You should avoid his vicinity.

      • haha.. they are afraid to give the data to an English Major.

      • haha.. they are afraid to give the data to an English Major.

        What data Steven? What makes you think they’ve finished making the changes to their cleaning process required to respond to critical reviews of their paper?

      • if they need to “clean” the data in order to respond to criticism I would have to question the integrity of their data

      • or their workflow at the very least

      • if they need to “clean” the data in order to respond to criticism […]

        The whole thing is an exercise in data cleaning. They throw away the majority of reporting stations because they’re too “dirty”. Valid criticisms of their methodology might well require a few changes to which stations they throw away. All this should certainly (IMO) be accounted for in the final product. Personally, I’m going to remain skeptical till I see it replicated.

        And till I find out what meta-data they’ve preserved WRT their data cleaning exercise.

      • haha.. they are afraid to give the data to an English Major.

        Well, you know how it is, Mosh. English majors are the worst. It’s the way of the wicked world. I’m sure you knew that going in.

      • David Springer

        Actually they are afraid to give it to someone with a demonstrated lack of integrity.

  20. If we can embrace Karl, Cowtan, Way, and the others taking a new look at the data sets, then why not Watts?

  21. Well, sigh… this again.

    Until they deploy a network of sensors only in pristine areas, with sensors designed, tested, and calibrated to an exact engineering standard, this whole land temperature thing is a joke. There is no reason to put global climate sensors anywhere there is any UHI. “Rural areas” isn’t good enough. I lived in a rural area – it went from dirt to blacktop in 30 years.

    The moment they start “adjusting” and “homogenizing” it becomes subjective. It comes down to which cherry picker is better at picking cherries.

    As a side note, it was pointed out by a link from SM that the “length of day” change indicates a 20th-century (pre-1990) sea level rise of 1.2 mm/yr. The rotation change this century is about 1/3 what it was last century. It is hard to argue that there is much current warming. If there were significant ocean warming or ice melting, the planet would be slowing down like someone put the brakes on. That isn’t happening.

    The rate of warming this century is about 1/3 of what it was last century, going by the rotational change – and that is the only measurement known to be very accurate. So either it isn’t warming much, or Antarctica is gaining a lot of ice. The rotation rate indicates the pause is real and that attempts to “kill the pause” are misguided.

    • PA: “The moment they start “adjusting” and “homogenizing” it becomes subjective.”

      No, no, no!

      According to Mosher, it’s all done on magic computers by “AlGore-ithms”, so it’s entirely untouched by human hand.

  22. Denizens may appreciate a thoughtful review

    http://variable-variability.blogspot.co.uk/2015/12/anthony-watts-agu2015-surface-stations.html

    It was not clear from Judith’s post which journal had published the paper. It appears from Victor’s review that it is in fact a poster and has not yet been submitted for review.

    • Here’s what Watts says on his thread:

      We are submitting this to publication in a well respected journal. No, I won’t say which one because we don’t need any attempts at journal gate-keeping like we saw in the Climategate emails. i.e “I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow — even if we have to redefine what the peer-review literature is!” and “I will be emailing the journal to tell them I’m having nothing more to do with it until they rid themselves of this troublesome editor.”.

      When the journal article publishes, we’ll make all of the data, code, and methods available so that the study is entirely replicable. We feel this is very important, even if it allows unscrupulous types to launch “creative” attacks via journal publications, blog posts, and comments. When the data and paper is available, we’ll welcome real and well-founded criticism.

    • Also on the Watts thread, there’s this

      evanmjones says:
      December 17, 2015 at 8:50 pm
      You’ll have to wait until publication. Twice already we’ve had reason to regret releasing preliminary data. So we must tread with caution. But you shall have it. All of it. That’s a cross-my-heart promise.

      • So let’s wait until publication to see how important this is.

        Judging from Victor’s post, it could be a long wait yet, but we’ll see.

        Watts’ conspiracy ideation you quote is amusing.

      • Trolls are amusing.

      • Projectors project

      • Why can’t you annoying little chumps just be glad that Obama has saved the planet.

      • Chill, Don. Have a read of Victor’s post. You’ll learn something.

      • I read venoma’s post, trollguy. Do you chumps do similar deconstructions every time one of your consensus boys presents a poster at AGU, or publishes a pal-reviewed paper?

        In the grand scheme of things, I don’t expect Anthony et al.’s paper to have a significant impact. What’s all the gnashing of teeth about? You don’t have to try to marginalize and demonize the well-funded deniers any more. The planet has been saved. Merry Christmas!

      • Don, all I posted was a link to Victor. You’ll have a stroke if you can’t calm down a little.

        Look after yourself. Relax over the holidays. Maybe take a break from blogs, they seem to cause you stress.

        God Jul!

        I usually like Aussies, trollguy. I will have to make an exception in your case. I am the calming influence on this one. You clowns are making a mountain out of a molehill. It’s a freaking AGU poster. A WUWT AGU poster. Get over it. Wallow in the glory of your big victory at the Soiree d’ Paree. Weren’t you invited? Why you so mad?

      • Don, I agree with you, as I so often do.

        And everything I’ve posted agrees with you too.

        Your anger saddens me, Don.

        For you. For Christmas.

        http://www.psychology.org.au/publications/tip_sheets/anger/

      • That’s a really dumb tactic, trollguy. You people are crapping your drawers over A WUWT poster presentation. It’s like Tony et al. have breached the walls of the consensus goon’s fort and they have to charge into the breach. Little climate soldiers rushing about whining and debunking. Clowns. This is why the vast majority of the folks in the world don’t take your climate alarmism seriously.

      • > This is why the vast majority of the folks in the world don’t take your climate alarmism seriously.

        Why would anyone take alarmism seriously, Don Don?

        Alarmism is alarmist, after all.

        Enjoy the season, and don’t drink when driving that bandwagon.

      • Willard: you lost the catastrophe argument, no matter what your acolytes say back over at ATTP where you and ken licked your wounds from the last drubbing. Now, you doth proteth thoo muth. Dimmy little Don Don nails the whole thread: Much Ado About Nothing and you people are having an aneurysm over a pathetic denier poster.

      • verytallguy: “Chill, Don. Have a read of Victor’s post. You’ll learn something.”

        I doubt it.

      • Willard: “Why would anyone take alarmism seriously, Don Don?”

        Dunno Willikins.

        Tell you what, as a staunch alarmist yourself, why don’t you tell us?

      • > you lost the catastrophe argument

        Are proofs by assertion the new fad in geologists’ networks, Regions that Lie Between Normal Faults?

        ***

        > Much Ado About Nothing and you people are having an aneurysm over a pathetic denier poster.

        This mind reading not only minimizes Willard Tony’s big moment of science by press release, but also Judy’s opinion of it:

        Anthony Watts has presented an important analysis of U.S. surface temperatures, in a presentation co-authored by John Nielsen-Gammon and John Christy.

        It’s amazing what Denizens could discover just by reading.

        Assertion? Networks? You admit fear that little Auntie J has real influence beyond Ted Cruz and the muttering purchasers of trivial ebooks (bitcoin and PayPal gladly accepted). It’s ever so mystifying, the daily chaff that turns Willard’s crank. Evolution is a clearer concept to grasp than a goal to achieve, word-boy.

      • > You admit fear […]

        A quote might be needed, unless that’s another way to blindly mine for mind states, like a Kriging king would. After having minimized AGW in a recent thread, you now minimize Willard Tony’s science-by-press-release moment, an episode where NG got us all covered.

    • “Even paranoids have real enemies.”

    • The main noted issue seems to be the conjecture that choosing sites that have no metadata indicating a move, but that are well positioned, will include those that were moved because they were poorly positioned. Plausible. But it would have been nice if Victor, when discussing the methods used to ensure pristine weather stations, had also mentioned that the author claims he interviewed curators.

      The second claim is that the paper is weak because no reason is given for the divergence. Of course, building things would be one, and I think that’s mentioned. Then there is the ancillary claim that there has been a slowdown in the divergence. That could be because the economy is sucking. It’s data, and hopefully data that isn’t lying.

      We all ought to be glad if there isn’t as much warming as thought, right (so long as we aren’t in fact cooling into a little ice age or worse)? It will give people more time to do what’s necessary.

    • “Denizens may appreciate a thoughtful review”

      Yep. Let us know if you ever find one. Here’s a hint: if the first thing your climate hero does is link to Hotwhopper, then there wasn’t much thought involved.

      • Schitz, closing your mind to expertise merely condemns you to ignorance.

        It’s not big or clever.

      • David Springer

        “Schitz, closing your mind to expertise merely condemns you to ignorance.”

        You should know after living in said condemnation for so long now.

      • Ok tall, since you apparently need it, here’s your second hint.

        Many suspect that all the adjustments and homogenization of the temp records are adding a spurious warming trend, or at least making it larger. Anthony’s work tries to check this out by looking at just the stations that have never had station moves, changes in observation times, or anything else that would require their data to be adjusted. And apparently they found a significant difference compared to the official adjusted data.

        Now Double V comes along and says that we don’t know FOR SURE that those stations are really good. Yes, the metadata says they’re good, and the people Anthony talked to say they’re good, but he can imagine some really unlikely situations where they might not be. So what Victor thinks Anthony et al. need to do with this is… wait for it…

        HOMOGENIZE IT!

        That’s right, the very same process they just effectively proved is causing greater warming in the adjusted data. And what would you have to adjust the good stations with if you homogenized them? Why, the bad stations that needed all the adjustments in the first place, of course.

        So, do I need to give you any more hints? Or do you think you can figure the rest out on your own?

      • Schitztee: “HOMOGENIZE IT!”

        Homogenization – AKA Mannipulation

      • Schitz,

        yes, I understand the issues. You would understand them much better if you respected the views of those, like Victor, who have deep expertise.

        Your perfunctory dismissal of them, whilst protectively insulating your worldview, prevents you from developing anything beyond a knee-jerk, caps-on denial of the issues a genuine expert very respectfully pointed out.

        If you wish to be truly sceptical, you need to consider expertise that disagrees with you. Otherwise, it’s just denial.

      • verytallguy: “You would understand them much better if you respected the views of those, like Victor, who have deep expertise.”

        Heh, you’re funny!

      • David Springer

        I read Venema’s response. I did indeed learn something. I learned that Victor Venema is a shallow thinker. In his first criticism Venema argues that there must be far fewer than 410 unperturbed stations because there are, on average, two detectable breaks per station per 30 years so we should expect to find only about 158 unperturbed stations.

        That’s flawed. He is assuming a random distribution of perturbations which is not likely to be the case. A station that is perturbed once is far more likely to be perturbed more than the average of two times. This would be like saying that the average person goes to church twice per year so there should be X number of people who have never gone to church. That conclusion fails to take into account that people who go to church at all go far more frequently than twice per year and are likely to go once per week or 52 times per year. This drives the average way up. Similarly a weather station that is perturbed at all is likely to be perturbed more than the average number of times.

        To not recognize that perturbation distribution is not likely to be random is a sure sign of a shallow thinker. Or a dishonest one.
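
        A quick numerical illustration of Springer’s point (a hedged sketch with made-up parameters, not anything from Venema’s or Watts’ actual analyses): compare the number of break-free stations under randomly scattered (Poisson) breaks with an overdispersed model of the same mean, where breaks cluster on trouble-prone stations.

          import numpy as np

          rng = np.random.default_rng(0)
          n_stations, mean_breaks = 1218, 2.0

          # Breaks scattered independently at random across stations (Poisson).
          random_breaks = rng.poisson(mean_breaks, n_stations)

          # Same mean, but clustered on trouble-prone stations: a negative
          # binomial with a small, purely illustrative shape parameter.
          shape = 0.5
          p = shape / (shape + mean_breaks)
          clustered_breaks = rng.negative_binomial(shape, p, n_stations)

          for label, breaks in (("random", random_breaks), ("clustered", clustered_breaks)):
              print(label, "break-free stations:", int((breaks == 0).sum()))

        Random scattering leaves roughly exp(-2) x 1218, about 165 break-free stations; clustering the same total number of breaks leaves several hundred untouched. Which model fits reality is exactly the open question.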

      • Springer,

        a weather station that is perturbed at all is likely to be perturbed more than the average number of times.

        An interesting assertion. I have no idea if it is factual or not. Evidence for this assertion would convince a reviewer that it was true. Can you point at evidence for this?

        a sure sign of a shallow thinker. Or a dishonest one.

        I’d advise against ascribing malign motives. It’s a classic part of conspiracy ideation and distracts attention from any substantive point you wish to make.

        Victor is hugely more expert in this than you or I. If learning about the issues is your objective, engaging at his blog (minus the insults) would be my recommendation. You’ll get far more from him than from me or any of the denizens here.

      • David Springer

        Venema also makes the statement that neighboring stations experience about the same weather.

        That’s not necessarily true. Hills and valleys, lakes and rivers, trees or grass, all make the “weather” four feet above ground level different despite being at about the same latitude and longitude.

      • David Springer

        Germans protecting their livelihood through cheating isn’t an insult; it’s a statement of fact.

        http://www.wsj.com/articles/vw-scandal-tests-auto-loving-germany-1443217183

        Venema makes his living riding the coat tails of climate alarmism. I suggest you take that into account, dummy.

      • David Springer

        Venema and you could learn from the denizens here. This for instance just upthread:

        https://judithcurry.com/2015/12/17/watts-et-al-temperature-station-siting-matters/#comment-752429

        You both lack common sense. Maybe that isn’t something that can be learned.

      • David Springer

        There are none so blind as those who refuse to see.

        re; insults

        In Venema’s opening paragraph he advises the reader to first go read a primer that supports his thinking and links to that website. The first thing we see on said website (hotwhopper) is a mission-statement banner that reads:

        Eavesdropping on the deniosphere, its weird pseudo-science and crazy conspiracy whoppers.

        From this there’s a bit of a dichotomy in conclusion. Is Venema a passive-aggressive turdball who knew he was linking to something that insults a broad class of people who happen to disagree with his climate change co-conspirators, or is he just too dense to realize what he did?

      • Springer,

        Scattering scatological insults through your posts makes any sensible dialogue impossible.

        It is very telling that given the opportunity to interact with a bona fide expert in a field you reject that chance and instead choose to behave in such a manner.

      • I find it amusing that so many people assume that stations located near to each other should record very similar results.

        I was involved a few years ago with installing an automatic meteorological station at an airport in China. It was state of the art stuff.

        We were allotted a location 1km from their existing instruments. It took me 2 years to get them to accept the readings from our station because they were so different to their old system. Rainfall for example was out by ~50%, even after I had both the old and the new rain gauges re-calibrated at an independent laboratory in Switzerland. The only reading that matched exactly was pressure.

        That was for two sets of instruments 1km apart across an exposed runway. Too many people forget that no matter how good in theory such instruments are, we are measuring mere pinpricks.

      • Jonathan,

        In what way are your observations incompatible with the views of experts in the field and what is incorporated into homogenisation techniques?

      • David Springer

        If you need to ask how Jonathan’s experience shows that weather at neighboring stations isn’t, in Venema’s words, “about the same”, then you’re too clueless to be participating in this discussion. Are you stupid or just trolling?

      • vtg,

        The point I am making is that assuming nearby stations will record similar data is not necessarily a safe choice. I am sure that all experts in the field appreciate this.

        You would do well to remember that Victor is not the only expert around here, and even experts can disagree about fundamental assumptions. Your continual appeals to authority on his behalf reflect poorly on you.

      • Jonathan,

        I’ve not appealed to any authority- I’ve made no claims whatsoever, in fact.

        All I suggested was an opportunity to learn from an expert. I’m not sure why that is controversial.

        On nearby stations, it’s not at all clear to me what you are claiming which is different to the expert view.

        Even small station moves are well known to require adjustment.

        Perhaps you’d like to clarify?

      • I always feel a bit silly talking about min/max as if they really represented “temperatures”. Ignoring cloud and general volatility within the period of measurement is hard for me…But put that down to a lack of grounding in “climate science” and too much time spent in the paddock and scrub.

        However…

        If there is a single authority anywhere who does not understand that min/max temp and rainfall often differ critically between nearby sites then I’m surprised. If more than one such authority exists I am staggered.

        Maybe people live in places where a few kilometres do not make a difference. If so, surprise me. Where I live, the min/max temp differential between nearby stations (official town – official AP) is marked and unpredictable. The rainfall differential is enormous. (And the rainfall diff between those sites and my place? And between my place and a mate’s place, just a walk away on the other side of one low ridge? Enormous again!)

        Does this just apply to hill country off river valleys between the Pacific and Great Divide? Hmmm…

        Remember some excitement when Sydney Obs just beat the old daily max record of 1939 by reaching a searing 45.8C on Jan 18 2013? While the heat was not extraordinary away from Sydney that day (unlike 1939) the max readings from other stations around Sydney were a match for the Obs. It was brief, but it was extraordinary.

        However!

        Sydney Harbour (Wedding Cake West), which is just paddling distance from the Obs, only recorded a 34.3C on that very same day. It’s a station which usually runs cooler for obvious reasons, but on that day it ran 11.2C cooler! It wasn’t even Wedding Cake’s hottest measurement for the month.

        All of which should make one think. But will it?

      • vtg,

        The point I am making has nothing to do with resiting stations, I am addressing the idea that one can expect stations sited within a few miles to produce similar readings, and that homogenisation between such stations could be a valid technique for improving data quality. I am not saying that such homogenisation is wrong outright, but I would be very sceptical regarding claims of high accuracy and repeatability in data following such a process.

        Unfortunately you are still making the lazy assumption that I am not an expert in the practical capabilities of surface stations, and the quality of data they produce. Until a few months ago I managed a team of engineers designing, building and installing aviation meteorological systems. In particular I was responsible for obtaining operational approval from national safety regulators. Data quality was paramount in the audits we were subject to.

      • I can attest to the fact that it sometimes rains in my backyard and not in the front. And sunshine can do the same when clouds roll by. In the mountain SW, temps can easily drop 20degF on a partly sunny day when a cloud passes between me and the sun! Nice when it is warm out, but not when it is cold!!

      • Correction: the diff between Wedding Cake and the Obs was, obviously, even bigger.

        Makes life still harder for homogenisers…but due to something called CLOUD I often wonder if we should even have these conversations about “temperatures”.

        Clouds, to quote the old song, really do get in the way, don’t they?

        And so, to quote another old song, till the clouds roll by…

      • Jonathan,

        I’m not sure where I made any comment or assumption about your expertise.

        If I maligned you anywhere, please accept an apology.

      • David Springer

        @Jonathan

        Obviously the fearful verytallguy is enamored of Victor Venema and suggests no one else as an expert to “learn from” in regard to temperature series and associated problems. He accuses me of refusing to interact with bona fide experts just because I don’t want to interact on Venema’s blog. I’ve been to enough warmist blogs, which are all more or less cult worship, to know how heretics are treated by both the owners and other participants. Real Climate, Skeptical Science, And Then There’s Physics are prime examples. Censoring is rampant on them. Why should I want to interact on such sites?

        I interact with at least two bona fide experts who don’t delete comments because the content is disagreeable and don’t act like they’re members of a cult with inalterable beliefs. Judith Curry and Roy Spencer are both bona fide experts in the field, don’t delete contrarian content, and treat everyone with respect. I frequent their blogs.

        Jonathan you should give Roy Spencer’s site a try if you haven’t already. The blog comment section is quite well trafficked, and the owner is primarily responsible for the only global temperature sensing system trustworthy for the purpose of determining temperatures precise and accurate to a tenth of a degree.

      • vtg,

        I was just struck that throughout your comments you refer to experts as if they are all other people, elsewhere. Sometimes, even on the internet, they are right here ;)

      • David,

        I do occasionally read Roy Spencer’s blog. But I find I struggle just to keep up with the posts and comments here at Judy’s, so I rarely comment anywhere else. Even here I restrict technical comments to subjects I know first hand, or have had time to check for myself.

        I have never bothered to visit Victor Venema’s blog, as I seem to remember him associating with William Connolley in the past, and I consider Connolley to be somebody who values political beliefs above scientific fact. Knowing Connolley, now I’ve mentioned him he’ll probably pop in to say ‘boo!’.

      • verytrollguy protesteth: “I’ve not appealed to any authority- I’ve made no claims whatsoever, in fact.”

        very next words out of verytrollguy’s pudgy little fingers: “All I suggested was an opportunity to learn from an expert. I’m not sure why that is controversial.”

        These characters are not big on self-awareness.

      • http://www.conservapedia.com/William_M._Connolley
        William M. Connolley is a British Wikipedia editor known for his fanaticism in promoting the theory of anthropogenic global warming (AGW) and in censoring the views of critics and skeptics. He is the ringleader of the infamous global warming cabal at Wikipedia, a powerful pro-AGW group that has an iron grip on global warming-related articles.

        William M. Connolley was banned from Wikipedia for a while – and they are pretty liberal and pro-warming. He had misused his administrator privileges to further his point of view in a content dispute… Connolley’s editing on Wikipedia is widely acknowledged to be a conflict of interest.

        Part of the problem with global warming is the global warmers, if they weren’t so vicious, unethical, and dishonest their viewpoint would be more persuasive.

        The views and writings of an associate of Mr. Connolley should be considered biased and suspect.

        That was for two sets of instruments 1km apart across an exposed runway. Too many people forget that no matter how good in theory such instruments are, we are measuring mere pinpricks.

        How would you suggest that homogenization algorithms be tested and what is a successful result?

      • PA,

        I’ve never really thought about it, but in broad terms, if one followed the methodology used in aviation safety certification, one team would define the homogenisation method. The regulator would separately set the pass/fail criteria. Another set of people would carry out field trials with as large a population of co-located systems as possible. The trial period would be a minimum of 12 months. The regulator would then have the trial data analysed and decide the result.

        Something like that.

      • Wow. Meta-hominem arguments.

        Finding reasons not to pay attention to expertise assures you of ignorance. Conservapedia is a great example of this: Don’t like the facts? Simply make up your own and ignore the nasty outside world.

        Also, a suggestion that rather than assert that homogenisation is not possible or applicable, reading the relevant literature would educate you as to the evidence which supports it.

      • Because VTG demanded it: Hint number 3!

        Just because I didn’t fall prostrate at the feet of your Climate Hero, it doesn’t mean I didn’t understand his argument.

        I don’t actually know that much about Double V. I’ve heard of him before but never really come across anything he’s written. (Unless he writes science fiction. That name sounds familiar) But he linked to Hotwhopper, and I DO know Sou. I know her abject hatred of anyone who disagrees with her opinion, I know how she twists the truth until it looks like a balloon animal, I know how she considers it a ‘debunking’ to point out that some ‘Climate Scientist’ (anyone saying what Sou wants to hear) got a different answer. Been there, watched the circus, wrote posts at The Blackboard about it.

        But just because I don’t care much for Victor’s choice in webcomics, that doesn’t mean I didn’t read his article. I did read it. I understood it. And I spotted the logic flaws in it immediately. So no, not impressed by your Climate Hero.

        Now it took me a while to get back to you, and it seems others have taken up the good fight to help you take a hint. Alas, it appears it was all in vain. You have your chosen expert and nothing we mere denizens say could ever make you question his proclamations. Not that you’re promoting an argument from authority, oh no. Not with all the skeptics learning about logic fallacies from a certain Lord. (Even if some only apply them to the other side)

        Now me, I’m definitely not a Climate Expert. And while you’re welcome to disregard anything I have to say on those grounds, please don’t try to lecture me on what or who I’m, in your mind, unqualified to question. Just because I haven’t published a paper on the changes in migration pattern of African Swallows brought on by Global Warming, that doesn’t mean I can’t spot a glaring hole in some climatist’s argument, any more than I need to be a better economist than Karl Marx to point out that Communism is a less efficient system than Capitalism.

        So here, even if you’re not going to take them. A free additional hint, just for you: Don’t assume someone is less intelligent than you just because they have a different opinion. Especially when it’s an opinion you need an ‘expert’ to defend for you.

      • Jonathan Abbott | December 19, 2015 at 1:15 pm |
        PA,

        I’ve never really thought about it, but in broad terms, if one followed the methodology used in aviation safety certification, one team would define the homogenisation method. The regulator would separately set the pass/fail criteria. Another set of people would carry out field trials with as large a population of co-located systems as possible. The trial period would be a minimum of 12 months. The regulator would then have the trial data analysed and decide the result.

        Something like that.

        That sounds reasonable.

        NOAA and NASA must follow some similarly sensible procedure – because they get billions in budget and could cause trillions in costs depending on how their data is used.

        Perhaps someone knows how NOAA and NASA certify their algorithm changes? And who the independent second party is that tests them? And who the independent third party is that approves them?

      • Schitz,

        I made no comment as to whether Victor was right or not, I merely pointed to his article as something people interested in the issues would want to read. An opportunity to interact with an expert in the field (Victor is chair of the Task Team on Homogenization of the Commission for Climatology of the World Meteorological Organization) would surely be welcomed by anyone wanting to learn more of the topic?

        The heaping of opprobrium upon me for making such an innocuous suggestion has been quite remarkable, wouldn’t you say?

      • verytallguy: “The heaping of opprobrium upon me for making such an innocuous suggestion has been quite remarkable, wouldn’t you say?”

        NO!

        Self-awareness isn’t your strong point, is it?

      • David Springer

        “The heaping of opprobrium upon me for making such an innocuous suggestion has been quite remarkable, wouldn’t you say?”

        Not at all remarkable. Your anonymous cowardly climate troll reputation precedes you.

        The suggestion wasn’t innocuous. You refuse to acknowledge that Venema’s opening line linked to hotwhopper which is among the most vile of the warmunist snake pits.

        There are many other experts in the field of temperature sensing. If not for your tunnel vision and trollish behavior you might have suggested a second source.

      • David Springer

        Watts Gores Sacred Cow; Climate Cult Reacts Predictably

        I don’t seek out interactions with cultists on their own turf. It’s really just that simple.

      • verytallguy,

        Richard Feynman said something to the effect that science was belief in the ignorance of experts.

        You propose someone as an expert. I think Feynman was quite a lot smarter than your expert.

        Tell me why I should not believe Feynman, if you wish.

        Cheers.

      • Springer,

        Your continued inability to interact without trading insults is noted.

      • David Springer

        It takes two to trade.

        Freudian slip noted.

      • So, do I need to give you any more hints? Or do you think you can figure the rest out on your own?

        I do believe he’s got it.

        BTW, J N-G did an apples-to-apples set of pairwise comparisons and got a very similar result to mine, although I chopped the major flags and he did not. Think of that as the first step.

        It was the dratted VeeV who got me started on homegrown homog. I played with it until Excel yelled ENOUGH, already. He is the fiend who led me astray. Homogenize just once and you are a homogenizer for the rest of your life. I feel so dirty. Recurse You, Red Baron!

      • I’ve never really thought about it, but in broad terms, if one followed the methodology used in aviation safety certification, one team would define the homogenisation method. The regulator would seperately set the pass/fail criteria. Another set of people would carry out field trials with as large a population of co-located systems as possible.

        That sounds much like my role in the homogenization community. For my most influential paper, I generated a dataset with inhomogeneities (known only to me) and my colleagues would homogenize this dataset. Then I compared the homogenized data with the data before I put in the inhomogeneities. Conclusion: statistical homogenization using neighbouring stations improves temperature trend estimates.

        http://variable-variability.blogspot.com/2012/01/new-article-benchmarking-homogenization.html
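
        A minimal sketch of that blind-benchmark loop (synthetic data throughout, and a deliberately crude stand-in for the homogenization step; nothing below is the actual benchmark code):

          import numpy as np

          rng = np.random.default_rng(1)
          years = np.arange(1979, 2009)
          true_trend = 0.02  # degC/yr, made up for the benchmark
          clean = true_trend * (years - years[0]) + rng.normal(0, 0.3, years.size)

          # Insert one break, known only to the benchmark maker.
          perturbed = clean + np.where(years >= 1994, -0.8, 0.0)

          # Crude stand-in homogenization: locate the largest mean shift in the
          # difference series against a (here idealized, noise-free) reference.
          reference = true_trend * (years - years[0])
          diff = perturbed - reference
          shifts = [abs(diff[:k].mean() - diff[k:].mean()) for k in range(5, years.size - 5)]
          k = int(np.argmax(shifts)) + 5
          homogenized = perturbed.copy()
          homogenized[k:] += diff[:k].mean() - diff[k:].mean()

          for label, series in (("perturbed", perturbed), ("homogenized", homogenized)):
              print(label, "trend:", round(np.polyfit(years, series, 1)[0], 3), "degC/yr")

        Scoring is then just a comparison of the recovered trends against the 0.02 degC/yr that went in.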

        The homogenization of precipitation is more difficult. As already mentioned above, neighbouring stations are not correlated very well. Instead of using a difference series, the homogenization is typically performed on a ratio series.
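
        For illustration only (made-up numbers, no particular package): temperature pairs are typically compared through a difference series, where an additive break appears as a jump, while precipitation pairs are compared through a ratio series, where a multiplicative error appears as a level shift.

          import numpy as np

          temp_a = np.array([10.1, 11.3, 9.8, 12.0])    # degC, made up
          temp_b = np.array([9.9, 11.0, 9.5, 11.8])
          print(temp_a - temp_b)     # difference series for temperature

          rain_a = np.array([50.0, 120.0, 5.0, 80.0])   # mm, made up
          rain_b = np.array([60.0, 100.0, 8.0, 70.0])
          print(rain_a / rain_b)     # ratio series for precipitation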

        Naturally I understand that people who have made opposing mitigation a core part of their identity do not like the tone of HotWhopper, but the science is normally accurate as far as I can judge.

        Accurate science is something you cannot say of WUWT and the tone of WUWT is certainly not better. If I am associated with William Connolley, it might be for the blog post making toilet jokes about WC and Enema. Stay classy.

      • Deep thinker Springer, I agree the situation may be more complex than independent random breaks. Reality normally is more complex.

        That is why I carefully wrote: “likely only 12.6% of the stations do not have a break (154 stations). According to Watts et al. 410 of 1218 stations have no break. 256 stations (more than half their “unperturbed” dataset) thus likely have a break that Watts et al. did not find.”

        Otherwise I would have written: “thus only …” Watts et al. (2015) are naturally free to show that there is a case where their numbers are reasonable, but 256 stations with missed breaks is not a subtle effect.

      • David Springer

        Oh… only “likely”. Not very likely or extremely likely. Thanks for using such precise terms. Not. Krauts are such weasels. I’m still laughing over Volkswagen rigging its cars to cheat in emission tests. Is that how you managed to get a PhD?

  23. Here is the real travesty.

    Why is it that a basic inventory of the siting quality of US temperature locations has been done by an outsider in a crowdsourcing project that includes volunteers?

    Keep in mind that the global surface temperature record is just about the single most important time series in climate research, so making sure that it meets the highest data quality standards, which includes constructing an inventory of siting quality, should be top priority of the funding agencies and governing bodies, and is of interest to everybody.

    Not so much, it has turned out.

    The scientific community leaves that task to a few well-meaning men and women, while funding agencies and governing bodies would rather spend billions of dollars on other topics, like computer modelling, than spend a few million on data quality.

    And that is just plain stupid, I have no other words for it.

    • Yup. We are spending about $22 billion a year on climate change.

      And it isn’t really clear the US is getting warmer.

      This is an actual USCRN site:

      We should recreate the USCRN correctly. The enabling legislation should require by law that a design/testing/calibration standard for the climate sensor be developed. Further, the legislation should make it a felony to deploy a non-compliant sensor. It would cost $50-100 million to put standard sensor stations on 1/2 mile × 1/2 mile plots of land in about the same numbers as the USCRN network. All vestiges of humanity would be removed from the plot by law. The sensor would be a guaranteed 1/4 mile from any human influence. For $200-400 million you could buy 1 square mile plots and be a guaranteed 1/2 mile away from any human influence.

      This would, by law, be the US climate network and the official measure of global warming. There is no need for adjustments, homogenizing, or anything. Just read it and weep.

      The global warmers claim this won’t make any difference. If the global warmers are right they should be the most enthusiastic backers of the plan. This would pull the teeth of the “deniers” best arguments. Spending less than half a percent (1/4 sq mi plots) to a little over 1% (1 sq mi plots) of a single year’s climate change budget to deploy a real climate change measuring network is pretty cheap for honest data.

      • PA..

        HAHA… that’s not the only “gold standard” CRN site that has issues.

        I was saving that for my next surprise.

      • You are wasting all the space that Phil Jones saved. What’s up now?

      • And it isn’t really clear the US is getting warmer.

        That’s why it’s good to have the MSU everyone grouses about.

        UAH LT data over CONUS shows a striking correlation with BEST CONUS surface temperature obs. The extents are different, but the peaks and troughs correlate very well.

        The trend for UAH-LT CONUS (Dec 1978 through Nov 2014) is 0.19C per decade, which is close to the AGU-announced 0.204C per decade for the good stations (1979 through 2008).
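
        For readers who want to see the mechanics behind a per-decade figure like that (a hedged sketch on synthetic anomalies, not the actual UAH or BEST series): fit an ordinary least-squares slope to monthly data and scale it to degrees per decade.

          import numpy as np

          rng = np.random.default_rng(2)
          t = np.arange(1979, 2015, 1 / 12)   # monthly time axis in years
          anomalies = 0.019 * (t - t[0]) + rng.normal(0, 0.25, t.size)  # synthetic

          slope_per_year = np.polyfit(t, anomalies, 1)[0]
          print(round(10 * slope_per_year, 2), "degC per decade")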

      • Except UAH shows no significant warming overall from 1978 to 1997 or from 2000 to today. The only warming is a step up coincident with the big ENSO. So if there actually is a warming trend in the LT, it is just a heat shift within the atmosphere, not GHG warming. GHGs cannot warm the surface and LT without warming the T.

      • David Springer

        Mosher could surprise us all by not acting like a little bitch just because Watts no longer trusts his duplicitous ass with proprietary data.

      • Yeah, like Watts is going to give his data to Mosh when Muller is his boss.

    • Why is it that the a basic inventory of siting quality of US temperature locations has been done by an outsider in a crowd sourcing project that includes volunteers?

      Oh, NOAA did their bit: They affected the greatest fear that our thuggish volunteers would harass their curators. So they removed from their metadata all the names and addresses where the stations were located.

      (That alone added at least a year to our efforts.)

      I managed to locate and interview a few dozen (thanks mainly to Mac’s partials). They absolutely loved talking about their stations. I was on the phone with some of them for over an hour, heard some great stories. I certainly never dissed their stations or said they had a bad rating. All I did was get the info I needed and then thank them for their patriotic civic-mindedness. And listen to their fascinating tales.

      I also made a point of telling them their stations were a select elite — the USHCN, pride of the fleet. None of them even knew. Not one. A Weather Service station out west didn’t even know. “We’re HCN? No one told us.” Another curator said (with renewed pride), “I’ll never miss a reading again.” NOAA should inform their USHCN volunteers of their stations’ rare status.

  24. I’m confused. The WUWT post says:

    When the journal article publishes, we’ll make all of the data, code, and methods available so that the study is entirely replicable. We feel this is very important, even if it allows unscrupulous types to launch “creative” attacks via journal publications, blog posts, and comments. When the data and paper is available, we’ll welcome real and well-founded criticism.

    Unless I’m misunderstanding the situation in a rather drastic way, they just published a press release and encouraged everyone to promote their results while saying nobody gets to look at any paper, data or analysis supporting those results until some unspecified and unknowable future date. That would mean this is just science by press release, something skeptics have criticized for years.

    This is wrong. You shouldn’t just get to say, “Hey guys, I just proved X” while not showing anything at all which actually supports the idea X is true and have thousands of people believe you’ve really proved X just because you said you did. If you’re not going to share any sort of data, analysis or code, you shouldn’t be publishing press releases.

    “Hey guys, we just got some amazing results. Tell everyone about them! No, you aren’t allowed do anything to verify our results are true. Why would we want to let people examine our work when we can just tell them what our results are?”

    • Watts’s defense is:

      But, the feedback we got from that effort [a July 2012 draft paper] was invaluable. We hope this pre-release today will also provide valuable criticism.

      • I wonder if he realizes that in no way justifies putting out what he called an important press release. If he had just wanted to update people on his project and perhaps get feedback, he could have posted about it. He could have been up front about the fact that he wasn’t releasing anything for people to actually look at or examine, and pointed out that that means people shouldn’t blindly accept what he says about his results.

        Of course, his previous pre-release actually involved releasing a draft of his paper. This pre-release involves a release of… I don’t even know what. I’m not sure how much feedback he can really expect with how little he’s actually provided. It seems more like an effort at making headlines.

      • > We hope this pre-release today will also provide valuable criticism.

        Brandon just gave one.

        Take note, Willard Tony.

      • Journals won’t accept papers whose contents have all been pre-released, right? My guess is that Watts hasn’t been able to get a journal to accept the paper yet (not necessarily a negative if it’s a skeptical paper), so that’s why he’s keeping the data under wraps. (He should maybe have said this himself, if that’s the case.)

      • rogerknights, it’s not even clear they’ve submitted the paper to any journal yet. The post says they are submitting it to a journal which makes it sound like it’s something they are going to do, not something they have done.

        In any event, he should have waited until the paper was published to issue a press release. Running to the media and trying to get headlines while not actually publishing anything to support what you say is… wrong. To put it kindly.

      • Brandon S? (@Corpus_no_Logos) | December 18, 2015 at 10:43 am |

        In any event, he should have waited until the paper was published to issue a press release. Running to the media and trying to get headlines while not actually publishing anything to support what you say is… wrong. To put it kindly.

        This is silly. The climategate files make it abundantly clear that warmers engage in gatekeeping.

        There is no reason not to do prepublicity on skeptical studies. If the warmers want the data and methods so they can poke holes – they have to let the study get published first.

        The study wasn’t funded by the government so there is no FOIA access right to any of the study, unlike government funded studies where they take delight in frustrating FOIA requests. Recipients of government grants who ignore FOIA requests should be permanently debarred – even if they put athletic equipment in their graphs.

      • The press release is related to the AGU presentation. The press release is here:

        https://fallmeeting.agu.org/2015/press-item/new-study-of-noaas-u-s-climate-network-shows-a-lower-30-year-temperature-trend-when-high-quality-temperature-stations-unperturbed-by-urbanization-are-considered/

        If you monkeys have a problem with the press release, you should complain to the AGU.

      • > Journals won’t accept papers whose contents have all been pre-released, right?

        The GWPF might not mind, and don’t forget that the GWPF set new standards in peer review:

        The review of Golkany’s paper was even more rigorous than the peer review from most journals […]

        https://www.documentcloud.org/documents/2642410-Email-Chain-Happer-O-Keefe-and-Donors-Trust.html#document/p6/a265727

      • Your descent into incoherent irrelevance is almost complete, willy. Pathetic. Where’s your boss kenny? Is he still mad?

      • Do you know if this research has been funded by the Heartland Institute, Don Don?

        Meanwhile, note that Willard Tony’s post contains this sentence:

        We do allow for […] one adjustment in the data, and this is only because it is based on physical observations and it is a truly needed adjustment.

        and that the leading sentence from the press release contains “do not require adjustments to the data.”

        Just imagine if the Editor found out that the IPCC said something like that.

        Willard Tony, take note.

      • I get the impression dinky dimmy Don Don must be pals with some of the WUWT cabal, as he holds them to a very relaxed standard as opposed to his normal born-hard kick-arse-and-take-names persona. My guess is that he and Chuk the Mod have a Koffee Klatch over in Belmont.

      • Your impression is faulty, little horse grabber. Mr. Tony has banned my humble and gracious self from WUWT for pointing out the flaws in little willis’ nasty character. I just don’t see what all the angst is about. It’s a freaking AGU poster.

      • Why do you say the A word, Don Don?

        NG got everyone covered.

        Let’s enjoy Willard Tony’s moment of “science by press release” as much as he does.

        ‘Tis the season, after all.

        Grab some more egg nog.

        No, put that brandy and that rhum down.

      • Don Monfort: “Your descent into incoherent irrelevance…”

        Don’t you mean “ascent” ?

    • First they need to correct the temperature record by adjusting it.

      John N-G did the statistics with the help of some veterinary students at Aggie State University, so that odor that has you confused actually is BS. He’s a climate scientist and four out of two climate scientists do almost completely perfect statistical work, as is well known, so there is no problem here in jumping the gun before peer review.

      (Texans who did not attend Texas A&M love to make fun of the place because it’s as hicksville as it gets, but it is actually an excellent school and J N-G is an excellent state climatologist).

      • Face facts, everyone can see except our betters.

        “I always thought it was other schools, not our schools,” Manweller said. “But then The Washington Post did an expose on Washington State University where they had uncovered all the syllabi saying, ‘if you used these words, if you write these words, you will fail or be punished in some way’.”

        Yeah, it’s for our kids… you bet.

      • JCH:

        Your comments of late have been more than 100% insults in the same statistical manner by which recent global warming has been more than 100% anthropogenic.

        In addition to being an excellent climatologist, John Nielsen-Gammon has 3 degrees from MIT. So you might as well toss in some nerd insults while you’re at it.

      • (Texans who did not attend Texas A&M love to make fun of the place because it’s as hicksville as it gets, but it is actually an excellent school and J N-G is an excellent state climatologist).

      • Solid, liquid or still more gas?

      • Yes, I saw your backhanded compliment the first time, which is why I dropped the “state” limitation you insist on repeating.

      • I don’t know how it is a limitation. How would it be possible to be an excellent state climatologist, which is his job title, and not also be an excellent climatologist? I am very familiar with his work. I’m a fan of his. I followed his blog in the Houston Chronicle. He concerns himself mostly with providing valuable and interesting services to Texans. I was thinking there could be a warmest year with an ENSO neutral year, and within days he wrote a blog post with the same idea, and then it happened.

    • You are just babbling and sinking, willy. Merry Christmas!

      • Seems you can’t walk straight in the threads, Don Don. Hope you did not restart drinking that early. ’Tis the season when it’s the hardest.

    • “It’s a freaking AGU poster”
      Thanks for bringing me back to rational misanthropy

    • That’s the way it works in the real world. People submit posters and present talks. If they have a blog, they are free to advertise that they are presenting. If a group feels marginalized, would they not logically promote their work a bit more than others in the mainstream? Most people analyze their data (some of which may have shown up in several years’ worth of talks and posters), make the figures, write up the paper, submit for publication, and then, when accepted, they can (but don’t always) make their data and programs available. This paper is being handled the same way most papers are, except here they are saying they will make EVERYTHING available upon publication, which is rare. Most people, myself included, wait for someone to ask for it, which rarely happens in my field.

      • Thanks. We will. Sooner rather than later. it’s been a long haul.

        Did I like the publicity? Sure. Very modest pay for all our work. And it got me the chops to get the serious response I needed to make improvements (all of which worked against our hypothesis). That was the real gold speck. Just as Anthony posted back in 2012. I also got to field a ton of questions and read and answer a whole slew of objections. I always learn more about the battlefield from my opponents than from my allies. It was invaluable, indispensable.

        Besides, we are about to submit for review. And I expect a very hairy eyeball. So when the review boyz shoot us them snappy questions, I’d just as soon have some snappy answers.

  25. By the way, has anyone else had trouble commenting at WUWT? I tried submitting two comments, and they both just disappeared. They didn’t show up as awaiting moderation or anything. I was logged into the same account I use to comment here, so I can’t imagine the problem is ensuring I’m not a spam bot, and my comments didn’t include any links/language which would have seemed to trigger any filters.

    I’m wondering if there’s maybe something inadvertently catching innocent comments there. I know sometimes spam filters can act up.

    • I had trouble with the appearance of the site, and with Reply boxes popping up in unexpected places. But I re-launched my browser and the problem’s gone away. But maybe that’s a coincidence, and there was something shonky going on there. Try posting again.

      (I feel for you. A long comment of mine upthread is in moderation because I used too many exclamation points!)

      • David Springer

        How many is too many?????????????????????????????????????????

        Enquiring minds want to know!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

      • David Springer

        How many is too many?????????????????????????????????????????

        Enquiring minds want to know!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

        ———————————————————————

        The number above didn’t land it in moderation!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

      • I guess I was wrong that the exclamation points were the cause of the delay in moderation. It’s out of moderation now, at https://judithcurry.com/2015/12/17/watts-et-al-temperature-station-siting-matters/#comment-752233

    • Maybe you got lost; I found them!!

  26. Talking about min/max temps? Please don’t call them just temps. Drives me nuts. Completely different things. Hot day can have a low max, cool day can have a high max. Max refers to a moment in a day. You can have cloud cooling max and boosting min not just for short periods but for years. (More cloud about Eastern Oz in e.g. the 1950s or 1970s than e.g. the 1930s or 1990s. Bound to mess with min/max, right?)

    Don’t make me change my moniker to And Then There’s Cloud.

    – ATTC (just a warning this time)
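
    A toy sinusoidal day makes ATTC’s point concrete (an idealization with made-up numbers, not a physical model): damping the diurnal cycle the way cloud does lowers the max and raises the min while leaving the daily mean almost untouched.

      import numpy as np

      hours = np.arange(24)
      mean_t, amplitude = 20.0, 8.0
      diurnal = np.sin((hours - 9) * np.pi / 12)

      clear = mean_t + amplitude * diurnal
      cloudy = mean_t + 0.4 * amplitude * diurnal   # cloud damps the cycle

      for label, temp in (("clear", clear), ("cloudy", cloudy)):
          print(f"{label}: min {temp.min():.1f}, max {temp.max():.1f}, mean {temp.mean():.1f}")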

  27. Steven Mosher and Victor Venema both point out that some of the stations that were selected for this study may have been disturbed even though there aren’t any records of them having been so. Disturbances of any sort can introduce discontinuities that are normally dealt with through homogenization (or so I understand). But the authors are distrustful of homogenization, since they fear that this process can contaminate well sited stations with a warming bias that allegedly afflicts the worse sited stations. Whatever the merit of this worry, would it not be possible to remove the discontinuities potentially caused by undocumented disturbances by homogenizing only the well-sited stations among themselves? This procedure ought to satisfy everyone, unless I am missing something.

    • It would theoretically be possible to only use the “well-sited” stations for homogenization. The reason this is normally not done is that the quality of the end product depends on how well correlated the neighboring stations are. If the neighboring station experienced almost the same weather, you will see jumps in their difference signal much more easily than when the weather is more different.

      Watts et al. (2015) does apply MMTS corrections. They were computed by comparing the stations with these transitions to their neighbors. Thus it seems as if Watts et al. (2015) accepts the homogenization principle sometimes.

      A way to avoid “contaminat[ing] well sited stations with a warming bias” would be to only detect breaks to make a subset without “perturbations”. There is no need to correct the data. The only disadvantage of this method would be that you remove a large part of the stations from your analysis.
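
      A hedged sketch of that detect-but-don’t-correct screening (illustrative noise levels and threshold, not an operational algorithm): flag any station whose difference series against a neighbour composite contains a large mean shift, and drop it from the subset instead of adjusting it.

        import numpy as np

        def has_break(station, composite, threshold=0.5, margin=5):
            """True if the station-minus-composite series shows a large mean shift."""
            diff = station - composite
            shifts = [abs(diff[:k].mean() - diff[k:].mean())
                      for k in range(margin, diff.size - margin)]
            return max(shifts) > threshold

        rng = np.random.default_rng(3)
        composite = rng.normal(0, 0.2, 30)               # neighbour average, 30 years
        good = composite + rng.normal(0, 0.1, 30)        # well correlated, no break
        bad = good + np.where(np.arange(30) >= 18, 0.9, 0.0)  # undocumented move

        print(has_break(good, composite), has_break(bad, composite))
        # Flagged stations are excluded; the survivors need no correction at all.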

      • Actually, in our JGR paper on UHI we reran homogenization using only rural stations to homogenize, for four different definitions of urbanity. Wasn’t too hard to do, and it helped us demonstrate that the adjustments weren’t “spreading” urban warming to rural stations. That said, we could use the whole co-op network for that purpose; it might be a bit harder with only 400 or so HCN stations, as you would probably miss some issues.

        Regarding the Watts paper, it will be interesting to look at his results in more depth when the data is available. Until then it would be premature to speculate.

      • Thanks Zeke, I’m glad to hear someone already thought of doing that. It will be nice to see if it can be done again with this new set of “unperturbed” stations from Watts et al. 2015.

      • …I meant, for the subset of the “unperturbed” stations that are deemed by them to be well-sited.

      • Yes, yes, yes (but the devil is in the details).

        I think missing metadata is probably the biggest problem.

    • One thing that skeptics don’t get is that ALL RECORDS HAVE ERRORS.

      records of temperature
      records of station moves and changes.

      So I kinda have to laugh when folks say they went over a B-91 and that the record “proved” something, or that lack of documentation “proved” something.

      As for the siting criteria CRN1-5

      Ask about the field test performed to establish that specification.

      • So, when are warmunistas going to stop saying that any of their favored temp trends is proof that it’s co2 that done it, and that urgent and drastic action is required to decarbonize and impoverish once wealthy nations to save the planet?

      • One thing that skeptics don’t get is that ALL RECORDS HAVE ERRORS.

        And evidently, some records have more errors than others, something a data series that referred to itself as BEST should have been telling us, instead of waiting on a third party.

      • David Springer

        Steven Mosher | December 18, 2015 at 9:30 am | Reply

        “One thing that skeptics don’t get is that ALL RECORDS HAVE ERRORS.”

        That’s the thing that “skeptics” get better than anyone else, poseur boy. The consensus (including milquetoast luke-warmers) doesn’t get the fact that older land surface records are so flawed as to be unacceptable for the task of establishing global average temperature trends with tenth-degree accuracy. Duh.

      • David

        You are right.

        I have looked at historic records more than most. We can accept their generality in terms of broad bands, such as very cold, cold, mild, warm, very warm etc.

        However, if we want scientific accuracy to tenths of a degree, it can probably only be obtained from the automatic weather stations from the 1980s onwards, always assuming they were sited and maintained correctly.

        I certainly wouldn’t base public policy on the accuracy of a global land record to 1880 or a global ocean record to 1860.

        Glacier records are also pretty good in their generality, as we can go back some 1 or 2 thousand years without harming a single tree.

        tonyb

      • Actually, Tony, even if the stations are perfectly accurate, the statistical methodology is not accurate to a tenth of a degree. Statistical sampling theory is based on probability theory, so the first rule is that the sample must be a random sample of the population, period. The corollary is that no valid statistical inference can be drawn from a convenience sample, which these surface samples most certainly are. The surface sampling system needed to produce accurate statistics has yet to be built.
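
        A small synthetic demonstration of that point (a made-up field, not real station data): more averaging never removes the bias of a convenience sample, while a modest random sample lands near the true mean.

          import numpy as np

          rng = np.random.default_rng(4)
          field = rng.normal(0.5, 0.2, 10_000)   # imagined trend field, degC/decade
          field[:2_000] += 0.3                   # settled areas trend warmer, say

          random_sample = rng.choice(field, 100, replace=False)
          convenience = rng.choice(field[:2_000], 1_000, replace=False)  # big but biased

          print("true", round(field.mean(), 3),
                "random", round(random_sample.mean(), 3),
                "convenience", round(convenience.mean(), 3))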

      • A lot of people are going to be upset when they learn the millihair scale doesn’t exist.

        “One thing that skeptics don’t get is that ALL RECORDS HAVE ERRORS.” Behind this perverse comment lies Mosh’s perverse belief that “skeptics” of any hypothesis have a duty to provide a successful counter-hypothesis. To do so, he supposes, they must employ some kind of temperature record. Said record will have flaws, so the skeptic’s hypothesis will be no better than the one he is challenging.

        One day he’ll grasp the concept of disconfirmation, but until then…

      • David Springer

        “One day he’ll grasp the concept of disconfirmation”

        Don’t bet on it.

      • “Behind this perverse comment lies Mosh’s perverse belief that “skeptics” of any hypothesis have a duty to provide a successful counter-hypothesis. To do so, he supposes, they must employ some kind of temperature record. Said record will have flaws, so the skeptic’s hypothesis will be no better than the one he is challenging.

        One day he’ll grasp the concept of disconfirmation, but until then…”

        1. There is no intellectual requirement to produce a counter theory.
        2. Pragmatically speaking, you lose if you don’t.
        3. The end goal is to produce a better explanation. Mere criticism loses.

      • 1. There is no intellectual requirement to produce a counter theory.

        Fine. This is the second coming of the MWP. Barley was easy to grow during the MWP on Greenland and the sea level was 6 inches higher.

        When we get to the point where the current time is indisputably warmer than the MWP, the sea level is just as high, and Greenland is having bounteous barley harvests, we can revisit the “humans make it warmer” theory. Until then there isn’t even a potential problem to address.

      • PA,

        Bugger Greenland. I’m waiting for Antarctica to become ice free and fertile again, as it was. Maybe the permafrost in the North will unfreeze, and large grazing animals will repopulate the areas.

        The Golden Age awaits! More CO2 is what we need – for plant food, of course – it’s not worth a cracker for warming anything!

        Wotcha reckon?

        Cheers.

      • Bugger Greenland.

        Too large a target for me – I will leave that in other people’s capable hands.

        On other subjects. Now that I look at the leap second adjustments I’m honestly worried we are slipping back into the ice age.

        https://en.wikipedia.org/wiki/Energy_subsidies

        I’m not even sure renewable-energy-level subsidies for fossil fuels (25+ times current fossil fuel subsidies) can keep us out of an ice age. But 5 times higher fossil fuel subsidies are easily justified, and we can cut renewable subsidies by 80-90% to bring them to parity (5 up x 5 down = 25).

        Just the additional food it will bring us and the potential to stave off future starvation would more than justify the higher fossil fuel subsidies. Averting an ice age would just be a lucky fringe benefit.

        I am concerned that global warmers deliberately want to starve people and bring on an ice age. I’m not sure what their disturbed and twisted reasoning is, just that it is disturbed and twisted.

      • PA.

        Skeptics lost. Even with Republican “deniers” running the show, we ended up with billions of subsidies for renewable energy that “solves” a problem that you guys argue doesn’t exist.

        That is pretty funny.

      • David Springer

        Only a warmist trying to save face would say “skeptics lost”. We won and won big. The 2016 budget includes a 5-year extension and gradual phase out for tax credits for wind and solar. Republicans voted for that in trade for Democrats allowing an immediate end to a 40-year ban on crude oil exports.

        Warmists want TRILLIONS, Mosher. Instead they got some measly tax credits on wind and solar plus a glut of US crude oil on the world market that will keep supply up, price down near $35/bbl and work to foil any OPEC plans to run crude price back up to $75/bbl+.

        How that can be spun into a loss for skeptics is beyond me. I’d have voted for it in a heartbeat. I’d have voted for it without lifting the ban on US crude oil exports. That’s because wind and solar have a place in the grand scheme of things. Not a huge place but a place nonetheless. Fossil fuels won’t last forever. I couldn’t care less about CO2 emission or global warming. I care about running out of finite natural resources which necessarily includes fossil fuel.

        Write that down.

        Skeptics lost. Even with Republican “deniers” running the show, we ended up with billions of subsidies for renewable energy that “solves” a problem that you guys argue doesn’t exist.

        Skeptics won: subsidies for solar and wind are tiny compared to what the alarmists want(ed) to do (as Springer mentions above). Their impact on the economy, even if they don’t work, will be minimal.

        BEST won: opening LNG exports will provide huge support for development of gas as a “bridge”.

        Proponents of solar won: if solar PV continues its exponential decline in cost, this will be enough to push it over the hump. Similar for wind, although I’m skeptical about its scalability.

        The only real losers are the socialists who wanted to use “global warming” as a stalking horse for their own agenda. Everybody these days is saying that capitalism can solve the problem.

        And skepticism certainly played a part in that: the uncertainty about the magnitude of the problem, and whether it even exists, certainly (IMO) influenced people’s willingness to impose the known problems/risks of giving up “free-market” capitalism.

    • would it not possible to remove the discontinuities potentially caused by undocumented disturbances through only homogenizing the well-sited stations among themselves? This procedure ought to satisfy everyone, unless I am missing something.

      You do not seem to be missing anything I can see. I have been urging the VeeV to do just that. Save the GHCN. Be a hero. Win a place of dishonor in the Deniers’ Hall of Shame.

      Of course, with the GHCN, he might have to infer some of that metadata. (Ick.) State of the wicket is not good.

  28. Trend differences are not found during the 1999- 2008 sub-period of relatively stable temperatures, suggesting that the observed differences are caused by a physical mechanism that is directly or indirectly caused by changing temperatures.

    But temperatures are relatively stable over 1980-1997, which is when the discrepancy opens up, so this suggestion doesn’t seem to have any merit.

    • That paragraph is an error.

      To clarify:

      There is cooling from 1999 – 2008. Poorly sited stations cool faster.

      There is an essentially flat trend from 2005 – 2014. COOP and CRN show no significant divergence (seeing as how there is no trend to exaggerate).

  29. Well,

    1. Though significantly less, ~2C per century is still warming.

    2. People take veracity of observations for granted:
    “Everyone believes the measurements, except those who take them.
    No one believes the models, except those who make them.”

    3. Homogenized crap is still crap.

    4. Because of agricultural concerns, the US has one of the oldest histories of meteorological observations. Older doesn’t necessarily mean better, but I would have to think the problems are even worse in the rest of the world.

    • TE writes: “4. Because of agricultural concerns, the US has one of the oldest histories of meteorological observations.”

      LOL. American parochialism is so quaint.

      • You might have a point, but you don’t:

      • Keyword: “oldest”

        American parochialism is so quaint.

        Many non-American climate records go back *beyond* 150 years. Maybe try a new graph. Perhaps one that goes back 400 or more years.

      • David Springer

        Stupidity isn’t quaint. The oldest city in the US, St. Augustine, Florida, has been continuously occupied for 450 years. Several others have been occupied over 400 years and very many over 350 years.

        Hopefully the O’Neills in Wisconsin read this and become a tiny bit less stupid. One can only hope.

      • Ah, the new nations stake their claim. There are cities which have been in continuous occupation slightly longer than that elsewhere in the world.

      • Indeed, the city nearest me has been continuously occupied for around 1000 years, and that’s young compared to many others

      • David Springer

        It’s not how long the cities have been there but what those cities have accomplished. European superiority is so quaint. Any of you boys have cities whose citizens drove golf carts on the moon? LOL

      • “You might have a point, but you don’t:”

        Have you noticed how badly undersampled, spatially, large chunks of the world are? Shall we just estimate/model the rest?

      • RichardLH: “Have you noticed how badly undersampled, spatially, large chunks of the world are? Shall we just estimate/model the rest?”

        Don’t worry, Richard.

        REAL climate “scientists” don’t have any use for data anyway.

        “The data doesn’t matter. We’re not basing our recommendations on the data. We’re basing them on the climate models.”

        ~ Prof. Chris Folland ~ (Hadley Centre for Climate Prediction and Research)

        What “scientist” in their right mind would bother with the readings of $10 thermometers when they can use $100,000,000 computer-game climate models to just make stuff up?

      • “It’s not how long the cities have been there but what those cities have accomplished. European superiority is so quaint.”

        And there’s me thinking CET was a short record :-)

      • Climate scientists model the global temperature to compare to the models.

        They don’t say that is what they do, but they do it anyway.

    • Yes (though I am a bit more lukewarm than that).

      Yes (but I bore that in mind and tried to be good).

      Yes (but — if properly applied — it is arguably better crap).

      Yes (metadata: crap). But if it must be inferred (ick), at least do it right, and include new factors as they arise. It couldn’t be any mushier than it is, anyway. Welcome to homogland, where every datapoint is an outlier (but some are more equal than others).

      We shouldn’t be in a position where we need homog to record temperature going forward (yet we are). But we can’t redo the past. Missing metadata is missing metadata. Raw data won’t do. Just won’t. Some sort of inference is necessitated.

    • Modelling the 3D Temperature Field from the point sampled data in order to compare it to the models allows errors at both ends of the exercise.

  30. There’s also cooling from the output of heat pumps.

  31. The problem with people using straight-line ‘trends’ of any climate series is that it makes me want to utter the rather dry observation

    “The data capture window available does not support the bandwidth required to get to that frequency”

    • Linear trends are climate porn. Like Shaw, we are merely haggling over a price. And the going rate is 30 years. But there’s a rumor Madame ENSO is talking about upping it to 60. And love may grow, for all we know.

  32. And that was in general, not directly related to this paper!

    • The whole idea of ‘homogenization’ is not to actually remove errantly introduced warming bias (UHI effect). Rather, it is to indelibly redistribute the amount of the error throughout the record. The end result of any consequence is, at best, illumination of a trend. But, when it comes to global warming, we already are aware of the many trends that exist and they all depend on the start point.

      • My point was that for whatever reason you wish, you can’t draw a straight line on a time series and have it carry any meaningful information.

        The best you can probably do with any series, if you are going to be ‘accurate’, is to limit any claimed ‘trend’ to one bounded by a single sine wave over the series as a lower frequency limit.

        Sure the line you draw MAY be right. But you can’t call it a scientific fact. You can’t tell the future and you don’t know the past beyond your series. Outside of the capture window the calculations are blind.
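
        The capture-window point is easy to demonstrate: fit a straight line to a 30-year window of a pure 60-year oscillation (zero secular trend) and you still get a healthy-looking “trend”. A toy illustration, with an invented phase and period:

```python
import numpy as np

years = np.arange(1980, 2010)                    # a 30-year capture window
cycle = np.sin(2 * np.pi * (years - 1975) / 60)  # pure 60-year oscillation, zero long-run trend
slope = np.polyfit(years, cycle, 1)[0]
print(f"apparent trend: {slope * 10:+.3f} per decade")  # nonzero, though the true trend is 0
```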

      • True, climatists pretend they can tease out the eerie solitude of a lonesome flugelhorn amidst the cacophony of an orchestral warm-up by turning a deaf ear to everything other than what they want to hear.

      • > The whole idea of ‘homogenization’ is not to actually remove errantly introduced warming bias (UHI effect). Rather, it is to indelibly redistribute the amount of the error throughout the record.

        That’s what I thought, at first. Would that it were! That would hide the divergence, but at least it would not make it worse.

  33. It really is interesting how so much of climate science effort is spent defending questionable methodology instead of looking for better methods.

  34. Pingback: Quote of the Week – Watts at AGU edition | Watts Up With That?

  35. Let’s see how this fares:

    Here, in my opinion as a 30-year TV/radio/web media reporter on science, is what should be in any professionally produced science press release:

    [1] The name of the paper/project being referenced

    [2] The name of the journal it is published in (if applicable)

    [3] The name of the author(s) or principal researcher(s)

    [4] Contact information for the author(s) or principal researcher(s)

    [5] Contact information for the press release writer/agent

    [6] The digital object identifier (DOI) (if one exists)

    [7] The name of the sponsoring organization (if any)

    [8] The source of the funding for the paper/project

    [9] If possible, at the minimum, one or two full sized (640×480 or larger) graphics/images from the paper/project that illustrate the investigation and/or results

    http://wattsupwiththat.com/2012/09/24/science-by-press-release-where-i-find-myself-in-agreement-with-dr-gavin-schmidt-over-pr-entropy/

    No mention of the title.

    No mention that the paper is unpublished.

    No contact information.

    No mention of the writer of the press release (notice the “Lead author Anthony Watts said of the study”).

    No mention that there can’t be a DOI.

    No mention of the sponsors.

    All we got is [3] and [9].

    Take note, Willard Tony.

    • We will certainly be paid on Tuesday, if you can just get your hamburger today.

    • There isn’t any journal paper, wee willy. The PR is about a little poster presentation that Tony made to three or four people in a hallway at the AGU meeting. It’s an AGU press release promoting a little poster presentation. The AGU will surely be interested in your hyperventilating over it. They will probably retract and profusely apologize.

      You are deteriorating, willy. Get checked.

      • Willard Tony’s science-by-press-release doesn’t seem to meet Willard Tony’s guidelines for “science by press release,” Don Don. There’s no need to minimize this. It’s no big deal. You’re a fun chap, mostly non-violent when you don’t drink.

        Relax. ‘Tis the season.

        NG got us covered.

      • Did Tony write the press release, wee willy? Do you know whose web site published the press release, wee willy?

      • > Did Tony write the press release, wee willy? Do you know whose web site published the press release, wee willy?

        Thank you for making my point, Don Don. Which was also Willard Tony’s point, you know. Seems the spirits of Christmas Present make us all agree.

        Please don’t minimize this agreement too!

      • I don’t have any more time for you, willy. I hope kenny doesn’t stay mad. At least he is coherent.

      • NG got us covered.

        You can say that, again. And again. And again.

    • I think you are suffering from some sort of premature … something … disease.

  36. Note to all regarding Steve M’s comments (many above):

    Anthony Watts has responded on his own blog here:

    http://wattsupwiththat.com/2015/12/18/quote-of-the-week-watts-at-agu-edition/

    with this:

    “I’ve been reading the comments about my press release at WUWT, Bishop Hill, and at Dr. Judith Curry’s place and most have been positive. There is the usual sniping, but these aren’t getting much traction as one would expect, mainly due to the fact that there’s really not much to snipe about other than Steve Mosher’s usual whining that he wants the data, and he wants it now.

    Sorry Mosh, no can do until publication. After trusting people with our data prior to publication and being usurped, not once but twice, I’m just not going to make that mistake a third time.”

    It is very difficult to regain someone’s trust once lost.

    When Anthony et al. publish, they will include all the data and the code and the methods.

    But, ONLY after publication (and the expiry of the publication embargo).

    • Anthony is a straight up guy. I believe he will give us the whole banana. I just wish UAH would do the same. I don’t believe they have, but if so, I apologize.

      • Apologize. Each version has had two forms of documentation. Spencer blogs about the changes. His post on V5.6 to V6 beta was very detailed as to what, why, and how. Then they publish a peer-reviewed paper on what was done. They also provide data set version control.

        OTOH, take for example the NCEI switch from homogenization 1 to homogenization 2 (IIRC about 2008-9). Yes, they published papers on the changes at the time. But very provably there have been multiple additional changes since that are ‘dark’ (no information anywhere). All you have to do is compare successive years’ versions of past years to see what, but not why or how. Worse, NASA’s website explains its correction for UHI using Tokyo as the example, and then provably does just the opposite for major urban GHCN stations around the world. Essay When Data Isn’t has multiple examples, for both NCEI and GISS.

      • I want the actual code, not papers.

      • Note to Steven Mosher ==> Can it be possible?

        You HAVE written Anthony Watts a letter or email asking politely for the data, right?

        Right?……

        Is it possible, somehow, that you forgot that step? ASKING….?

    • Kip, I am shocked! I believe Gavin Schmidt posted, “If he didn’t want his data stolen he shouldn’t have posted it online.” Nothing quite like scientific warmth and fair play.

    • ““I’ve been reading the comments about my press release at WUWT, Bishop Hill, and at Dr. Judith Curry’s place and most have been positive. There is the usual sniping, but these aren’t getting much traction as one would expect, mainly due to the fact that there’s really not much to snipe about other than Steve Mosher’s usual whining that he wants the data, and he wants it now.”

      Actually I wanted it back in 2012, and I predicted that people would resort to special pleading… like someone will “steal” the data and publish.
      So I promise, I will sign a document and even put MONEY ON IT, that I won’t use the data to publish a paper or blog post or anything.
      And faced with that, Anthony gives “Mann-like” responses.

      Now here is what is going to happen.
      The reviewers will ask them to address the criticisms that Victor raised.
      They are SOLID criticisms… maybe not paper killers, but they are SOLID criticisms. Issues that MUST be addressed, like taking the analysis through 2015… like addressing ALL of the siting criteria (like shading).
      Anthony and company will refuse to address these criticisms, and the paper will languish and the data will remain… unpublished.

    • Kip

      “Note to all regarding Steve M’s comments (many above):”

      Which of Steve McIntyre’s criticisms did Anthony address?

      • If I had meant Steve Mc, I would have said so… Of course, it is perfectly clear if one reads the Anthony Watts quote.

      • Kip.

        Since Anthony hasn’t addressed my criticisms, I had to assume you meant Steve McIntyre.

      • Reply to Steven Mosher ==> I don’t think Anthony is going to play your little game.

        Neither am I.

      • It’s simple, Kip.

        #1. Skeptics have been rightly suspicious of peer-reviewed science. Me too.
        #2. When someone makes a scientific claim, regardless of the venue, the scientific method requires that other people be able to reproduce your work. That requires data and method, typically code.
        #3. These principles hold REGARDLESS of where the claim is made: in a journal, on a blog, wherever.
        #4. When someone makes a scientific claim, we get to ask for the data and methods. Even in science fairs for 5th graders.
        #5. If they don’t produce the data and methods… they are not doing science. They are doing advertising.
        #6. In 2012 Anthony published a paper on his blog with McIntyre and Christy as co-authors. Data was not released. This is not science. There are no scientific findings in that paper. No data, no code, no science. There was a promise of science.
        #7. In 2015 a poster was presented. Not peer reviewed. No data, no code. Not science. It’s an advertisement about something that might be science.

        The bottom line is that Anthony and evan are not doing science. They are talking about science they might do someday. They could publish the paper and data today. Then they would be doing science.
        They could post it in any number of open journals. That would be science, provided they gave the code and data.

        They give one reason for not releasing data.

        In the past someone used data that they freely posted on the web to write a paper. Imagine that!! Someone used data to do science.
        Their sole reason for refusing to release data is that somebody might take data that they have been sitting on for 3 years plus and write a paper. But if the data contain the truth, what kind of paper could be written? Are folks afraid that we will take the data and do what we did before? And what did we do? We showed that Anthony’s first paper was correct!! Imagine that. If the data contain the truth… shouldn’t we get that truth out ASAP before our economy is ruined?

        Where is snowden when you need him?

      • Steven Mosher: “The bottom line is that Anthony and evan are not doing science.”

        Anthony and his colleagues have really thrown a scare into you and your friends Venemous and that silly woman with a fantasy about Hot Whoppers, haven’t they, Mosher?

        There is a school of thought that believes you would not recognise science if it bit you on the rump.

        But “science” now, that’s a different thing altogether.

      • Reply to Steven Mosher ==> It’s simple, Steven.

        Anthony DOES NOT TRUST YOU ANYMORE. It is not just you — he doesn’t trust the guys at the NCDC either — but your BEST team did blindside him once.

        He promises he will share everything in the proper order at the proper time.

        (eGads! You and Willis — I demand this, I demand that, as if you were the royal princes of Climate Science.)

        Just to clear the air —

        Please post here, below,

        1. the colleagial letter or email you have sent to Anthony and his Team asking for the data on which they based their AGU presentation

        *and*

        2. their reply, if any.

        I would like to see *exactly* what it is that you say they have refused.

    • Kip: The situation that you and the rest of the team are in, I think I understand perfectly. In this Internet age you can get to the point where the mere placing of a proposed thesis may prompt a race. The fact that your thinking may well have leaked out over multiple blogs, comments, etc. ahead of time is now real.

      As to the data: well, that’s a race between paper and the Internet. It is easy to see how that will come out.

  37. Good timing for this to come out while NOAA is desperately trying to avoid complying with Congressional oversight into whether politics is driving ever-higher surface data adjustments. This farce has gone beyond mere confirmation bias and entered the realm of Lysenkoism. Do they really not understand that billions of tax dollars come with certain legal strings attached? “Shut up and go away” is not an acceptable response to a Congressional subpoena. I hope these savages end up in jail for obstruction; their behavior is horribly corrosive to both scientific integrity and the rule of law.

    It’s gotten so ridiculous, conspiracy theorists like Scott K Johnson at Ars Technica seem to think ordinary government oversight is some beyond-the-pale Inquisition, even as prominent Democrats are vocally trying to outlaw skepticism.

    The degree and speed with which inconvenient facts are now memory-holed by AGW proponents is breathtaking; it was not that long ago that the respective accuracies of satellites and surface stations were uncontroversial.

    • NOAA released the code and data and email from non-scientists.

    • NOAA released only what was nonresponsive to the inquiry into what role motivated reasoning and political bias may have played in the adjustments, questions that are totally reasonable given graphs like this and this even before the Watts study.

      The taxpayers paid for the scientists as well as the nonscientists. No oversight? Fine, no funding. Shut NOAA down until they comply.

    • Ooh, you mustn’t go there. Today we try them? Tomorrow they try us. Let’s just do science.

  38. That siting matters greatly in setting the “trend” observed in station records has been known by professionals for many decades. Conrad and Pollak in “Methods of Climatology” called attention, in particular, to the problem of diminished correlation between urban and nearby non-urban records. The UHI evidence from megacities that have developed subsequently is unmistakable. While Watts et al. add further evidence, the stark effects of UHI are manifest not in 30-yr “trends,” which are particularly responsive to 60-yr oscillations, but in much longer sojourns of mean temperature resembling a logistic curve in various stages of saturation.

  39. Very many people now have personal weather stations online.

    Absolute temperatures are not the same as temperature anomalies over decades, but you can get an idea of siting problems by looking at the range of different temperatures, over a small area, on WunderMap.

    • Winter road closures can tell us something –e.g., the Tioga Road is currently closed due to snow. When the Tioga Road is closed it is not possible to drive to Tuolumne Meadows or enter Yosemite National Park from the east. Usually, that’s something that occurs sometime in November. When the Tioga Road is closed, there is no global warming. We’re doomed!

    • They should really stop reporting anomalies and just report absolute temperature, the anomalies make it too easy to game the numbers by cooling the baseline.

      A cynic might suspect anomalies are preferred because the absolute temps would tend to make people laugh.

      • I feel your pain. I never did like turning an item of data from what it is into what it is not, something I have done a million times (est.).

        But anomalies are necessary. If there is a gap in the data or station dropout, you can throw a mondo offset in even if the trends are the same. If you anomalize, then you wash away that error and apply an adequate band-aid.

        When we were cranking up for the final version, for our unperturbed Class 1\2s, I kept getting around 0.205C/decade, while another on our team was getting 0.151. That was because of a dropout in Region 9 that threw the trend from +0.040C/decade (correct) to around -6.95. If we do not anomalize, our butts will be hanging out, soon to be in a sling.

        If you have complete data and no dropout, then you have it so golden, the issue never comes up. There is no need to anomalize no-dropout, infilled data to do your trends. (But it doesn’t hurt if you do.)
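
        A toy version of the dropout problem described above, with two invented stations sharing the same 0.2C/decade trend: averaging absolute temperatures turns the cold station’s dropout into a large spurious warming, while anomalizing each station against its own baseline first washes the offset away.

```python
# Toy dropout: two invented stations share the same 0.2 C/decade trend but
# different baselines; the cold one stops reporting after 1994.
import numpy as np

years = np.arange(1980, 2010)
trend = 0.02 * (years - 1980)
valley = 15.0 + trend                 # warm valley site, complete record
summit = 2.0 + trend                  # cold summit site...
has_summit = years < 1995             # ...which drops out after 1994

absolute = np.where(has_summit, (valley + summit) / 2, valley)
print(np.polyfit(years, absolute, 1)[0] * 10)   # ~3.4 C/decade: the dropout alone fakes warming

valley_a = valley - valley[has_summit].mean()   # anomalize against each station's own
summit_a = summit - summit[has_summit].mean()   # 1980-1994 baseline
anom = np.where(has_summit, (valley_a + summit_a) / 2, valley_a)
print(np.polyfit(years, anom, 1)[0] * 10)       # ~0.2 C/decade: the shared trend survives
```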

      • David Springer

        Like he said, anomalies make it too easy to game the system.

    • BTW the best proxy might be Great Lakes ice — record extents and record late ice were reported during what non-adjusted weather stations reported as record cold, but NOAA reported the Great Lakes temps as about average.

      Utterly ridiculous.

  40. There is a perfect 1:1 relationship between the upward bias on the temperature trend and the upward bias on the size of government, suggesting that the upward bias on the temperature trend will be corrected when the size of government is reduced.

  41. Tioga Rd closure date, by year:

    Year   Closed
    2016
    2015   1-Nov
    2014   13-Nov
    2013   18-Nov
    2012   8-Nov
    2011   17-Jan
    2010   19-Nov
    2009   12-Nov
    2008   30-Oct
    2007   6-Dec
    2006   27-Nov
    2005   25-Nov
    2004   17-Oct
    2003   31-Oct
    2002   5-Nov
    2001   11-Nov
    2000   9-Nov
    1999   23-Nov
    1998   12-Nov
    1997   12-Nov
    1996   5-Nov
    1995   11-Dec
    1994   10-Nov
    1993   24-Nov
    1992   10-Nov
    1991   14-Nov
    1990   19-Nov
    1989   24-Nov
    1988   14-Nov
    1987   13-Nov
    1986   29-Nov
    1985   12-Nov
    1984   8-Nov
    1983   11-Nov
    1982   15-Nov
    1981   12-Nov
    1980   2-Dec

    Average (’80-’11): 5-Nov
    Median (’80-’11): 12-Nov
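
    For what it’s worth, the Average and Median rows can be sanity-checked. A sketch (dates transcribed from the table above): if 17-Jan is naively counted as day 17 of the year, rather than as a late closure of the 2010-11 season, it drags the mean back into early November, which would explain the average sitting a week ahead of the median:

```python
# Sanity-check the Average/Median rows ('80-'11) from the table above.
from datetime import datetime
from statistics import mean, median

dates = ["2-Dec", "12-Nov", "15-Nov", "11-Nov", "8-Nov", "12-Nov", "29-Nov",
         "13-Nov", "14-Nov", "24-Nov", "19-Nov", "14-Nov", "10-Nov", "24-Nov",
         "10-Nov", "11-Dec", "5-Nov", "12-Nov", "12-Nov", "23-Nov", "9-Nov",
         "11-Nov", "5-Nov", "31-Oct", "17-Oct", "25-Nov", "27-Nov", "6-Dec",
         "30-Oct", "12-Nov", "19-Nov", "17-Jan"]  # 1980..2011; 17-Jan ended the 2010-11 season

def day_of_year(d, wrap):
    doy = datetime.strptime(d, "%d-%b").timetuple().tm_yday
    return doy + 365 if (wrap and doy < 90) else doy  # optionally push mid-winter past December

def as_date(doy):
    return datetime.strptime(str(int(round(doy)) % 365), "%j").strftime("%d-%b")

naive = [day_of_year(d, wrap=False) for d in dates]
wrapped = [day_of_year(d, wrap=True) for d in dates]
print(as_date(mean(naive)), as_date(median(naive)))      # ~6-Nov / 12-Nov: Jan counted as day 17
print(as_date(mean(wrapped)), as_date(median(wrapped)))  # ~17-Nov / 12-Nov: Jan wrapped forward
```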

  42. Brian G Valentine

    Where are all the HOTTEST YEAR EVER EVER HOT HOT HOTTEST 2015 HOTTEST EVER IN HISTORY EVER! people?

    I’m guessing they will be our Christmas present from NOAA. I can’t wait!

    • 2015 is likely to be the hottest year in the instrumental period according to existing data and methods.

      Pffft.. not very important factoid.. like grape records in england.. a small piece of a larger picture.

      • Brian G Valentine

        yuh, but where are they? Where is the mass apoplectic fit over it? It is like waiting for an explosion

      • Steven Mosher,

        You wrote –

        “Pffft.. not very important factoid.. ”

        As is the futility of believing that by detailed examination of chaotic data, one can divine the future. The practice of arithromancy used by Warmists is no better or worse than reading the Tarot.

        The “instrumental period” you now claim as important is yet another Warmist attempt to deny, divert and obscure.

        Deny. The Earth was obviously far hotter before the instrumental record. Molten surface, the boiling seas, and all that.

        Divert. Pretend that any evidence contradicting silly Warmist assertions is merely an unimportant “factoid”.

        Obscure. Studiously avoid acknowledging the Warmist lack of physical knowledge, by claiming the surface temperature is being measured. Claim that actual temperatures are meaningless, so anomalies must be calculated and used. Use a lot of made up sciencey words, that not even Warmists can really explain.

        Delusional foolishness, all of it. CO2 does not “assist the Sun to make things hotter”, as I saw recently.

        You might be better off learning to cast runes. I can predict the future better than you, just by casting the runes. If you want a few pointers, please let me know.

        Cheers.

      • “Arithromancy”…I do hope that’s not copyright. Because I am gunna take it. Mine now!

      • I had to look it up, very usable!

        Per Harry Potter: Arithmancy is an elective subject offered from the third year on at Hogwarts School of Witchcraft and Wizardry. Little is known about the class, but the study of Arithmancy has been described as “predicting the future using numbers,” with “bit of numerology” as well.

      • But not according to 1999 methods, or satellites, or other proxies.

      • David L. Hagen

        Mosher – “instruments” are also aboard the Satellites to record atmospheric temperatures vs depth by microwaves. Those show 1998 as the warmest year in “the instrumental period”!

      • Steven, what about the nonexistent data and what was the method used to ‘dump’ it?

    • Probably Nunavut, Canada… -31°C about 196 minutes ago (20:00 UTC). According to NASA that is probably considered unseasonably warm this time of year and needs to be cranked into the homogenization machine.

  43. Congratulations to Watts et al for an important addition to the climate jigsaw puzzle. Well done. One does wonder why NOAA etc have not undertaken this properly before, including and especially BEST! Very frustrating.

  44. I visited the nearest reporting station this afternoon just to look.

    I think things have gotten worse since it was last surveyed.

    Big pond of drainage water 12 meters away (on two sides of the station).
    Some concrete and gravel pavement within 10 meters.
    Asphalt more than 20 meters away but on all sides.

    • > I visited the nearest reporting station this afternoon just to look.

      (Grin. You just did science.) Okay, let’s “look”.

      For Leroy (1999), it’s simple: Concrete within 10m? Class 4.

      For Leroy (2010), if 10+% of the area within 30 m is heat sink, then it’s Class 3 at best. Your asphalt and pond sound like they would easily do it alone. As for it being Class 4, 10+% of the area within 10 m must be sink. That’s ~31.4 m^2 (ignore the false precision).

      So, unless you can fine down how much is paved within 10m., all we can say for sure is that it’s a Class 3 or Class 4.

      That’s one of the questions I have about Leroy. The whole area outside of 10m. could be an inferno, but the site would still be a Class 3. I have others.

      I might take a different tack and try to count every sink area and weight it by distance/area (and possibly by type) and get a bottomline number.

      That approach might turn out to be completely invalid. But if not, then you have a nice, easy, unified top-down system; you have “coverage” and you could just drop meso/macrosite right in, or anything else you wanted to. (When the howls about circular logic start rolling in from the boyz, then I’ll know for sure I’m on the right track.)

      Leroy’s system is completely effective for his purposes — initial [sic] siting. But it is too Byzantine for what I want to do (esp. the 2010 version). I am guessing he is not a game designer. Sometimes when I’m rating a station I feel like I’m doing multiplication using Roman numerals. There may be a better way.
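
      To make those two rules concrete, a toy rating function with the thresholds as stated above; the distance-weighted index is the purely hypothetical alternative floated two paragraphs up (the weighting scheme is invented here, not Leroy’s and not the paper’s):

```python
# Toy reading of the two siting rules described above. The 10% thresholds are
# as stated in the comment; the station areas passed in are assumptions.
import math

def leroy_2010_class(sink_m2_within_10m, sink_m2_within_30m):
    """Class 4 if heat sinks cover >= 10% of the 10 m circle (~31.4 m2),
    Class 3 if they cover >= 10% of the 30 m circle; otherwise Class 1/2
    (distinguishing 1 from 2 needs criteria not modeled here)."""
    if sink_m2_within_10m >= 0.10 * math.pi * 10 ** 2:
        return 4
    if sink_m2_within_30m >= 0.10 * math.pi * 30 ** 2:
        return 3
    return 2

def sink_index(sinks):
    """Hypothetical distance-weighted alternative: each sink is
    (area_m2, distance_m); influence falls off with distance squared."""
    return sum(area / max(dist, 1.0) ** 2 for area, dist in sinks)

# Guessed numbers for the station visit above: some concrete inside 10 m,
# pond at 12 m, asphalt from 20 m out.
print(leroy_2010_class(sink_m2_within_10m=20.0, sink_m2_within_30m=400.0))  # -> 3
print(round(sink_index([(20.0, 8.0), (150.0, 12.0), (300.0, 22.0)]), 2))
```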

      > I think things have gotten worse since it was last surveyed.

      I know a couple of notable examples, myself. But I looked at as much GE wayback as I could on these, and I was surprised how little microsite of an unmoved station changed over the years.

      We started out thinking that there was a spurious warming because of continually encroaching microsite. Instead, what we found was that spurious trend amplification (warming or cooling) will occur even if the microsite is unchanging.

      Mesosite no doubt encroaches, but, for whatever reason, when we removed the well-sited urban stations from the mix, it didn’t take trends down even a jot. (Yet our urban Class 1\2 sample is too small to be definitive.)

      Of course if direct, heavy urbanization rapidly encroaches (as in some parts of the world), then that would cause a continually increasing offset which would spuriously jump the trend. But I don’t think a few paved sidewalk additions a hundred meters down the road are going to make a dime’s worth of difference. [+/- 1.2421 dimes?]

      • David Springer

        “Instead, what we found was that spurious trend amplification (warming or cooling) will occur even if the microsite is unchanging.”

        That is expected if working off the hypothesis that the cotton region shelters get darker as they age.

  45. What correct physics is telling us is explained here, where you are invited to make a submission for a reward of several thousand dollars if you can prove the thermodynamics wrong and produce a study showing results opposite to mine, which showed that more moist regions have both lower daily maximum and minimum temperatures than drier regions at similar latitude and altitude.

    Q.1: What is the sensitivity for each 1% of water vapor in the atmosphere?

    Q.2: Based on your answer to Q.1, how much warming does a mean of 1.25% of water vapor produce?

    Q.3: Also based on the above, how much hotter should a rain forest with 4% WV be compared with a dry region with 1% WV?

    Q.4: Taking into account the fact that solar radiation reaching Earth’s surface ranges between zero and about 1,000W/m^2 with a mean between 160 and 170W/m^2, and that radiation from the colder atmosphere is known not to penetrate water more than a few nanometers (thus unable to “warm” it), explain, using the Stefan-Boltzmann equation and a typical range of flux between 0 and 1,000W/m^2, how the ocean surface reaches observed temperatures.

    For answers, study the new 21st century paradigm shift in climate change science which will be widely publicized in 2017 and common knowledge by 2025 whilst the current hiatus continues until about 2028 to 2030. Long-term (500 year) natural cooling will start before 2100 and mean temperatures will not rise more than about 0.4 to 0.6 degree before the cooling starts, as shown here.

    Who’s next to take me on?

  46. Evan Jones 2014/09/11: “We will, of course, be hitting it from the physics angle, as well. So it won’t be a statistics-only study. It will be backed by a mechanism that explains why and how (and to what extent) this occurs.”

    Neither the AGU poster nor the press release hint at finding an actual physical mechanism.

    There *is* a known physical mechanism that produces similar results and has already been written up in the scientific literature – Hubbard & Lin (2004), Air Temperature Comparison between the MMTS and the USCRN Temperature Systems.

      • Your “Gossip Girl” writing style screams physics. Note to Willard: This is another one like you and Ken who “Do Science” apparently behind the “green” door.

        Welcome to my world, guys. I will go on a bit.

        > The heat-sink hypothesis is an unphysical one. This was pointed out to Evan Jones over a year ago in discussion at Stoat’s. The press release makes no mention of having found a physical explanation. “Heat-sink” in this context is merely a euphemism for: We haven’t found a physical explanation.

        And there I was, thinking it was a euphemism for, “Gosh, those trends sure average a heck of a lot higher when those houses and cementy things are near the sensor. Wow, look at those Tmin numbers. Well it seems pretty obvious why that is.”

        As Dr. Leroy put it: the quality of observations cannot be ensured only by the use of high-quality instrumentation, but relies at least as much on the proper siting and maintenance of the instruments.

        He refers to “heat sources”, writ large. We refine the observation to distinguish that which generates heat (“heat source”) from that which does not generate heat, but absorbs and re-radiates it (“heat sink”).

        Well, anyway, you don’t seem to think much of the term, that’s obvious. Or we wouldn’t still be going on about it after all this time. Is it possible that what you find bothersome about all this is that the words “heat sink” sit so well on the tongue?

        Dr. Leroy wasn’t looking at the trends when a station is exposed to “heat source” (which, by his definition includes sources and sinks), but offset. What we do is use his rating system and then look at the trends of the stations thus rated. In your haste to remind me to stick with the trends, I fear you have strayed into the land of offsets a bit, yourself. Besides, being colder does not mean you are not warming faster, as the Arctic guys like to say.

        > Anyone that reflects on what a heat-sink does

        What a heat sink does is reflect.

        > and how they’re used

        Well, in greenhouses, they’re used to take the edge off Tmin and bump up Tmax. That’s the offset effect, anyway. You wouldn’t know how that would affect trend during a warming interval until you measure it, of course. You guys remind me of the story of the dude who got tossed out of the Aristotellian tribe for the crime of instigation to commit empiricism.

        > quickly realizes this is bass ackwards.

        I recommend realizing a little slower.

        > Heat-sinks reduce trends, not exaggerate them. We don’t put heat-sinks around CPUs in our computers because we want them to run hotter.

        You are talking offset. You need to be thinking trend. I could just leave it at that.

        A CPU is a heat source. It is generating its own heat. It is the hottest thing in the room. A CPU is generally located in an enclosed space, and is likely not exposed to get much sun. So the heat sink is taking up energy generated from the computer — a closed and trendless system.

        Placing a heat sink next to a computer sitting outside on a sunny lawn is not going to cool it down. Both the sink and the computer are receiving radiation from both the sun and the surrounding atmosphere. The heat sink is absorbing more energy from the sun than it is from the CPU, then re-radiating some of it back towards the CPU, recorded only at Tmax and Tmin. Not to mention the general lack of nocturnal/diurnal variation of a room in a building. When is Tmin inside a closed, artificially controlled environment?

        So if anything, the heat sink will be marginally increasing the heat of the CPU at either Tmax or Tmin, which are the only times the temperatures are recorded by USHCN. Not that this is much of a practical issue outside a closed room.

        > I find this whole explanation – or lack of one – especially disappointing because Evan assured us this was easily figured out by their co-author physicist.

        I have no doubt that you do. I think I can feel your disappointment radiating off you at Tmin. We never managed to land him, unfortunately. We’ll have to get back to it.

        Please note that I was being starkly open about our process, far more than any other paper I’ve seen. Perhaps too open. But the idea is to operate as much as possible in the open. That’s what we do.

        > First he said, “Our physicist co-author thinks this factor is easy to nail and he does know about the Hubbard paper.”

        Well, that work hasn’t been done yet. It will have to wait for followup.

        > Later he said, “We will, of course, be hitting it from the physics angle, as well. So it won’t be a statistics-only study. It will be backed by a mechanism that explains why and how (and to what extent) this occurs.”

        The best laid schemes of mice and men gang aft agley. We can (and do) describe the mechanism, but we are going to need someone to add in the formulas. We’ll address this in followup.

        > OTOH, there is a known component of the measuring system that *does* exaggerate highs *and* exaggerate lows – the Dale/Vishay 1140 thermistor used in the MMTS stations. This was documented by Hubbard and Lin, Air Temperature Comparison between the MMTS and the USCRN Temperature Systems (2004).

        Groovy. We already add an MMTS adjustment offset. When we publish, I will supply a tool that will allow you to drop in whatever MMTS numbers you like better than ours. Either by formula or by swapping in a new MMTS-adj dataset.
        Let us know when you do. We would find the results interesting.

        But in any event, it won’t be enough of a bump to change things much over what we already did. Maybe 0.01C/decade on the outside.

        And speaking of gluteal direction, all you guys think about is how to horsewhip the MMTSs in line with the CRSs. It never seems to occur to you that it’s the CRS units that are the actual problem in the first place — carrying your own personal heat sink around on your shoulders wherever you go will do that. Especially as the paint fades (net).

        It’s the CRS units that are giving the spurious results. And, as the MMTS units were calibrated to the CRS units, I see little real justification even for adding in the offset jumps. Either that or the calibrators have some ‘splaining to do. But, being a swell guy, I’ll go along. For now.

        It is possible that the offsets should remain — and don’t think I won’t be looking at pairwise to check. But it is glaringly obvious that the CRS trends, esp. Tmax are going to have to be adjusted down. Way down. And that has implications that are going to shake the chain all the way back to 1880.

        I think it’s youse guys, not me that have things reversed.

        > Since the Menne MMTS Bias adjustments were based on all stations, regardless of microsite, it’s easy to envisage that Menne’s MMTS adjustment isn’t entirely applicable to a subset of the stations. The Hubbard MMTS Bias adjustment is instrument specific – regardless of location or microsite – since it’s just a description of the physical response curve of the sensor itself. But Menne relies on pairwise homogenization while Hubbard & Lin did a year-long side-by-side field study comparison.

        Just plug in Menne’s data. MMTS-adjustment-only data is available from NOAA if you care to do that. Or H&L. Besides, a little bigger or a little smaller offset isn’t going to matter here. What’s going to matter is the bad CRS bias. You are the ones looking at this backwards.

        > While there is nothing wrong with homogenization per se, using the average result from a large group of stations and expecting it to be applicable to all subsets is a leap of faith. It is also unnecessary considering the Hubbard MMTS Bias I adjustment is available. If nothing else, obtaining the same results also using Hubbard would make the results more robust and eliminate the MMTS sensor as a potential physical explanation.

        There is nothing wrong with homogenization per se, if there is no systematic error in the data. Then it is Kindly Uncle H. But when a systematic error is introduced to the data series, Kindly Uncle H goes postal. This is a known thing.

        Yet I see no reason you can’t sub in Hubbard’s data. You could even do it station by station. You can be provided with excel sheets that will enable this process when we publish. But even if the bump in trend is double ours, it’s not going to affect our results much.
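
        One way to see how a constant, unchanging heat sink can still amplify a trend, offset versus trend being the crux of the exchange above: if the sink’s nocturnal give-back scales with how warm the day was, the bias itself grows as the climate warms. A toy model with invented parameters, nothing more:

```python
# Toy model of trend amplification by an *unchanging* heat sink: assume the
# sink re-radiates at night a fixed fraction of the daytime warmth, so the
# bias itself grows as the climate warms. All parameters are invented.
import numpy as np

years = np.arange(30)
background = 0.02 * years              # true warming, 0.2 C/decade
tmax = 20.0 + background
tmin = 8.0 + background

sink_gain = 0.08                       # nocturnal give-back, proportional to daytime warmth
tmin_seen = tmin + sink_gain * (tmax - tmax[0])   # sink nudges Tmin up, more in warm years
tmean_true = (tmax + tmin) / 2
tmean_seen = (tmax + tmin_seen) / 2

print(np.polyfit(years, tmean_true, 1)[0] * 10)   # 0.200 C/decade, the true trend
print(np.polyfit(years, tmean_seen, 1)[0] * 10)   # ~0.208 C/decade: same site, amplified trend
```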

      • > This is another one like you and Ken who “Do Science” apparently behind the “green” door.

        TL;DR.

        I thought it was a curtain, Kriging King. Was it green? Hard to notice when you’re behind it.

        I did not notice I “Do Science” either. That curtain is too opaque.

  47. Nice plot: http://www.climate-change-theory.com/planetcycles.jpg
    When do you expect the next deep freeze that empties 125m of water from the seas to ice in the polar and N/S latitudes per the Ice Core 120,000y cycling pattern? And what will drive things below the LIAs as you have presented?

    • Dynamical excitation of the tropical Pacific Ocean and ENSO variability by Little Ice Age cooling

      ABSTRACT
      Tropical Pacific Ocean dynamics during the Medieval Climate Anomaly (MCA) and Little Ice Age (LIA) are poorly characterized due to lack of evidence from the eastern equatorial Pacific. We reconstructed sea surface temperature, El Niño–Southern Oscillation (ENSO) activity, and the tropical Pacific zonal gradient for the past millennium from Galápagos ocean sediments. We document a “Mid-Millennium Shift” (MMS) in ocean-atmosphere circulation ~1500-1650 CE, from a state with strong zonal gradient and dampened ENSO to one with weak gradient and amplified ENSO. The MMS coincided with deepest LIA cooling and was likely caused by southward shift of the Intertropical Convergence Zone. Peak MCA (900-1150 CE) was a warm period in the eastern Pacific, contradicting the paradigm of a persistent La Niña pattern. …

      • JCH, a ref for this abstract?
        Also, how do you propose the 120,000y cycle occurs, and when is the deep freeze likely to start?

      • Thanks.
        Nothing about the 120,000y cycle? Surely, the last 3 such cycles would indicate that one is “imminent”. So, you must have the reasons why and a good idea when the global temp will drop.

      • Joel – your graphic mentions MWP and LIA, which is what the paper I linked is about. When the Eastern Pacific chills, the GMST chills. Well, it used to be that way. Until 1985. Then natural variation found the ACO2 knob was twisted hard to the right, and it had no answer… man = 100% plus.

      • So, JCH, per your implicit pronouncement above, there will be no more deep freezes? Man has found the knob that will allow the earth to stay above the LIA cool level, forever. I doubt it, as I believe cosmic factors are BIGGER than man and his doings, and depositing 125m of sea water up on the poles and land as ice was NOT a trivial matter. But at least I know where you stand regarding what the controlling factors are – only the Pacific ocean and CO2 matter, nothing else – with man being, per your expressed opinion, 100% in control of all that will happen. Again, I doubt man has that much control of the situation.

        BTW, got a copy of the sciencemag paper you quoted the abstract of?

      • It’s behind a pay wall.

        http://mashable.com/2015/04/09/rapid-global-warming/#QBRUe5stCSqI

        “The results suggest that when a cycle known as the Pacific Decadal Oscillation, or PDO, switches to a ‘positive mode,’ the world will see faster temperature increases than it has since about 1999. The PDO, as it happens, has just switched into strongly positive territory.”

      • JCH: Sounds like you just discovered PDO, which is a +/- 30 year cycle. The last positive phase was ~1975-2005. In negative phase now. PDO does go positive for a bit during negative phases and negative a bit during positive phases.

        You are getting excited over this year’s El Nino weather event, which could extend for a few years before PDO goes negative again. When you feel the urge to compete with WUWT, fighting ignorance with ignorance just makes them look not so idiotic.

  48. Y’all are being too hard on Mosher. He is an integral part of BEST. BEST’s sole function was to pre-empt Anthony Watts’ surface stations project and to debunk UHI and station siting in general as even a factor in the mythical “Global Average Temperature”.

    Muller succeeded in pre-empting Watts in the only arena that matters to warmists like him and Mosher – the political/media arena. That is what BEST is all about, no matter what they say.

    That is why Muller reneged on his promise to Watts to keep the data Watts shared with him private. That is why Muller ran to the press as soon as he could with his take on Watts’ data.

    What more do you expect from Muller’s mini-me Steve Mosher? He has been carrying on for years about the sacrosanct purity of the “Global Average Temperature” product produced by BEST. It doesn’t matter which stations you pick, or what you do to the data: BEST always comes up with the same answer as far as trend goes, and is proud of that fact.

    If, however, properly sited stations show a significantly lower warming trend than the remainder, everything Mosher has been proclaiming as holy writ for the last several years is garbage.

    It doesn’t matter that there is no such thing as “Global Average Temperature”. It doesn’t matter that neither BEST, nor NOAA, nor Anthony Watts for that matter, can come up with a GAT accurate to within a tenth of a degree.

    What matters is the press releases. Muller understood that, which is why he conned Watts’ data out of him. Mosher understands that, which is why he wants Watts’ data NOW NOW NOW.

    BEST and Mosher don’t need to publish Watts’ data, or release it, or anything of the kind. What they need is a competing press release, and they need it now.

    Muller has no chance of getting it. So Mosher is trying to trade on the former good will he used to have with the skeptical community, back when he pretended (more convincingly) to be a lukewarmer.

    If Watts gives Mosher his data, any part of it, Mosher will come out within a week or so with an explanation of why Watts’ conclusions are absolute nonsense. The publicity is all that matters.

    If this were about science, time wouldn’t matter. This is about politics, so time is everything.

    • ==> “BEST’s sole function was to pre-empt Anthony Watts’ surface stations project and to debunk UHI and station siting in general”

      That’s beautiful, Gary. Never let obvious explanations suffice when you can dream up a conspiracy theory.

    • GaryM, love you man! You nailed it.

      Muller got his tit in the wringer with his Berkeley/APS Physics buddies when he criticized the Mann hockey stick in his online video. He has been trying to dig his sorry butt out of the ditch since then, and Mosher is his attack dog.

    • Do you need any help writing the screenplay, Gary? The plot is a little far-fetched, but it has possibilities. Jack Warden could play Muller, if he is still alive. Strother Martin looks just like Mosher, but I am pretty sure he has gone to that silver screen in the sky. Anyway, have your people call my people.

      • Here you go, Gary:

        We could use this footage and pay the Strother Martin estate a few hundred bucks. There is some even better stuff with Strother in “Hannie Caulder”. And Raquel Welch is in that one, wearing a short poncho, and nothing else.

      • Brian G Valentine

        Richard Boone as Anthony Watts, but Boone’s gone

      • Don Monfort,

        The thought of Raquel Welch wearing nothing but a short Pancho is titillating. Even more if said Pancho was, in turn, wearing nothing more than his birthday suit!

        Cheers.

      • Boone would be OK, Brian. Not too many people know he’s dead. But I am thinking Frank Sinatra. We could make it a semi-musical. I guess we could use Pat Boone. But live guys cost more money.

        Slow down, mike. We are going for a PG rating.

      • Brian G Valentine

        This whole concept does not give the flavor of the story of the Redemption of the troubled and doubting Skeptic, Richard Muller, who, witnessing the downward spiral of possible doubters into the abyss of Denial, came to see the One Truth and became a hero to all of those who live by the faith

      • This is what Mosher said at the end of his stint as a WUWT hero:

      • > The thought of Raquel Welch wearing nothing but a short Pancho is titillating. Even more if said Pancho was, in turn, wearing nothing more than his birthday suit!

        We are more concerned with heat sinks than heat sources.

    • So skeptics are keeping the data secret to stop anyone finding anything wrong with it?
      Hmmmm

      • Just to emphasise, you’ve pretty much admitted that you think withholding scientific data is a good idea to prevent criticisms of it.

        And you don’t even seem to be self-aware about what you’ve said…

      • > Just to emphasise, you’ve pretty much admitted that you think withholding scientific data is a good idea to prevent criticisms of it.

        There’ll be plenty of opportunity to manufacture “criticisms of it” after the paper is published.

      • That would make sense if Gary were voicing a concern that Mosher might run with the data and jump in and release a paper of his own saying the SAME thing, thereby stealing the work.

        But Gary wasn’t voicing that concern; he was instead worried that Mosher might find something WRONG with the data that undermined the paper’s conclusion.

        That alone was enough for Gary to support hiding the data.

        I am reminded of an email by Phil Jones:

        “Why should I make the data available to you, when your aim is to try and find something wrong with it?”

        I think it has subsequently been agreed by all, even by Phil Jones himself, that you can’t pick and choose who gets the data just because you don’t think someone else is acting in good faith.

        > But Gary wasn’t voicing that concern; he was instead worried that Mosher might find something WRONG with the data that undermined the paper’s conclusion.

        That wasn’t how I read his comment. My understanding was that he expects Mosher (or BEST, whom Mosher supposedly represents) to issue a press release (if they get the opportunity) using some rationalization based on the data to assert that the whole study didn’t matter.

        Here’s what he said, just to remind you [all bolds mine]:

        What matters is the press releases. Muller understood that, which is why he conned Watts’ data out of him. Mosher understands that, which is why he wants Watts’ data NOW NOW NOW.

        […]

        If Watts gives Mosher his data, any part of it, Mosher will come out within a week or so with an explanation of why Watts’ conclusions are absolute nonsense. The publicity is all that matters.

        […]

        If this were about science, time wouldn’t matter. This is about politics, so time is everything.

        > So skeptics are keeping the data secret to stop anyone finding anything wrong with it?
        > Hmmmm

        Hmm. Anyone who knows me would know better than to ask such a question.

        If you do not look at it and find at least one thing wrong with it, I shall feel like an absolute wallflower.

      • I think I need to add a further word on the data (non)release. You all know it will be released. When we publish, wild horses couldn’t prevent me from releasing it.

        We are not withholding the data because we think someone is going to find something wrong with it. I fully expect that there will be folks picking around every edge, and I fully expect they will find at least something that is incorrect or can be interpreted otherwise.

        I am looking at methods to include partial unperturbed records, and that will change the results, too. I don’t think by much, but I won’t know till I do it.

        As for “review without data”, that has been what I have been after. And if you haven’t figured it out by now, that means review of method. It also means I know what kinds of questions to expect and will have considered them when peer-review time rolls around. That is what I got from the review. What I paid for it was an explanation of our own methods (which ought to be pretty well known by now).

        It was a good bargain for all sides in this. I paid good coin and received good value.

        When the data is released, we can have a whole new nice argument over it. I look forward to it.

      • evan

        “And if you haven’t figured it out by now, that means review of method.”

        You haven’t revealed your method either.

    • GaryM:

      Spot on! While human motives are often difficult to decipher very accurately, the fact that BEST’s efforts are concentrated on producing press releases, rather than scientific advances, is difficult to miss. The daily dose of Mosherisms only reinforces the conviction that their interest in understanding geophysical variables is entirely incidental.

      • Or maybe we are all just doing our science the best way we know how. Speaking personally, I have enjoyed the ride.

        C’mon, y’all. The politicking isn’t what’s going to endure, anyway. When the dust clears, it’s the work that counts.

        Science is the dog. Politics is the big fluffy tail. No one sees the dog for the tail. But the tail ultimately goes where the dog goes.

      • Maybe Mosher, who’s but an amateur programmer, is doing “science the best way” he knows. Muller’s M.O., however, cannot be explained that way; surely, he must know what he’s producing is numerology, not science!

      • From 2005 going forward, there is no real net trend, and therefore no net divergence between COOP and CRN. This supports our hypothesis. A divergence would challenge it.

        And note how the CRS cool a bit faster during the cooling interval and warm faster during the warming. Note that these are all classes, though, and we are primarily concerned with Class 1\2. The divergence is larger for CRS/MMTS Class 1\2 than for Class 3\4\5.

        Our MMTS adjustments currently add only the offsets. I think it incorrect for Menne to adjust the trends using seven years backward and forward. I think it is an attempt to put the MMTS units in line with the CRS, when they should be doing it the other way around.

        In any event, we are looking at a microsite bias of >0.1C/decade, and MMTS adjustments are on a much smaller scale. But you’ll be able to drop in Menne’s USHCN MMTS-adjusted data for comparison if you like (when we release our data) and compare his adjustment and ours.
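
        What “adding only the offsets” amounts to, as a sketch; the 0.25C step and the toy series are placeholders, not Menne’s or Hubbard & Lin’s published values:

```python
# Sketch of an offset-only instrument adjustment: remove the CRS-vs-MMTS
# step at the changeover, leave every segment's internal trend alone.
# The 0.25 C step and the toy series are placeholders, not published values.
import numpy as np

def apply_mmts_offset(temps, changeover_idx, offset=0.25):
    adjusted = temps.copy()
    adjusted[:changeover_idx] -= offset   # shift the warm-biased CRS years down
    return adjusted

t = np.arange(30)
true_temp = 12.0 + 0.01 * t                         # underlying 0.1 C/decade warming
measured = true_temp + np.where(t < 15, 0.25, 0.0)  # CRS read 0.25 C warm before the switch
print(np.polyfit(t, measured, 1)[0] * 10)           # ~ -0.03 C/decade: the step masks the warming
print(np.polyfit(t, apply_mmts_offset(measured, 15), 1)[0] * 10)  # ~0.10 C/decade recovered
```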

      • Actually, as a historical modeler and wargame designer and with much hands-on experience in that sort of “numerology”, I find the BEST approach quite intriguing. I also have acquired a (somewhat horrid) fascination with the concept of homogenization that I will never be able to cure myself of.

        They are both dangerous tools. They are fire. But, if used with discretion and proper direction, they have great potential.

        Look at what we are doing. It’s the flip side of what Mosh is doing. The net results of both could well turn out to be similar to ours (or vice versa) in the end — which is not nigh.

        I am not encouraging VeeV to abandon Uncle H nor am I telling Mosh not to beat his splits into ploughshares. I am trying to figure out how to harness the advantage of both approaches. And insinuate Microsite considerations into them both.

        In terms of the science writ large, by dropping the perturbed stations, we have created a “check sum”. But we have the advantage of doing a mere 30-year stretch out of the data-metadata rich USHCN. So we can afford to drop perturbed stations.

        Mosh and VeeV have to cover the entire 140-year patch and have the entire GHCN to deal with. You cannot know how bad the data/metadata/coverage problems are (outside the USHCN, I have had but a few horrifying glances).

        They cannot possibly afford to blithely drop, as we have done. So to have a shot at redeeming the GHCN, one is going to have to rely on said “numerology”. So rather than discard it, we must work to improve it.

      • evanmjones:

        Being “intrigued” with BEST’s methods is a far cry from any contextual legitimization of that methodology as a scientific proposition. There’s a vital difference between designing fantastic war-games and establishing the realities of geophysical processes.

      • ” I am trying to figure out how to harness the advantage of both approaches.”

        Simple. Our code is on the web.

        Approach 1: you release the data and we redo the station quality paper.

        Approach 2: you take the code for station quality and use your data.

        It’s been there for 3 years!!!!!

  49. thisisnotgoodtogo

    “What matters is the press releases. Muller understood that, which is why he conned Watts’ data out of him. Mosher understands that, which is why he wants Watts’ data NOW NOW NOW.”

    Thank you, Gary M. That is it.

  50. What will be done about all of this? Probably nothing. Wood for Trees would have to start all over.

  51. So in layman’s terms the ‘professional climos’ corrupted the good data to make it match the bad data.

    They aren’t supposed to do that.

    Is ‘intellectual integrity’ a phrase any climo would recognise or understand?

    Or is the shysterism of Climategate still rife in this shoddy apology for a ‘science’?

    • That’s it in a nutshell. It is inherent in any homogenization that uses any form of regional expectation: GISS, NCEI, BEST, BOM, and the rest. The microsite issues do not necessarily crop up all at once, where some breakpoint test would catch them; they accumulate over time with population growth and economic development.

      • If I ever have the misfortune to shake hands with a ‘climate scientist’ I will be very careful to count my fingers afterwards.

        And should anyone be foolish enough to invite one into their home I advise that they lock up their daughters, put any spare cash in the bank and dine with very long spoons.

        In any walk of life other than academe, climos would be getting struck off for gross misconduct and/or bringing the ‘profession’ into disrepute.

        It is an irredeemably dishonest trade.

      • Easily testable.
        Homogenization does not change the values of CRN stations:
        in over 90% of cases, CRN values are unchanged.
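
        (A sketch of how one might check that claim, assuming raw and homogenized CRN values were available side by side. The file names, layout, and the 0.005 C tolerance are all hypothetical.)

        ```python
        import numpy as np

        # Hypothetical inputs: same-shaped arrays of CRN station-month values,
        # one as reported and one after the homogenization pass.
        raw = np.loadtxt("crn_raw.csv", delimiter=",")            # hypothetical file
        adjusted = np.loadtxt("crn_adjusted.csv", delimiter=",")  # hypothetical file

        # Count a value as "unchanged" if it moved by less than the tolerance.
        unchanged = np.abs(adjusted - raw) < 0.005
        print(f"fraction of CRN values left unchanged: {unchanged.mean():.1%}")
        ```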

      • In business, ten percent of the sales force will typically make ninety percent of the sales. Now what?

    • ‘Homogenize,’ verb (used with object)

      ‘to form by blending unlike elements.’

      aka …
      acclimatize,
      accommodate
      acculturate …

      • Kinda’ makes yr think of dendroclimatology,
        yarmalising.

      • I thought no, that can’t be a word:
        ac·cul·tur·ate
        [əˈkəlCHəˌrāt]
        VERB
        1. assimilate or cause to assimilate a different culture, typically the dominant one:
        “those who have acculturated to the US” … became a liberal green spouting for the arrest of denialists

    • So in layman’s terms the ‘professional climos’ corrupted the good data to make it match the bad data.

      They aren’t supposed to do that.

      I don’t think they meant to.

      • David Springer

        You are too kind.

      • I have been eyeball-deep in the data, both raw and adjusted, and that is my honest opinion from the trenches.

      • Danny Thomas

        Evanmjones,

        Thank you for this above: “In terms of the science writ large, by dropping the perturbed stations, we have created a “check sum”.”

        I’m late to this discussion, but in a nutshell this describes what I perceive as the value of this offering, and I can in no way find an issue with this approach. After all, it’s the ‘trends’ which are important. Having a “check sum” or ‘control’ (unadulterated) seems like it should have value to all sides. It seems to me that those whose results (predictions) lie further away from the ‘check sum’ should ask more questions as to why.

        Mosher himself has stated that the GAT (as it’s currently manifested via numerous means and sources) is no more than a prediction and certainly is not an observation. I cannot see how your method is any worse than the other offerings. Thank you for the effort and the sharing.

      • The problem, Danny, is that they are not unadulterated.

      • Danny Thomas

        Steven,

        Okay. How have they been manipulated? My understanding is this is a subset with a long history and has been selected based on criteria not involving changes. Where did the manipulation occur, how, and how can it be stated if the data set you desire from the authors has not been reviewed by you? The impression I’m working under leads me to believe that changes (external and instrumentation) led to stations being removed, leaving the balance of 410 (+/-) ‘pristine’ sites.

      • Remember, Phil Jones said that he is sure that the original old stuff he ‘dumped’ would have looked just about the same to him today as if it were yesterday. Was it a long list, I still can’t find it on the net?

      • Until this ‘dump’ thing is cleared up with the facts…

        https://startthinkingright.wordpress.com/2009/11/30/global-warming-scientists-admit-purging-their-raw-data/

        we have all been wasting electrons. The servers are heating the TOA.

        “Okay. How have they been manipulated? My understanding is this is a subset with a long history and has been selected based on criteria not involving changes. Where did the manipulation occur, how, and how can it be stated if the data set you desire from the authors has not been reviewed by you? The impression I’m working under leads me to believe that changes (external and instrumentation) led to stations being removed, leaving the balance of 410 (+/-) ‘pristine’ sites.”

        1. Your claim is that they are unadulterated. THAT is the claim that requires proof.
        2. Evan did not use the entire Leroy classification system, which would have included “shading.”
        3. The only evidence you have is what the site looks like today, or at the date of the last photo. So a site that was shaded by trees 30 years ago, but has had the trees chopped down since, will be “undisturbed” using Evan’s criteria.

        In short, you can’t claim they are unadulterated. Extraordinary claims require extraordinary proof. All that can be proved is that, given a belief in the metadata, and given a belief that some of the Leroy criteria DON’T matter, the stations show no signs in that metadata of being changed. And check your numbers again.

        Further, the first time Anthony and Evan published this they published maps of the stations. Guess what you can do?

      • Danny Thomas

        Steven,

        Try turning over a new leaf in the new year. My “impression” was that the recorded temperatures were ‘unmanipulated’. As the homogenization process is what gives Mr. Jones so much angst, the presumption follows that the data has not been modified. I addressed the comment to him and would presume that if I’m inaccurate a correction will come from him; I invite that.

        As you’ve not been supplied the data you seem to desire, is it not an ‘extraordinary claim’ for you to assume that somehow the data has been manipulated? After reading some 800-odd comments posted, my choice is to take the advice that Mr. Jones suggested, and via this response I’ll just ask him if it was.

        As suggested originally, the GAT is nothing but a prediction, and this approach should be as valid as the approaches of others. I look forward to the full presentation, while expressing appreciation for allowing us (me) to participate in its evolution.
        Happy New Year!

  52. “We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.”

    What hope for Africa, one fifth of the world’s land mass, which has been under such turmoil, wars, etc., over the last 50 years: truly a basket case for temperature data. The few stations that give out any data, mostly less than 50% of the time, are based around airports, in cities, or by the road.

    The WMO flags that Africa needs 9000 temperature stations.

    And they estimate to tenths of a degree!!

  53. I love how nothing changes about the surface temperature debate. “Scientifically,” land surface temperature and US surface temperature are about as close as they can get, and if there hadn’t been media hype by NOAA that 2014 was the warmest year EVAH, the stray (more likely negative) 0.05 C impact wouldn’t be meaningful.

    0.05 to 0.10 C is a pretty good ballpark estimate of the UHI or suburban heat island effect (land use), which isn’t CO2-related or completely accounted for in the NOAA land temperature product. That tiny amount is about 10% of the total warming to date, which has a value of roughly 500 billion dollars, and could change the order of policy priorities.

    This is the price paid for over-hyping and over-confidence. Since the over-hyping has also caused issues in commodity prices, which have been linked to civil unrest and death, using the EPA’s own methods for assigning a value to human life there could be around 200 billion in loss-of-life cost. Human life is considered a bit more precious than money by a few.

    So this little handbag fight is a great illustration of how a “wicked problem” gets wicked.

    • Pretty much a nail on the head statement of the issue, capt.

    • capt,

      You’re in great form on this. The endless surface temperature debate is absurd.

      Richard Lindzen wrote it off as such years ago, and he now refrains from engaging.

      Kudos to Anthony and others who go above and beyond to try to quantify some of the more egregious issues. But unfortunately it’s like tilting at windmills in the CAGW-crazed environment we live in.

      • It is a continuing process, and one which has been built on the work of others, most definitely including the NOAA. If they had not made a decision to oversample, we’d never have had enough unperturbed, compliant stations for coverage, much less for statistical significance. If they had not hugely improved their metadata, we would have far less basis to go on.

      • NOAA didn’t make a decision to oversample. Stop making stuff up.

    • Captain

      The US monthly weather review was the journal for the US weather service as it evolved. Here is a sample from January 1895

      “Monthly weather review Jan 1895 edited by Prof Cleveland Abbe
      Jan 1895 (data) based on 2762 responses from stations occupied by regular and voluntary observers classified as follows;
      162 weather bureau stations
      Numerous special river stations (162)
      32 from Army post surgeons received through the surgeon general us army
      2385 from voluntary observers (of the weather bureau?)
      96 through the southern pacific railway co
      29 from life-saving stations
      31 from Canadian stations
      10 from Mexican stations
      7 from Jamaica
      International simultaneous observations are received from a few stations and used, together with trustworthy newspaper extracts and special reports

      Jan 1899: midsummer weather was being experienced in California; midday temperatures from 70 to 80 F were observed in the great valley and southern California. At San Francisco a max temp of 78 F was registered on the 26th, the highest Jan maximum recorded during the past 27 years.

      The Richards self-regulating thermometer and the Draper were accurate to 2 degrees F.”

      tonyb

  54. When you look into the records of well-sited stations, the lack of warming is obvious, as is the effect of adjustments. My study of USHCN stations meeting the CRN#1 standard is here, with supporting Excel workbooks:

    https://rclutz.wordpress.com/2015/04/26/temperature-data-review-project-my-submission/

    • Bravo for taking the look.

    • David L. Hagen

      Ron, well done.

      It is clear that adjustments at these stations increased the trend over the last 100 years from flat to +0.68 C/Century. This was achieved by reducing the cooling mid-century and accelerating the warming prior to 1998.

      The warming is thus very strongly “anthropogenic” – due to improper “adjustments”!
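
      (For readers wanting to reproduce this kind of number, a minimal sketch of how a trend in C/century is extracted from annual anomalies by least squares. The series below is synthetic, built only to echo the +0.68 C/century figure quoted above; it is not the actual station data.)

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      years = np.arange(1915, 2015)
      # Synthetic anomalies: a 0.0068 C/yr drift plus noise (assumed values).
      anoms = 0.0068 * (years - years[0]) + rng.normal(0.0, 0.2, years.size)

      slope_per_year = np.polyfit(years, anoms, 1)[0]  # OLS slope, C/yr
      print(f"trend: {slope_per_year * 100:+.2f} C/century")  # ~ +0.68
      ```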

      • And some of them may have been proper. TOBS-bias, for one. I do not accept (nor do I reject outright) the adjustments made for that; I will look at them myself.

        But raw data won’t do. Just is.

        Maybe that means we are “just haggling over the price”. But it is a price over which we must haggle. Not even our own data is raw; it’s only as raw as it can be.

  55. It strikes me that the siting ratings are dynamic.

    At our local station, parking lots, concrete slabs, gravel covering, drainage ponds have all encroached, some within 10 meters, some within 30 meters, all within 100 meters, over the course of only two decades.

    Even getting the highest rating today doesn’t preclude the trend of degradation.

    • Yep, the general degradation would most likely be a small warming bias that should skew uncertainty to the negative side slightly. I believe digital sensors also have a small warming bias as they age. You don’t really have to fix the problem, just recognize there could be a small problem.

    • It is dynamic, but once urbanization is complete around a site, there is no additional warming trend from urban heat sources; that is, the warming is now baked in. Guess what that means? Another explanation for the plateau in temperatures this century.
      https://rclutz.wordpress.com/2015/06/22/when-is-it-warming-the-real-reason-for-the-pause/

    • David Springer

      Surface stations have NEVER been adequate for the task of detecting global average temperature trend to tenths of a degree per decade.

      You can’t make a silk purse out of a sow’s ear by adjusting the ear. You can put lipstick on a pig but it’s still a pig.

      The ONLY instrumentation we have that is adequate to the task are the globe spanning orbital microwave sounding units.

      Interestingly, the usual suspects used to point to the satellite data as confirmation of the sparse ground data and adjustments thereto, but as soon as the satellites stopped producing the data needed to confirm the globe was warming, suddenly the sparse, adjusted ground data became the gold standard.

      This is so transparent it’s sickening. Global warming is a product of ideology not science.

      • The satellites have always been the worse measure. It was the satellites that FAILED to detect the late 20th century warming and the error wasn’t discovered until the early 2000s

      • The satellites have always been the worse measure. It was the satellites that FAILED to detect the late 20th century warming and the error wasn’t discovered until the early 2000s

        All of the global temperature data sets have issues.
        But the satellite observations do have the benefits of:
        1. having the greatest coverage and
        2. having a check with another sampling means, namely the RAOB data

        Looks like this (linked figure not reproduced here):

        Note the differences of all with the model (upper left).

        And note the latitudinal similarity with the surface obs (middle left).

      • “The satellites have always been the worse measure.”

        In your dreams.

        Stop making stuff up.

      • If you roll one die, your standard error is great. If you roll many, it is much less.

        The problem is a paucity of dice and the fact that they sometimes come with different numbers of sides and we don’t even know it.

        Surface stations are a crude tool and I think they can be made more useful than they are.

        In follow-up, I will be attempting to bring in more stations with partial records, using a regional pairwise comparison. That will increase the die-roll sample and reduce the chance of Yamal occurrences. (I don’t know yet what effect it would have on the trends because I haven’t done it yet.) There are other things we might be able to do, too.

        The price is yet another adjustment. The benefit is an increased sample size.
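
        (The die-roll point is just the standard error of the mean shrinking as 1/sqrt(n). A toy demonstration, with synthetic dice standing in for stations:)

        ```python
        import numpy as np

        rng = np.random.default_rng(42)
        for n in (1, 10, 100, 1000):
            # 10,000 trials of rolling n fair six-sided dice and averaging.
            means = rng.integers(1, 7, size=(10_000, n)).mean(axis=1)
            print(f"n={n:5d}  std error of the mean = {means.std():.3f}")
        # Expect roughly 1.71 / sqrt(n), since one die has sigma ~ 1.71.
        ```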

    • That’s what we thought at first. But it is not as dynamic as I suspected. I used the GE wayback machine and it only resulted in a couple of stations being dropped. Microsite is pretty stable.

  56. Naomi Oreskes labels as denialists four self-anointed establishment climate experts who represent the views of official global warming scientists in Western academia, and only then is being called a denier something that raises the Eurocommie eyebrows of the Leftists?

    • And strangely, when I returned to the beaches I swam at forty years ago, everything, the piers, the beach houses, the roads, and dunes, all appeared in about the same place, just as I remembered them.

      • “Water is seen on part of the glacial ice sheet that covers about 80 percent of Greenland…”
        Oh no! Water! A whole puddle! There’s an actual picture! In the WaPo! (Well, we think it was a pic of Greenland. As Ed Wood would have said, never waste good stock footage. In today’s journalism you have to be ready at a moment’s notice with those stranded polar bears and photoshop-blackened smoke stacks.)

        You can’t buy a decent sea level rise around my part of the world. It’s a phenomenon that exists where there’s erosion or post-glacial rebound, maybe. (No, sillies, not the rebound that sends the sea levels down in places like Stockholm and Juneau. We don’t talk about that sort of thing in mixed company.)

        And SLR exists as a kind of imaginary friend of those who read the WaPo, Guardian and HuffPo. The Happy Few.

    • Some points to ponder:
      Atmosphere – water temperature and viscous flow interaction (drag); does melting Greenland just indicate warming of water all over with air-water vs air-ice drag?

      How much extra ocean surface area results from a mm rise? Shouldn’t the earth spin faster with a larger, smoother area provided by water and not slower?

      Consider this: the earth spins its fastest going into the deep-freeze portion of a 120,000-year cycle (less constant heating of land and seas) and then slows as the seas drop, creating a more drastic land-to-sea elevation difference as water is converted to ice. When the seas are 125 m lower than today, the slowest spin rate would allow more constant heating of each area of the earth; the ices would begin to melt and the earth would begin to speed up with greater sea area. Note that there is SIGNIFICANT lag in sea level rise relative to rewarming.

      Consider, also: the terrain difference caused by the seas dropping and the created ice domes would have a greater effect on the spin rate [more/much more (?)] than a small rise in sea level that is nearly flooding coastal areas already.

      Consider: when will the coastal sum-total elevation of land over seas be minimized? So, how does climate change relate to the coastal land-sea elevation difference? Can there be a dramatic decrease between the two over what we have now? What will a meter or two do?

      Has anyone considered the effect that the continued building of coastal skyscraper cities has on the earth’s spin drag?

      • How much extra ocean surface area results from a mm rise? …

        Hunch – could, at times, be net negative.

      • OK, JCH,
        since you think a mm rise could actually create LESS (how?) sea surface area (!), how about a meter, which is supposed to flood major global coastal cities per Hansen and his followers? What are we talking about anyway?

      • “How much extra ocean surface area results from a mm rise? Shouldn’t the earth spin faster with a larger, smoother area provided by water and not slower?”

        It is probably not surface area you need to think about. As you mentioned, fast = cold, slow = warm.

        Mass transfer between the Poles and the Equator will have that effect.

        As to your ‘smoother surface’ thought, what do you think is the source of the friction and how does it dissipate the energy?

      • David Springer

        RichardLH

        No. Smoothness has nothing to do with it. There’s no friction between the ocean and the vacuum of space.

        Slowing of spin results from mass movement from close to the center of mass towards the perimeter. The surface of the earth at the poles is near the center of mass, as it’s on the spin axis. Melting ice that sits above sea level nearer the poles ends up adding more mass nearer the equator as the now-liquid water distributes itself across the globe at sea level.

        The equator is farthest from the center of mass. The effect is exactly like a spinning ice skater moving their arms outward to slow their spin.
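
        (The skater analogy can be put into rough numbers. A back-of-envelope sketch assuming ~300 Gt of polar melt, about one year of Greenland-scale loss, and textbook Earth constants; none of these figures come from the thread.)

        ```python
        I_EARTH = 8.0e37   # Earth's moment of inertia, kg m^2 (approximate)
        R = 6.371e6        # Earth radius, m
        M_MELT = 3.0e14    # ~300 Gt of melted ice, kg (assumed)

        # Mass at the pole sits on the spin axis and contributes ~0 to I;
        # spread over the ocean it contributes like a thin spherical shell.
        dI = (2.0 / 3.0) * M_MELT * R**2

        # Conservation of angular momentum (L = I*omega): dLOD/LOD = dI/I.
        LOD = 86400.0
        d_lod = LOD * dI / I_EARTH
        print(f"day lengthens by ~{d_lod * 1e6:.0f} microseconds")  # order 10 us
        ```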

    • Lol – all indications are, they’re by the best sea level group. The rest are just standing aside and letting them take the top of the mountain.

    • “more evidence sea level rise is accelerating”
      This statement according to the urban dictionary, is best described by a word meaning, “communicating through your *ss”.

      http://advances.sciencemag.org/content/1/11/e1500679
      “A recent probabilistic analysis of a global database of tide gauge records (22) has estimated a GMSL rise of 1.2 ± 0.2 mm/year in 1900–1990. “

      http://www.nature.com/nature/journal/v517/n7535/full/nature14093.html
      “Here we revisit estimates of twentieth-century GMSL rise using probabilistic techniques9, 10 and find a rate of GMSL rise from 1901 to 1990 of 1.2 ± 0.2 millimetres per year (90% confidence interval)…
      also indicates that GMSL rose at a rate of 3.0 ± 0.7 millimetres per year between 1993 and 2010, consistent with prior estimates from tide gauge records”

      A number of papers from basically the same gang claim that the far low end of sea-rise estimates matches up with the observed rotational change of the earth, so that global warmers et al. aren’t completely lying.

      They then go on to say that sea level rise is going crazy and has effectively increased two-and-a-half-fold (from 1.2 to 3.0 mm/year). I would like to meet their pharmaceutical supplier, because what they are on has to be pretty good.

      http://www.slate.com/blogs/future_tense/2012/06/26/age_of_miracles_by_karen_thompson_walker_could_the_earth_s_rotation_really_slow_down_.html
      The earth’s rotation is slowing 17 milliseconds per day per century, or about 0.062 seconds per year, due to drag from various things. Anything above that is due to a geoid/moment-of-inertia change (for any object the size of the earth it is more of a year of inertia, or millennium of inertia). That change would presumably be due to a global-warming-driven sea level rise/ice cap melt.

      The period of 2003-2004, from the rotational change, had zero sea level rise. Yet we are supposed to believe the sea level rise is accelerating? Really???
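
      (The unit bookkeeping behind such lag figures is simple: an excess in the length of day accumulates as clock lag over the year. The 1.7 ms/day excess below is an assumed illustrative value, not a figure endorsed by, or reconciling, the numbers above.)

      ```python
      excess_ms_per_day = 1.7                            # assumed for illustration
      lag_per_year = excess_ms_per_day * 1e-3 * 365.25   # seconds of lag per year
      print(f"accumulated clock lag: {lag_per_year:.2f} s/year")  # ~0.62
      ```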

      • PA: “Yet we are supposed to believe the sea level rise is accelerating? Really???”

        It’s an article of faith in the Warmunist religion.

        Question it at your own risk.

      • I will take the catastrophic forecasts seriously when someone can discredit Houston and Dean, who found no acceleration in SLR after analyzing US Tidal Gauge records. Or when they can tell me why the trends in the NOAA record should be ignored for LA (.88mm/yr) or Honolulu (1.41mm/yr), or my favorite Sydney with a continuous record since 1886 of only .65mm/yr. Let the burden be on them to show why each of these records should be dismissed. Color me unimpressed about the hype of runaway sea level rise.

        At some point common sense and science have to intersect. When one looks at the above facts, or reads the peer-reviewed papers finding geothermal activity under Greenland and the West Antarctic Peninsula, which could be a cause of glacial instability and increased melting, then putting on a Columbo trench coat and doing a little detective work seems in order.

      • cerescokid

        There is a communal activity that goes by the acronym “CJ”. This is what the warming forces are engaged in.

        They are convinced it is warming and the ice is melting. The models say it is warmer than it is, so they find reasons to adjust the temperatures up and ignore reasons to adjust them down. They then look at the ice melting and go “oh my gosh” and generate large ice-melt figures. Since the sea is getting warmer, the sea level must be getting higher, so there is an accelerating sea level rise. And once they have adjusted everything, the adjusted figures are still short of the models and still don’t quite match each other, so they find new reasons to adjust them up again.

        It sort of is what it is. If the sea level really isn’t rising much (and from rotational data it doesn’t look like it is rising more than 0.6-0.8 mm/y currently), then the whole house of cards falls down, and this adjustment thing and the climate records generated by scientists become a joke.

      • cerescokid:

        I wouldn’t trust sea level readings in Los Angeles (adjusted or otherwise) since the entire coast is moving a lot faster than sea level.

        From Wikipedia:

        The Pacific Plate, to the west of the [San Andreas] fault, is moving in a northwest direction while the North American Plate to the east is moving toward the southwest, but relatively southeast under the influence of plate tectonics. The rate of slippage averages about 33 to 37 millimeters (1.3 to 1.5 in) a year across California.

  57. Pingback: Classical Values » Site, Cite, and Oversight

  58. The big problem with satellite sea elevation data is that big thing in the sky, the Moon. The satellites get pushed around, up and down, this way and that as they circle the Earth. The sea bounces up and down and back and forth. Of course, it is all carefully accounted for, but, given the rate of rise, even small errors will matter.

  59. Looks like the story on the ground is that the idea that modernity is heating the globe (AGW theory) is a lot of hot air and obviously more like political science than natural science.

  60. Dr Curry,

    Prof. Morel, a French climatologist and former head of LMD (Laboratoire de météorologie dynamique, the Dynamic Meteorology Laboratory), used to say that two thirds of the temperature anomaly actually resulted from data correction, and not from direct measurement.

    Here, the Watts et al. paper brings convincing evidence that the data correction and averaging processes introduce a significant warming bias, increasing the warming trend by about 50% (if well-sited trends are two thirds of the official ones, the official trend is 1.5 times theirs), which questions the validity and credibility of those processes.

    You have published on Climate Etc. many posts raising the issue of the growing discrepancies between series based on satellite measurements (RSS and UAH) and series based on station measurements (HADCRUT, GISTEMP…), which also questions the validity and credibility of the correction and averaging processes used to produce the station-based series.

    But you have also contributed to the Berkeley Earth Surface Temperature project, which implements its own averaging and correction processes, yet still produces significant discrepancies with the satellite data.

    So, if I may ask a question: what is, in the light of those growing discrepancies and of the Watts et al. findings, your final view on the validity and credibility of data correction and averaging processes in general, and more especially of those used for BEST?

  61. Pingback: Weekly Climate and Energy News Roundup #210 | Watts Up With That?

  62. This merits a separate post.

    But Mr. Jones, you showed yourself that the raw data in the USA has a cooling bias. Thus when this bias is removed the trend becomes larger.

    TOBS is a strong cooling bias. But, OTOH, Microsite is an effect of similar scale and scope.

    So the effects cancel each other out to within ~0.015C (with Microsite bias being the slightly larger effect).

    In your “raw” data, the “unperturbed” subset has a trend in the mean temperature of 0.204°C per decade. In the “perturbed” subset the trend is only 0.126°C per decade. That is a whopping difference of 0.2°C over this period. This confirms that in the USA the inhomogeneities (“perturbations”) cause a cooling bias.

    That is mostly (if not entirely) the result of TOBS bias. OTOH, our raw unperturbed Class 1\2 stations show 0.204C/d while our unperturbed Class 3\4\5s show a trend of 0.319C/d. That bias is even a little larger than TOBS-bias (0.115C/d vs. 0.078C/d).
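
    (The arithmetic behind those figures, using only the trends quoted in this exchange, in C/decade:)

    ```python
    unperturbed_mean = 0.204   # "unperturbed" subset trend, as quoted
    perturbed_mean = 0.126     # "perturbed" subset trend, as quoted
    class12 = 0.204            # unperturbed Class 1\2 trend
    class345 = 0.319           # unperturbed Class 3\4\5 trend

    tobs_bias = unperturbed_mean - perturbed_mean  # 0.078 C/decade
    microsite_bias = class345 - class12            # 0.115 C/decade
    print(f"TOBS {tobs_bias:.3f}  microsite {microsite_bias:.3f}")
    print(f"over 30 years: {tobs_bias * 3:.2f} C")  # ~0.23, the '0.2 C' above
    ```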

    You are not seriously arguing that you showed homogenization to be wrong without studying how homogenization methods work, but only on the basis of two numbers looking similar?

    C’mon, my dear Baron. It sticks out a mile. Bottom line. We both know what is basically going on and how. We both know that this result is precisely what happens when homog is applied to a dataset with a systematic bias.

    Besides, you don’t like it when I infer the obvious? Well, homogenization is inherently nothing but inference. You have no trouble with that.

    So I say, why should you have all the inferred fun? I’ll have some, myself. You need to explain why those “two numbers” are similar. Infer that.

    When we communicated elsewhere, you were putting your chips on data jumps, suggesting the Microsite-compliant stations would show aggregate jumps that would mismatch with the Microsite-noncompliant stations. Well, you’ve seen the graph, and you know that isn’t so. We have a gradual divergence, just as I said.

    So the ball is in your court.

    • > You need to explain why those “two numbers” are similar.

      Why?

      • > Hmm. Let’s see …

        Clicking on the correct “reply” helps, otherwise Judy’s comment threads become quite raw.

        ***

        > Because they are a stereotypical result of homogenization applied to a dataset containing a systematic error? Because there is an identified systematic error evident?

        It doesn’t answer the question as to why “VeeV” would need to explain why the two numbers are similar.

        I doubt NG got you covered for that kind of rhetorical question, Evan.

      • It doesn’t answer the question as to why “VeeV” would need to explain why the two numbers are similar.

        Are you serious?

      • > Are you serious?

        As much as this science-by-press release can be but contingent on how long you’ll continue arguing by questions and how much your teases towards VeeV are covered by NG’s work, Evan.

        Should I speak of open hostility instead of tease?

      • What on earth makes you think I am in any way hostile to VeeV? He has been of assistance to me and I am grateful to him. I would like it even better if he were to address this issue using his own homogenization methods.

        P.S., It’s nice being covered by J N-G. At this very moment he is re-checking our metadata list. We may have to drop one of our Class 4s. Since minor TOBS shifts occur, he may want to include a TOBS-adjusted version for our unperturbed set. Nothing feeds the bulldog like a little good review.

      • > What on earth makes you think I am in any way hostile to VeeV?

        Are you serious?

        Here’s your lead author of this science-by-press-release episode:

        Even input from openly hostile professional people, such as Victor Venema, have been highly useful, and I thank him for it.

        http://variable-variability.blogspot.com/2015/12/anthony-watts-agu2015-surface-stations.html

        So you go first: tell me what on earth makes your lead author declare that VeeV was hostile, and I’ll see what I’ll respond to your question.

        I’m not bluffing, by the way. Your question can easily get answered, but I’d rather have your team’s criteria for hostility first.

        Don’t worry. NG won’t need to cover my response.

      • David Springer

        Willard is a troll.

        Don’t feed the trolls.

      • Merry Christmas, Willard. I see you are still too busy to waste your precious life duking it out with Don Don. Then you go and give Don Don a warmist he might like. Dr. Denning does a nice job versus Dr. Spencer. I am surprised that a warmunist can have hare-brained notions that the free market will solve the problem; Denning sounds like a Heartland libertarian freak in sheep’s clothing.

        Below is a nice summary of the climate future and mitigation options outlined by Dr. Denning.

        http://www.cmmap.org/scienceEd/summercourse/summerCourse12/docs/12.SatPM.Mitigation.pdf

      • Good to note that someone from the warmista enclave
        dares to enter into debate with the discredited ‘other’ ‘n
        even appreciate the free market.

        http://wattsupwiththat.com/2010/05/20/a-warmist-scientist-embraces-the-heartland-conference/

        Open science debate and free markets ever
        addressed problems via feed-back loops …
        real world consequences of evolutionary nature … oops,
        survival rules. Reality tests, like engineers’ Hammurabi
        consequences, are so different from hallowed-halls politics.

    • David Springer

      https://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html#QUAL

      Above is a description from the horse’s mouth of all USHCN adjustments. The magnitude of each is shown in the lovely graph (not reproduced here) which I’ve linked to many times over the years.

      This is from the days when metadata alone was used to detect perturbations. Detecting perturbations via difference analysis between neighboring stations was under development at the time and is mentioned in the text.

      I love the graph because it shows where and when and how much each individual method of torturing pencil whipping cleaning up the data accomplishes. Note that two methods alone account for nearly all the warming trend in the entire record – TOBS (warranted) and SHAP (probably not warranted).

      • Springer never even looked at SHAP.

        Tell us, Dave: which SHAP procedures were most problematic?

        Answer: you don’t know, because you never looked at the code.

      • David Springer

        http://www.homogenisation.org/files/private/WG1/Bibliography/Method_Description/Climate/karl_etal.pdf

        The underlying assumption that neighboring stations have mostly the same weather is irreparably flawed.

        Thanks for asking.

        It was also from the days when a record was mainly compared to itself. It was not extrapolated/interpolated to try and estimate the temperature field of which the ground (and balloon) thermometers are point samples.

        Volume samplers (aka satellites) have different ‘rules’ but do not suffer from the infilling problem that ground stations have. They have all the data (or as much as we can currently capture). The problem is making sure that the data is aligned to temperature correctly, which the balloon set seems to show is true.

      • David Springer

        @Mosher

        I know something is broken. I don’t need to isolate the cause in order to reach that conclusion.

        Example. My car won’t start. That’s an observation. A fact. It would be nice to know why but the fact remains that it won’t start despite not knowing why it won’t start.

        Adjustments to the entire temperature record add a warming trend. Stations singled out in a group that don’t need any adjustments don’t show the same warming trend. Something therefore isn’t working right in the adjustment process. That’s an observation. A fact. It would be nice to know what exactly is broken but I don’t have the time or expertise to figure out what’s wrong. My ignorance of the cause of the failure doesn’t change the fact that the adjustment process doesn’t work. I can make suggestions based on a casual investigation. I think the root cause is exactly what Watts et al are claiming – poorly sited stations are in the majority and they then become the “trusted” stations used to correct the minority of well sited stations.

        But again for my purposes I don’t need to know why it doesn’t work. Sometimes you just scrap a car that won’t start and drive something else instead.

        Write that down.

      • ” I think the root cause is exactly what Watts et al are claiming – poorly sited stations are in the majority and they then become the “trusted” stations used to correct the minority of well sited stations.”

        I suspect the whole interpolation/extrapolation exercise myself. The temperature field that is being sampled is most definitely not the smooth curve between stations that BEST and the rest model.

        It is a quasi-chaotic pattern between sample points and not a simple curve. Weather modified.

        Satellites sample it all. They are, therefore, more likely to be accurate as to the contents of the whole field in any one area. YMMV.
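
        (As a concrete illustration of the kind of between-station infilling being criticized, here is a minimal inverse-distance-weighting sketch. The coordinates and values are toy assumptions, and none of the groups named above necessarily use this exact scheme; it only shows how a smooth surface gets drawn between point samples.)

        ```python
        import numpy as np

        stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # lon/lat, toy
        temps = np.array([14.2, 15.1, 13.7])                       # station means, C

        def idw(point, power=2.0):
            """Inverse-distance-weighted estimate at an unsampled point."""
            d = np.linalg.norm(stations - point, axis=1)
            if d.min() < 1e-9:              # exactly on a station
                return float(temps[d.argmin()])
            w = 1.0 / d**power
            return float((w * temps).sum() / w.sum())

        print(f"infilled value at (0.5, 0.5): {idw(np.array([0.5, 0.5])):.2f} C")
        ```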

      • David Springer

        RichardLH

        No, it was not from the days when a station was compared to itself. Read the description in the top link.

      • David Springer

        RichardLH

        There is no infilling of missing regions in CONUS. The problem is too many stations, not too few, where the majority of that overabundance is poorly sited.

        Your point however certainly applies to the global surface reconstruction which, outside of Western Europe and CONUS, is so lacking in coverage it’s laughable.

      • David Springer

        Adjustments to the entire temperature record add a warming trend.

        Whilst true (for well understood reasons) in the US, this is unequivocally false for global temperature. The opposite is in fact true; adjustments to the entire temperature record add a cooling trend.

        See for instance

        http://variable-variability.blogspot.co.uk/2015/02/homogenization-adjustments-reduce-global-warming.html

        and the extensive references therein.

      • “There is no infilling of missing regions in CONUS.”

        As there are demonstrably no thermometers in each farmer’s field between stations, that is obviously not true when it comes to estimating the temperature field between stations.

      • And thank you for tightening up my wording.

        “Global Temperature is a 3D Temperature Field. It is discretely sampled by both point (thermometer) and volume (satellite) instruments with varying methodologies, time windows, area coverage and data sampling lengths.”

      • David Springer

        verystallguy

        Yeah right. The U.S. surface station network is the worst in the world.

        Oh wait…

      • David Springer,

        regardless, the net effect of all adjustments is to reduce warming globally, yes?

      • David Springer

        RichardLH

        Don’t be absurd. Temperature isn’t infilled to each farmer’s field. You should probably stop making things up and start checking what you write against some other source.

        https://www.ncdc.noaa.gov/oa/climate/research/ushcn/gridbox.html

        In creating USHCN time series for the analysis of U.S. mean temperatures, two widely accepted grid box sizes (5° x 5° and 2.5° x 2.5°), have been used.

        There are no CONUS grid boxes without a surface station inside it.

        Write that down.

      • Regardless of whether pairwise is all it’s cracked up to be, they are doing it wrong.

      • David Springer

        verystallguy

        USHCN adjustments warm the recent past.

        GHCN adjustments cool the distant past.

        Net effect is a greater warming trend over the entire record.

      • There are no CONUS grid boxes without a surface station inside it.

        5-degree squares are roughly 345-mile squares, or 119,025 square miles.

        2.5-degree squares are roughly 172.5-mile squares, or about 29,756 square miles.

        Yeah, there is probably one station or more in each box. The problem is what they do with more than one station. If there is a “pristine” station, that should be the value for the square. The contaminated data from other rural and urban stations should just be ignored.
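
        (The geometry behind those numbers, with the cosine-latitude shrink that makes the boxes true squares only at the equator. The 69 miles per degree comes from the comment’s own arithmetic; 40 N is an assumed example latitude.)

        ```python
        import math

        MILES_PER_DEG = 69.0  # ~miles per degree of latitude

        def box_area_sq_miles(deg, lat_deg):
            ns = deg * MILES_PER_DEG                                    # north-south
            ew = deg * MILES_PER_DEG * math.cos(math.radians(lat_deg))  # east-west
            return ns * ew

        for deg in (5.0, 2.5):
            print(f"{deg} deg box at 40N: ~{box_area_sq_miles(deg, 40.0):,.0f} sq mi")
        # At the equator a 5-deg box is 345 x 345 mi = 119,025 sq mi, as quoted.
        ```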

      • David Springer

        verystallguy

        USHCN adjustments warm the recent past.

        GHCN adjustments cool the distant past.

        Net effect is a greater warming trend over the entire record.

        —————————————-

        Addendum: I suspect you tried to change the subject from land surface station siting to land/ocean temperature reconstruction. Adjustments increase the warming trend on all land stations i.e. both USHCN and GHCN. Even as we speak, however, SST adjustments have been pencil-whipped to erase “the hiatus” so that too is subject to torture when needed to sex up the warming trend.

    • W. appears to think that disagreement and hostile review necessarily equate to personal hostility. (I expect he’d even like to stir up some.)

      There is a history of bad blood under the bridge between VeeV and the Rev, something I have been trying to patch over. But I have no hard feelings whatever towards VeeV.

      We have burned through at least a forum and a half discussing homogenization. He has been civil and helpful and willing to discuss. I have nothing but friendly feelings towards him. I am very grateful for his time and efforts to assist my understanding in these matters.

      But don’t worry. In my travels there have always been a few who would like me to be more hostile to those towards whom I feel no hostility. They will try to make me feel hostile towards Anthony, and try to identify hostility between me and the likes of Dr. Connolley and VeeV where none exists.

      This is just another example. I’ll add it to the list.

      • > W. appears to think that disagreement and hostile review necessarily equates to personal hostility.

        Besides having difficulty clicking on the proper “Reply” button, Evan appears to have injected “personal” into this exchange (and into my mind) without justification.

        The point is quite easy. If VeeV’s criticisms can be characterized as hostile towards Willard Tony & Evan’s work, we might need to characterize Willard Tony & Evan’s work toward the established viewpoint as being hostile too.

        ***

        > (I expect he’d even like to stir up some.)

        I hope Evan’s joking, for this implies that when he teases VeeV, it’s to stir up some personal hostility.

      • I don’t think that W. gets it that hostile criticism need not in any way translate into personal hostility. And I do think that VeeV gets it.

      • Not only do I get that hostile criticism need not in any way translate into personal hostility, but Evan’s the one who injected the word “personal” into that thread, for mind-probing effect to boot.

        If VeeV made hostile criticism, it should go without saying that Evan’s into hostile criticism too, and that Willard Tony’s whataboutism is at best hostile criticism.

      • I am not sure that I understand. I have savaged his baby and he has tried to poke holes in mine. All without personal animosity.

        You say you understand this.

        The rest of what you say is pretty much tautology.

        P.S.: My best hostile reviewer is J N-G. And?

      • Notice the evolution: hostile criticism, personal hostility, personal animosity. All this without personal animosity, no doubt. So why inject “personal animosity” in this exchange?

        Also notice this evolution: from burdening VeeV to explain the number similarity, it is now framed as a request. It would be nice indeed if VeeV would take a look. I don’t think NG covers for burdening others with commitments they don’t have.

        ***

        I don’t think VeeV quoted Willard Tony’s “hostility” as a token of appreciation, Evan. Try as hard as you might to minimize this usage, you can’t special-plead your way out of the consequences of this choice. An immediate consequence is that your work amounts to hostile criticism. Another consequence is that Willard Tony’s is a place of hostile criticism.

        You can call this pleading homogenization if you will.

        The alternative would have been to disown Willard Tony’s “hostile” remark. It’s too late now. Nothing personal.

        Thanks for playing.

      • I think perhaps you are the one who is injecting things. #B^)

      • Willard eschews the meat at the table, preferring instead to nitpick the soup bones.

      • I think he’d be maybe a little pleased to see me in a personal row with VeeV, or better yet, take a piece out of Anthony. But he can’t affect me in this regard.

        Well, can’t blame him for trying.

      • David Springer

        Don’t feed the trolls.

      • > can’t blame him for trying.

        Evan’s mind probing is duly noted.

        So far, I’ve been trying to make Evan justify why VeeV would be committed to explain the similarity between two numbers and play along his whataboutism, and to commit himself to Willard Tony’s concept of “hostility.” Evan backtracked on the first and doubled down on the second while acknowledging his “personal row” with VeeV.

        Two out of three ain’t that bad if we consider that the third objective is a gift that may keep on giving. For instance, if we recall the O’Donnell Affair, NG might not cover him for that doubling down. Nor does Willard Tony always, incidentally.

      • Niggle on Willard, niggle on.

      • So far, I’ve been trying to make Evan justify why VeeV would be committed to explain the similarity between two numbers

        Okay, I’ll try typing slowly this time.

        The similarity of the two numbers is a scathing challenge to homogenization as currently practiced. It is a stereotypical fingerprint of systematic error.

        VeeV is chief boffin of homog. Therefore an explanation is called for. (Yelled for?)

        He has, of course, the right to remain silent. But I would rather he examine the problem. But if he doesn’t, that’s okay — we will. It is not as if there was never a possibility that a systematic error was unaccounted for. And here it is. So I say he should account for it. And if we are not right, he should find out what else is the matter. Because something is surely the matter here. He’s the expert.

        Our findings do not invalidate homogenization as a basic approach. Accounting for this systematic error would improve it. It is a seductive tool, after all.

      • > Okay, I’ll try typing slowly this time.

        Glad you do, although I doubt there was a previous time where you were typing at all. Your “we both know” trick bypassed all the typing you needed to do to make a clear claim about the number.

        Now you do, so I thank you.

        Committing to a claim matters because it’s now clear who has the onus to substantiate it. It also shows why VeeV’s question is quite legitimate:

        You are not seriously arguing that you showed homogenization to be wrong without studying how homogenization methods work, but only on the basis of two numbers looking similar?

        Shifting the burden onto VeeV was more than suboptimal, Evan. It’s not the kind of move that is allowed in the game scientists play among themselves. Don’t forget that science is more of a race than a boxing match. Dirty tricks only slow you down.

        Think about what Napoléon said of his foot soldiers.


      • evanmjones: “The similarity of the two numbers is a scathing challenge to homogenization as currently practiced. It is a stereotypical fingerprint of systematic error.

        Sorry, I fail to see any challenge whatsoever.

        evanmjones: “VeeV is chief boffin of homog. Therefore and explanation is called for. (Yelled for?)

        I am not in the real world. Possibly I am within the blog-o-sphere. Whatever the case, it is completely irrelevant. Just as stupid as Pielke Sr trying to give Gavin Schmidt homework. In science you put your article in the pool of ideas (the Literachur as Willard would say) and someone who finds it interesting may continue working on it.

        Science is not a hierarchical right-wing think tank.

        evanmjones: “He has, of course, the right to remain silent. But I would rather he examine the problem.

        The right to remain silent. Interesting term.

        If you wanted a scientific reply you would provide the manuscript and the data. Without that it is possible to point to some inconsistencies, but very difficult to make constructive suggestions as to what the exact causes of the cooling inhomogeneities in your non-raw “raw” dataset are.

      • Victor Venema “Science is not a hierarchical right-wing think tank.”

        Correct.

        Unfortunately climate “science” is very clearly a Progressive Post-Normal think tank.

        Here’s Mike Hulme ( http://www.mikehulme.org/category/bio-and-cv/ ) on the subject:

        ….
        This is the wrong question to ask of science. Self-evidently dangerous climate change will not emerge from a normal scientific process of truth seeking, although science will gain some insights into the question if it recognises the socially contingent dimensions of a post-normal science. But to proffer such insights, scientists – and politicians – must trade (normal) truth for influence. If scientists want to remain listened to, to bear influence on policy, they must recognise the social limits of their truth seeking and reveal fully the values and beliefs they bring to their scientific activity.

        Chink of weakness

        Lack of such reflective transparency is the problem with “unstoppable global warming”, and with some other scientific commentators on climate change. Such a perspective also opens a chink of weakness in the authority of the latest IPCC science findings.

        What matters about climate change is not whether we can predict the future with some desired level of certainty and accuracy; it is whether we have sufficient foresight, supported by wisdom, to allow our perspective about the future, and our responsibility for it, to be altered. All of us alive today have a stake in the future, and so we should all play a role in generating sufficient, inclusive and imposing knowledge about the future. Climate change is too important to be left to scientists – least of all the normal ones.

        http://www.theguardian.com/society/2007/mar/14/scienceofclimatechange.climatechange

      • David Springer

        “Science is not a hierarchical right-wing think tank.”

        Correct. The climategate emails have pegged the global warming subset of science as a hierarchical left-wing think tank.

      • Shifting the burden on VeeV was more than suboptimal, Evan. It’s not the kind of move that is not allowed in the game scientists play among themselves. Don’t forget that science is more of a race than a boxing match. Dirty tricks only slow you down.

        The ball is in his court. It is a burden. What is, is.

        Don’t forget that science is more of a race than a boxing match.

        I never did. I also never forgot the racers always seem to be wearing boxing gloves.

        Dirty tricks only slow you down.

        Dirty tricks means hitting below the belt.

        He is the expert. He is better equipped for it. He knows homog inside and out, so he can adapt it faster and better than anyone else. With great expertise come great expectations. And other burdens. So I encourage him to do that. I think it would help him in his race to improve the HCN.

        He is under no obligation, whatever. He has not yet expressed any particular desire to do so. If he doesn’t go for it, we’ll give it a go, ourselves, or someone else will. But the ball is in his court, if he only takes the opportunity to hit it.

      • Science is not a hierarchical right-wing think tank

        Why, no, it isn’t. Let us endeavor to see that remains so.

        There was one thing you said in our sojourn over at the HotL W that hit me, and I had a hard time taking it in. You casually remarked that you needed to get your Ph.D. so you would be allowed to continue research.

        Now I know you didn’t mean that literally. But it leaves a bit of a fingerprint of implicit hierarchy. We live a looser existence in the US. We see non-science degreed people getting into the climate peer-review game. Up from the ranks. There is a severe filter, but the ones who pass the ‘skins often have something to say.

        Climatology is a very diverse, chaotic field, and there’s a lot of intellectual elbow room going on. No need to worry about straining the sink capacity. (Besides, like W. the peacemaker says, we got J N-G for cover.)

      • The ball is in his court. It is a burden. What is, is.

        No, it’s not. It’s in your court. Not every study has to have some kind of response. It is entirely reasonable, in some cases, to simply ignore what others have done. Your study will only get some kind of response if you make it something worth responding to. You don’t get to decide this.

      • They don’t even pretend to be open-minded. The rabid debunker goons of the 97% consensus are anticipating your study with evil intent, evan. These clowns have been in charge of climate science for decades and from Kyoto to Copenhagen to the Le Grand Farce d’ Paris, they have failed. If the world needs saving, somebody else is going to have to do it.

      • They don’t even pretend to be open-minded. The rabid debunker goons of the 97% consensus are anticipating your study with evil intent, evan. These clowns have been in charge of climate science for decades and from Kyoto to Copenhagen to the Le Grand Farce d’ Paris, they have failed.

        Now if that doesn’t sound like a conspiracy, I don’t know what does, Don. I thought they were just a bunch of incompetents who can’t persuade the public. Now you are bringing accusations of evil intent.


      • > The ball is in his court. It is a burden. What is, is.

        Don’t forget to complete Bishop Butler’s dictum, dear Evan: everything is what it is, and not another thing. Unless you were appealing to Parmenides?

        You now made a claim. It is your burden to substantiate it, not VeeV’s. The many appeals you made to pull VeeV in are what they are: cheap ways to reverse that burden of proof.

        Playing whataboutist games is not justified because your co-author has a website with a name that consecrates whataboutism.

        ***

        > Dirty tricks means hitting below the belt.

        Insinuating one’s claim with “we both know” was one. Reversing one’s burden of proof is another one. Playing “The Fox and the Crow” by appealing to his expertise is a recent one. And that’s notwithstanding the dirty tricks thrown in my direction.

        ***

        > I also never forgot the racers always seem to be wearing boxing gloves.

        And if we’re to accept your “hostility” doctrine, it’s even a good thing. The main problem is when you stop racing. Which is indeed what you do when you burden VeeV with your own homework.

      • Don’t worry about Don Don, Joseph. The spirit of Christmas got hold of him yesterday when he revealed his most burning wish to join the cause. If only he could find someone to convince him!

        Christmas now being a thing of the past, and the spiked egg nog being digested, Don Don can now return to his usual protection program.

        There’s still something prophetic to his actual concerns, prefacing more solemnity in a few days.

      • Evan Jones: “He is the expert. He is better equipped for it. He knows homog inside and out, so he can adapt it faster and better than anyone else.

        I am an expert on homogenization methods. I am not an expert on all the instrumental changes in the observational methods in a country across the pond.

        This is your study, this is your paper. You are supposed to be the expert. Otherwise you are doing something terribly wrong.

        Evan Jones: “There was one thing you said in our sojourn over at the HotL W that hit me, and I had a hard time taking it in. You casually remarked that you needed to get your Ph.D. so you would be allowed to continue research. Now I know you didn’t mean that literally. But it leaves a bit of a fingerprint of implicit hierarchy.

        This is a complete reversal of what actually happened. It may be how you read it, but that is your inferiority complex and not what I wrote. I tried to explain that this PhD is not important to me, but that I needed to get one to become a scientist, the profession I love.

        How am I elitist when I ask you not to refer to me as Dr. Venema and to treat me like any other human being?

        http://wattsupwiththat.com/2014/06/28/problems-with-the-scalpel-method/#comment-1672822

        That is just like how you need to go to high school to be allowed to go to college. Or like how you need a driver’s license to drive a car. Like how you need to be an adult to drink, and so on. Those are the rules; I cannot change them. In the past there were people who finished university and immediately became scientists. That no longer happens, at least in Europe; nowadays you need a PhD.

        But I would rather discuss your unicorn: why does your “trend exaggeration” not work for 30-year trends?

      • > Besides, like W. the peacemaker says, we got J N-G for cover.

        That’s not exactly what I said, Evan. What I said was that NG got us covered. That “we” may not coincide with that “us.”

        Also note that NG covers for the stat part, a part that is missing from your “science by press release.” NG may not cover for your “hostility” doctrine, your fingerprint claim, or your burden of proof reversal.

        It’s nice to see that you agree with bender (ask Mosphit about that one), but do not suppose I am bringing peace. Even yesterday’s jubilee did not.

      • You 97% alarmist clowns need to do some self-auditing, willito. Try to understand why your efforts have only resulted in serial failure.

      • Here’s one of these clowns, Don Don:

        Regarding CG I, II, and III, NG got us covered.

      • Joseph: “Now you are bringing accusations of evil intent.”

        Some very clearly are motivated by evil intent.

        The late unlamented (except by the Warmunists) George Soros, for example.

      • Well there you go, willito. A fine example of a clown climate scientist circling the wagons by minimizing and making lame excuses for the underhanded and unprofessional behavior of his goon colleagues. You are sharp today, willito.

      • > A fine example of a clown climate scientist […]

        The very same clown who got us covered in the Willard Tony & Evan’s science by press release episode, Don Don.

        How many of the 3% that remains are not clowns, according to you?

      • The late unlamented (except by the Warmunists) George Soros, for example.

        Late? Far as Google knows he’s still alive.

      • I know which clown he is, willito. Doesn’t mean he can’t ultimately redeem himself. That’s what the self-auditing is for. Try it. In the meantime, send in some more clowns. We are amused.

      • Your concept of “self-auditing” might deserve due diligence, Don Don:

        Under some systems of communism, party members who had fallen out of favour with the nomenklatura were sometimes forced to undergo “self-criticism” sessions, producing either written or verbal statements detailing their ideological errors and affirming their renewed belief in the Party line.

        https://en.wikipedia.org/wiki/Self-criticism

        I knew you were a socialite, Don Don, but a closet socialist?

        That would explain the hate.

      • You are deteriorating, willito. I tried to help you.

      • You’re trying to help me by expressing your concerns about your plan to continue to spout brutal nonsense because there’s nobody in the world to convince you of anything, Don Don?

        It’s like drinking, you know. Nobody’s making you do it. You alone can save yourself.

      • Poor, willito. You are straying off topic and descending into blithering incoherence, again.

        This is serious:

        I am not convinced that ACO2 is dangerous and I am not convinced that it is not. Around 7 billion people in the world live somewhere in the neighborhood of the unconvinced. Not losing sleep over global warming. More worried over a lot of other stuff.

        The burden of proof is on those who want us to give up the fossil fuels that power modern civilization. The 7 billion deniers are as unconvinced as ever. Popular support and consequently the political will for effective mitigation measures is not sufficient to get anything better than the Paris agreement to not do anything binding and meaningful. See Hansen for a realistic assessment of the latest failure, in a long line of failures.

        Strategies that result in serial failure should be re-examined. People who habitually fail might benefit from some serious introspection. Playing whack-a-mole with the likes of Judith Curry, Tony Watts and Evan Jones is not going to help you all get meaningful mitigation.

        You, veevee and the other alarmist activistas should confine your attempts to ridicule and marginalize skeptics to the alarmist echo chamber blogs that are rarely visited by other than your compadre true believers. Your behavior is causing the unconvinced to be less likely to listen to the legitimate arguments based on honest science. Say hi to your boss kenny.

      • > This is serious: […]

        No, you’re not, Don Don. Nobody’s here to convince you of anything. Looking at your last thousand comments at Judy’s, it’s quite clear you’re not even here to be convinced of anything. It’s been how many years now, and you just rediscovered concern trolling.

        Come on.

        You’re not representing anyone but your sorry self. You’ve already marginalized yourself by entertaining irrelevant, incoherent, and almost irrational beliefs about something you know zilch about just because the risks surrounding AGW conflict with your Far-Western vision of the economic world.

        If you really think that the stake of coming here and having to deal with you is whether we’ll mitigate or not, you’re just so completely full of yourself that it is no wonder you get so ferocious in your roles of freedom fighter and of Judy’s don. Mitigation’s happening, and will happen whether you like it or not. As Scott Denning once said to the Heartland libertarian freaks, right-wing ideologues won’t be able to pout for long, or decisions will be taken without them.

        Here’s Scott Denning reflecting on his experience:

        My communication objectives are more modest than that, and they’re none of your concern.

      • I struck some poor chump’s last nerve. That’s quite the angst-ridden tirade, willito. Complete with a video of some obscure clown yammering about something that we can safely assume is not worth the watching.

        Another poor effort from you, willito. But that’s what we have come to expect from an old Emeritus Professor of Impractical Left-Wing Utopian Esoterica who used to be somebody at some exclusive little liberal arts college attended by the unemployable offspring of rich-guilty-white-liberals.

        Playing whack-a-mole is not a winning strategy, prof. willito. If you alarmist activista clowns are going to save the world, you need to get a new game plan. Try:

        Open and honest science.
        Open and honest debate.

        That’s all the time I have for you, until next year.

        Happy New Year!

      • Here’s a debate between Roy Spencer and whom you call an “obscure clown,” Don Don:

        As always, thank you for your concerns.

      • David Springer

        Willard is an obscure clown. No argument there.

      • David Springer: “Willard is an obscure clown. No argument there.”

        Objection!

        That is very insulting to obscure clowns. They have feelings too, you know.

      • This is your study, this is your paper. You are supposed to be the expert. Otherwise you are doing something terribly wrong.

        I am an expert on rating stations. I am not an expert on homogenization.

        This is a complete reversal of what actually happened. It may be how you read it, but that is your inferiority complex and not what I wrote. I tried to explain that this PhD is not important to me, but that I needed to get one to become a scientist, the profession I love.

        You now say the same thing from the flip side. You felt you needed one to become a scientist, the profession you love. I think that may be more of a European standard, a stricter, a more formal-academic one, one that I have a hard time taking in.

        I live in a place with looser standards. I never felt the need for a Ph.D.; society never put that burden on me. In America, all I have to do is produce good work and get it published. So I just do that.

        How am I elitist when I ask you not to refer to me as Dr. Venema and to treat me like any other human being?

        I think you misunderstand me. You have not treated me as an elitist would at all. If anything, you possess great noblesse oblige. You are by no means an elitist, rather, you are of the actual elite. You have treated me with graciousness, politeness, without condescension, and have addressed a great many of the questions I asked.

  63. Re. 1st paragraph: Make that over 0.35C/d, not 0.15C/d.

  64. Kevin O’Neill asks why not compare to USCRN. Only the good stations are needed for that. According to Watts et al, the temperature of the good stations should be lower than that of the not-so-good stations in warmer years. Expand this to 2012 and check whether that is the case. If so: big trouble, because the result would be lower temperatures than USCRN as well:

    Besides, their graph looks like this one that shows the effect of CRS-MMTS bias:

    (from here: http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/ )

    The difference appears from the late ’80s into the late ’90s. Just like the Watts presentation.

    My guess is that the results from the Watts presentation have something to do with the application of their special kind of MMTS adjustment.

    • We would expect no significant net divergence between Class 1\2 and Class 3\4\5 from 2005. Bad microsite exaggerates a trend. If there is no trend, there should be no divergence.

      • “Exaggerates the trend” is no magic bullet, Evan. There should be an exaggeration of the trend in the worst sited stations from 2008 to 2012:

        They happen to match USCRN.

        We can also see that “the exaggeration of the trend” from the eighties to the nineties is in the wrong direction. The worst sited stations should become colder during that period. Your results show the opposite.

        How could that happen according to your heat sink hypothesis?

      • In the first example, four years is a meaninglessly short stretch. We use a 10-year subset interval (1999-2008). And that is stretching it.

        In the second set, that is adjusted data, so it is problematic. We look primarily at a 30-year period with a strong trend. Using raw data for the best stations with as little adjustment as possible. The difference appears large and statistically significant.

      • Evan:

        The length of the period is irrelevant because your proposed physical mechanism is presence of heat sinks that exaggerates warming or cooling. Trend is not physics. Trend is statistics.

        Inadvertently, you pinned the most likely mechanism for the divergence between best and worst sited stations in the second example: the adjustments during the MMTS transition. The origin of the trend difference for the different series is a lowering of temperatures for the best stations during the late eighties and nineties. A lowering that moves in the wrong direction for your proposed mechanism.

        Your graph looks too much like this to be a coincidence:

        http://rankexploits.com/musings/wp-content/uploads/2010/04/Picture-225.png

      • In an open environment, with Dr. Curry’s unknown unknowns affecting each station plus numerous other factors, a longer-term trend is required. Even our ten-year patch is cutting it thin for a gradual divergence.

        Inadvertently, you pinned the most likely mechanism for the divergence between best and worst sited stations in the second example: the adjustments during the MMTS transition.

        I see you noticed that. The fly in the ointment is that the Class 1\2 set is disproportionately made up of CRS units. MMTS conversion hit the 3\4\5s harder than it did the Class 1\2s.

        And the reason for the lack of divergence prior to that is that all of the stations were CRS and a Class 1\2 warms about as fast as an MMTS Class 3\4\5. Hopped up Tmin, but especially Tmax. And the reason for that is that CRS units carry their own personal heat sinks around with them.

        The divergence wasn’t the offset thing, really (assuming the calibrators did their job), it was the trend thing. And that explains why Menne does a centered 15-year pairwise to adjust for MMTS. If it weren’t a trend deal, a simple offset (if needed) would have made the patch quite neatly (relatively speaking).

        But (like what homog does to microsite) he’s making the wrong adjustment. He’s adjusting the wrong thing in the wrong direction. It’s not the MMTS units that are the problem. It is the CRS units that are the problem. Rather than jacking up the MMTS trends, he needs to be lowering the CRS trends.

        And that one rattles the chain clear back to 1880. Yes, I am talking “warming the past”.

        I am guessing the good old past-adjustment approach will be less enthusiastically pursued if they have to warm it than it was when they were cooling it. But that’s fine. Just gives me more elbow room. If they won’t do it, I aim to have a crack at it. Who knows, maybe I’ll prove myself wrong.

      • David Springer

        @ehak (Kyle Hilburn?)

        The heat sink hypothesis is obviously wrong. UHI is largely accomplished by land use change that results in less evaporative cooling. Paved areas replace grass and trees which work to pull subsurface moisture out of the ground and evaporate it through stomata during the gas exchange that occurs during photosynthesis. Urban environments collect stormwater in street gutters then shuttle it underground into drainage systems which empty into rivers. All that serves to drastically reduce the time and surface area available for evaporative cooling. A secondary means of UHI generation is waste heat emitted to the environment. Every automobile, every structure’s heating and cooling system, every light bulb and computer, and anything else that consumes energy generates waste heat that is vented into the local environment.

        Heat sinks will only serve to lessen the day/night temperature difference; they won’t change a trend.

        Despite Evan’s heat sink hypothesis being wrong, that does not change the observation that pristine, unperturbed Class 1 and 2 stations, requiring no adjustment other than a very minor MMTS adjustment where indicated in metadata, display a drastically lower warming trend than adjusted stations. The obvious, inescapable conclusion is that the adjustments overall add a false warming trend.

      • There you go Evan. The class 1/2 have more CRS… Then it is absolutely necessary to have a valid adjustment of the CRS -> MMTS stations. The reason for your divergence might very well be an inadequate adjustment. That might very well be why the trend difference originates in the period for that transition.

        Short version: You have probably applied a wrong MMTS adjustment.

      • evan

        “Bad microsite exaggerates a trend.”

        This is a testable hypothesis.

        When commenters showed it to be false, you objected that a 4-year trend was “too short”. That doesn’t wash. You now need to explain how this unicorn physical process only operates over long periods.

        Most likely explanations

        1. Site ratings are wrong or incomplete
        2. MMTS adjustment is wrong.

      • David Springer

        The warmunist thesis appears to be there is no such thing as an unperturbed station.

        My thesis is there is no such thing as an unperturbed warmunist.

        HAHAHAAHAHAH!!!!!!!!!!!!!!!!!111 I kill me sometimes!

      • Kyle Hilburn?

        From RSS? The circles in which we move without even knowing it! JC’s unknown unknowns abound.

        As for heat sink vs. UHI:

        I do not think it is encroachment of UHI mesosite for the following reasons.

        1.) Removal of urban data does not affect the aggregate trend of the remainder. (Tmean matches to 3 decimals.)

        2.) Poorly sited rural stations have a higher trend than well sited urban stations.

        3.) Stations with poor microsite warm at a faster rate than well sited stations, even if not encroached and having the same rating throughout the study period.

        Heat sinks will only serve to lessen the day/night temperature difference; they won’t change a trend.

        That’s what NOAA says. Our observations differ.

      • There you go Evan. The class 1/2 have more CRS… Then it is absolutely necessary to have a valid adjustment of the CRS -> MMTS stations.

        What it means is that Class 1\2 stations are affected less by MMTS conversion. Not more.

        Then it is absolutely necessary to have a valid adjustment of the CRS -> MMTS stations.

        Oh, yes. Instead of the MMTS-> CRS adjustments we have been seeing. The wrong set is adjusted — in the wrong direction. Where have we seen that before? (Yes, homogenization, I am pointing at you.)

        I think y’all are looking for your answers in the wrong places. Try a 180-degree turn.

      • Mosh, if I tried to assert an HSE effect on trend based on four years of data, they would laugh me out of the room. We are not talking about a closed, controlled environment here. And we are talking about a gradual effect.

        What if I tried to assert CO2 had no effect on warming because of the lack of warming during the 1950s?

        Regardless, either I am right or I am wrong. Or there is some other sort of systematic error here which, for an unknown reason, correlates with microsite quality. Or it is some sort of monstrous coincidence or artifact of confirmation bias.

      • I’ve brought up to Mosher before the idea of using only “good” stations for adjustments. He said at the time there is no way to tell, due to incomplete information: how do you select the “good” stations?

        I think you guys have done very close to the BEST job possible selecting the good stations. While there are still some unknowns, we do know there are problems with the others. I see no reason not to use this hard-won information.

      • And I see no reason not to query the extrapolation/interpolation that is required to get from point samples to the 3D Temperature Field that is then being discussed.

        Any farmer will tell you that some fields get frost early, Some never. Capturing that detail from a ‘remote’ point sample (no matter how good) is always going to be a guess. An educated guess maybe, but a guess none the less.

      • Even if one believes there could be a problem with the remaining 90+ stations, one could make a case for avoiding the others with known problems when searching for an adjustment.

      • The problem is not only with the long-term accuracy of any individual point sample, but with how truly representative that point sample is of the wider area/volume it is supposed to represent.

      • David Springer

        Can you describe the heat sink(s) in question and how they work to change a long term trend?

      • David Springer

        Hoar frost is more or less likely depending on how still the air is, the albedo, and the humidity. Temperature needn’t be different for it to form in one place and not another. You should do more reading and less writing, RichardLH.

        Regardless this still has nothing to do with gridded data sets.

      • David Springer

        A good station can become a bad station and vice versa several times in a single year by simply failing to cut back vegetation. Class 1 requires vegetation less than 4 inches and Class 2 less than 10 inches. The type and condition of the vegetation will also have an effect – lush green or dead brown. Snow cover or not will also have an effect. It’s really a hopeless task to get ground station consistency with the needed precision and accuracy for this task. I will never trust anything except satellites measuring temperature of a column in the troposphere isolated from ground level effects by altitude.

      • The energy ‘cliff’ that is freezing/melting has a LARGE effect on temperature. I would suspect that it should not be dismissed so easily.

      • David Springer

        Not 5 feet above the surface. I’ve seen hoar frost form when air temperature at 5 feet above the surface is 40F. It happens by radiative cooling of a solid surface with good clear sky exposure in dry air. The surface can cool radiatively far faster than heat from warmer air can replace it through conduction. The stiller the air the slower the energy can be replaced via conduction.

        Stick to software. You suck at basic physics.

      • “Regardless this still has nothing to do with gridded data sets.” So you say. Observation says otherwise. And, yes, I do read a LOT as well as write.

      • David Springer

        Read harder.

      • And I have seen quite large clouds appear and disappear faster than the satellites pass overhead. A point sampling instrument correctly placed could have captured that. A satellite may miss it.

      • David Springer

        Just stop. You’re babbling. A grid cell in USHCN is tens of thousands of square miles with typically a dozen or more surface stations within it. A single cloud won’t have any effect on it.

      • And I feel that you are ignoring a large energy path because it is not captured by the instruments you may or may not favour.

        Nyquist requires that any discrete sampling of an underlying Field must obey his rules.

        Both point and volume sampling methodologies are discrete sampling for this observation (because satellites move).

        How about the lower Nyquist limit on frequency analysis, so overlooked by those who draw straight ‘trend’ lines? How do you justify the implied infinite lower bandwidth required for straight lines? You do get that question, don’t you?

        Perhaps you need to think more and read/write less?

      • David Springer

        I learned what the Nyquist rate was in the 1960s. We are establishing decadal trends, not hourly. Duh. I repeat, you’re babbling. Stop.

      • Ok. Just stand back and think for a minute. Everything I have said is a cold hard logical description of what we are dealing with. Point and volume sampled data. Of a 3D Temperature Field. Or do you believe we are discussing something different?

        The example I gave is of where a satellite series would differ from a thermometer.

        The other example is where the same sort of thing can happen in reverse.

        You may wish to step over the inconvenient details. I happen to believe they are important.

      • HAHA… that’s not the only “gold standard” CRN site that has issues.

        I was saving that for my next surprise.

        Say it ain’t so, Mosh! The ten or so I surveyed are cleaner than a hound’s tooth. So Class 1 it hurts. I would hate to think that CRN is going to be anyone’s “next surprise”.

      • I suspect your comment about Nyquist is what most would say. Perhaps you ought to think more carefully about what I said and what it means. The sampling theorem is quite precise about what is and is not sensible; ignoring it is not one of the sensible options.

        It doesn’t matter at what resolution you use the results: days, months, years, decades. It does matter that you irrevocably distort what you present if you do not honour it.

      • I’ve brought up to Mosher before the idea of using only “good” stations for adjustments. He said at the time there is no way to tell, due to incomplete information: how do you select the “good” stations?

        We use NOAA/HOMR/NCDC[EI]/USHCN2 metadata and curator interviews (when we can get them). Neither is 100% perfect. But both are very good. USHCN2 metadata is near-complete for TOBS and moves (and I have spot-checked with some B91s).

        Going back almost ten years, the metadata was pretty spotty, even for the USHCN. But someone made a good hire, transfer, or decision, and it has been greatly improved, both in terms of present and of site history.

        Coordinates as of today are (usually) good. But they were pretty wretched, going back. I groaned every time I saw an xx.500 or xx.833 coordinate. No way to find the moved stations, so we drop them. (The other big set of dropped stations is the one with the vast bulk of the TOBS bias.)

        I also looked at the GE wayback machine to see if the microsite change (if any) was large enough to cause a rating change.

        It is not perfect. We do our best to ensure that the Leroy (2010) heat sink rating for unperturbed stations (both compliant and non-compliant) is consistent throughout the station’s record.

        We have reduced those objections to a real but nibbling-round-the-edges concern. Sort of what homogenization is intended to do (in a metaphysically reversed sense).
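        [For concreteness, a minimal sketch in Python of the kind of screening just described. The record fields and the two-value coordinate check are illustrative assumptions only; the actual vetting used NOAA/HOMR/NCDC metadata, B91 forms, Google Earth imagery, and curator interviews, none of which are modeled here.]

        # Toy station-screening sketch (hypothetical records, not real metadata).
        stations = [
            {"id": "A", "tobs_changed": False, "moved": False, "lat": 34.1232, "lon": -86.4884},
            {"id": "B", "tobs_changed": False, "moved": False, "lat": 30.5000, "lon": -87.8330},
        ]

        def coarse(coord):
            # Flag legacy coordinates rounded like the xx.500 / xx.833 examples
            # quoted above: too imprecise to confirm a station never moved.
            return f"{abs(coord) % 1:.3f}" in ("0.500", "0.833")

        unperturbed = [s for s in stations
                       if not s["tobs_changed"] and not s["moved"]
                       and not (coarse(s["lat"]) and coarse(s["lon"]))]
        print([s["id"] for s in unperturbed])  # -> ['A']; station B is dropped as unlocatable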

        Thing is, we have the Plush-East-Side luxury of the USHCN. GHCN, however, requires homogenization, inference of missing metadata, and probably can’t get away without some sort of infill. That’s a lot of inferring. And I agree it is necessary.

        But it is important to account for microsite. (Or whatever it is we have turned up in the data.) Even if it has to be inferred, based on “similar”, “nearby”, or whatever, examples, or maybe even from a trend fingerprint only.

        That makes the job harder. I appreciate that. I also appreciate the reluctance to upset the applecart and apply all sorts of yet further inference onto the pile of existing inferences and reconciling the mess. After all, even if the sites are all spotted and snapped, that doesn’t help us when it comes to missing data and (especially) metadata. But that is going to be what it takes.

        I think you guys have done very close to the BEST job possible selecting the good stations. While there are still some unknowns, we do know there are problems with the others. I see no reason not to use this hard-won information.

        We like to think we have narrowed the field of uncertainty. And don’t forget: Both BEST and GHCN can account for this in their own way (whether they do it right or not being another question). Homogenization and splitting are potentially powerful and useful tools. Possibly the only way to (sort of) redeem the GHCN. But if you juggle ’em wrong, you’ll get cut.

      • “Coordinates as of today are (usually) good. But they were pretty wretched, going back. I groaned every time I saw an xx.500 or xx.833 coordinate. No way to find the moved stations, so we drop them. (The other big set of dropped stations is the one with the vast bulk of the TOBS bias.)”

        And there comes one of the disconnects I see. The only reason to discard data is if you want to compare/contrast it with another/other similar points.

        If you only compare it to itself (as a reference) you can get anomaly data quite easily.

        I believe that all point sampled data should only be compared to themselves, not a wider field. No interpolation/extrapolation to infill what we do not know but can only guess/approximate.

        Point sampling of an underlying Field is the equivalent of a sub-sampled, piece-wise integral of the actual ‘curve’ between those points. It is always going to be a source of uncertainty.

        If we were to discard that step and only create anomaly data from each point individually then any ‘trends’ present will be unchanged. Just on slightly differing baselines. Baselines that are more accurate than any sub-division of the same sample window. Sure they move slightly as new data is added, but that in itself tells us something.

      • And there comes one of the disconnects I see. The only reason to discard data is if you want to compare/contrast it with another/other similar points.

        I need to clarify. I said “wretched”, but I meant imprecise. A station would be listed at, say, xx.000, yy.500. Then ten years later it would be xx.1232 and yy.4884. The previous coordinates are not wrong, but so imprecise as to make it difficult to locate a station prior to GPS.

        I am not talking error here, I am talking precision only.

        If you only compare it to itself (as a reference) you can get anomaly data quite easily.

        Utterly necessary. Otherwise station and data dropout bite you in the ass. Heck, that be whatfor anomaly in the first place. And that’s how we do it.

        This only becomes an issue with missing data from the start of the series.

        I believe that all point sampled data should only be compared to themselves, not a wider field.

        You are singing my simplistic song. Yes, let the numbers flow free, unshackled from arbitrary-yet-insidiously-malleable chains.

        If you are going to put in a severely truncated record, though, you will have to baseline it at the join-point, for continuance, so as not to use too small an anomaly spread. I am working on that for followup. Should bring a whole lot more partial records into the stationset. Useful for us. (More useful for the GHCN.)

        No interpolation/extrapolation to infill what we do not know but can only guess/approximate.

        That’s how I do it. If you do it by 30-year month anomaly, averaging those trends for a 30-year yearly trend, you avoid the well-known annual distortion issue neatly. I even weight for the varying number of days per month (that may be an indulgence in false precision, but it can’t hurt).
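        [A minimal sketch of that bookkeeping in Python, with a made-up data layout; only the procedure follows the description above: one anomaly series per calendar month, one slope per month, then a day-weighted average of the twelve slopes.]

        import numpy as np

        def station_trend_c_per_decade(temps, years):
            # temps: (n_years, 12) array of monthly means, deg C; a complete record is assumed.
            days = np.array([31, 28.25, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
            slopes = []
            for m in range(12):
                anom = temps[:, m] - temps[:, m].mean()       # anomaly vs. this month's own baseline
                slopes.append(np.polyfit(years, anom, 1)[0])  # deg C per year, for this calendar month
            return 10.0 * np.average(slopes, weights=days)    # day-weighted average, per decade

        # e.g. for a 1979-2008 series: station_trend_c_per_decade(temps, np.arange(1979, 2009))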

        But J N-G also put together a lovely, self-checking infill thing that matched our results well. Maybe it’ll be archived, maybe we’ll save it and he’ll improve it for followup.

        Point sampling of a underlying Field is the equivalent of a sub-sampled, piece-wise integral of the actual ‘curve’ between those points. Always going to be a source of uncertainty.

        If we were to discard that step and only create anomaly data from each point individually then any ‘trends’ present will be unchanged. Just on slightly differing baselines.

        Right you are. And who cares? Baselines are only for when you need them. To wit, our graph, which I baselined to 1979 as startpoint. That makes the graph trendline with a >. With our all-natural, genuine green anomalies, trendline more like an X, as one would expect.

        Baselines that are more accurate than any sub-division of the same sample window. Sure they move slightly as new data is added, but that in itself tells us something.

        Yes, yes, and yes.

      • P.S., Good thinking, old son. We’ll make a wargame designer of you, yet.

      • “I’ve brought up to Mosher before the idea of using only ‘good’ stations for adjustments. He said at the time there is no way to tell, due to incomplete information: how do you select the ‘good’ stations?”

        This is especially true since the LeRoy criteria have NEVER BEEN TESTED IN A SYSTEMATIC FASHION.

        Pointing at an “authority” that has never been tested doesn’t really help your cause.

        The bottom line is you get the same answer whatever subset you use.

        There is one exception: let anthony and evan pick and choose… then and only then will you get a slightly different answer… for a small part of the world.

      • Thanks. It is just a simple observation at heart.

        Point sampled data needs to honour Nyquist in its method of usage (as do the satellites, but that is a different question).

        As I have pointed out elsewhere, Nyquist has a lower as well as an upper limit. You cannot draw straight ‘trend’ lines through the data without having absolute knowledge of what happened before the data collection started. (i.e. an infinite lower frequency bandwidth)

        Nyquist says you do not have that knowledge, so a straight line is not possible (with any scientific precision). At best (pun?) it will be accurately bounded by a single sine wave across the whole series. Beyond that point (and especially in the presence of noise), any phase information becomes more and more lost. Well out beyond it, magnitudes are uncertain as well.

        Is the apparent rise in the temps in UAH down to observing a half wave of some underlying temperature movements/trends/cycles/waves? The data sets available are still too short for an accurate answer.

      • “The bottom line is you get the same answer whatever subset you use.”

        Only if your subsequent treatment of the data allows the conclusions you then draw.

      • “P.S., Good thinking, old son. We’ll make a wargame designer of you, yet.”

        Assumes I have not done that already? Game design, anyway (and it was set in a war context).

      • David Springer

        RichardLH babbles on without missing a beat. I’m reminded of M-y-r-r-h whose name is evidently unspeakable to this day.

      • And you consistently fail to address anything about the scientific points I raise and babble on regardless.

        Assumes I have not done that already? Game design, anyway (and it was set in a war context).

        No surprise here. All the earmarks.

        The others have a handle on what the handles are made of (which is indispensable), but they often don’t get how to use the handles they have. Cf. homogenization; cf. MMTS v. CRS.

        Two things struck me when Doc. Menne announced his new MMTS-adjustment baby.

        First, he was doing a 15-year pairwise, 7 years in each direction. For an offset issue. An offset issue? Say, what?

        Second, he remarked that this step might even be unnecessary, as homogenization could be made to handle it anyway. Sorta brings it all together, don’t it? Kindly Uncle H. Cures all ills of man or beast. (Sometimes.)

        So why is he screwing around with trend, anyway? Well, I’m thinking it’s because he’s noticed that CRS and MMTS trends diverge, quite apart from the offsets noted in Menne (2009). So he’s “fixing” the MMTS units accordingly. Bottom line: CRS trend will be bumped down a little. MMTS will be bumped way, way up.

        The problem arises that, whatever the picky vicissitudes of the MMTS sensor, it is the CRS trends, especially Tmax, that are the wonk-donk here. But instead of taking a deep breath and adjusting CRS to MMTS (thus taking on the whole damn crowd going back to 1880 along with it), he takes the easy-cheesy way out by making it so MMTS doesn’t rattle the HCN applecart.

        So an important adjustment is applied. To the wrong object and in the wrong direction. Same goes for homogenization.

        The corrections do need to be made. But roughly the right amount in the right direction would be nice.

        P.S., Until grad school, I got away with designing a game on every history course I took. I even earned a living at it for a while. But this game is better. At least there is someone else here who speaks my language. These guys are making mistakes we’ve made before. We are on our guard for them. Them, not so much.

      • This is especially true since the LeRoy criteria have NEVER BEEN TESTED IN A SYSTEMATIC FASHION.

        Even if it had been, the results would have only been in offset. Leroy never said thing one about trend, did he? And as for trend, Mosh, we are the systematic test. We have systematically tested it.

        Pointing at an “authority” that has never been tested doesnt really help your cause.

        It can if your cause is just, right, and proper. (And correct.)

        The bottom line is you get the same answer whatever subset you use.

        And I can’t tell you how heartily we endorse those findings.

        We see in every case the well sited stations with lower trends than the poorly sited stations.

        We see in every case the well sited trends adjusted upwards to near-match (even exceed) the poorly sited trends.

        And yes, the adjusted data always resembles that of the badly sited stations. Oh, yes, all classes are the same. Problem “solved”.

        There is one exception: let anthony and evan pick and choose.. then and only then will you get a slighly different answer… for a small part of the world..

        We pick bad microsite as the experimental binning. We choose the data-, metadata-rich USHCN.

        for a small part of the world..

        What’s sauce for the USHCN is sauce for the GHCN. Probably thicker.

      • RichardLH.

        you fundamentally misunderstand the calculation of the temperature field, and misunderstand the purpose.

        The goal is simple: Predict the values where you DON’T have values.

        THAT is the entire problem in a nutshell.

        Let’s take a simple example: you measure the air temperature 2 meters off the ground in Florida. It’s 86F. You measure the air temperature 2 meters off the ground in Nome, Alaska. It’s 5F.

        Problem: provide a prediction for EVERY OTHER PLACE ON LAND.

        That is the goal. To provide a prediction for those locations where you DON’T have measurements. The best prediction wins. To WIN you must predict EVERY location over land. Not some. Every location.

        Nyquist doesn’t apply. IF you were trying to reconstruct a continuous signal from discrete samples, then it would apply. BUT we are not trying to do that. We are trying to minimize the error of prediction, GIVEN that we are not able to, nor need we, reconstruct the signal. What we want is the trend in the signal, and even there we don’t need the exact trend.

        “Predict the values where you DON’T have values.”

        Ah, the penalties of language. I would say “Estimate the values where you DON’T have values.”

        That is the problem. It IS an estimation. With error bands and uncertainties all around. In 3D. From point sampled data.

        I rather do understand what is done and why it is done. I challenge the certainty that is placed on it.

      • “Nyquist doesn’t apply”.

        Those words may come home to haunt you.

      • “Problem: provide a prediction for EVERY OTHER PLACE ON LAND.”

        Why do that step? We do not KNOW the other values. We can estimate them only. The need to perform this step is purely to then be able to do a comparison to other data. Without that, simple ‘reference to self’ will suffice to produce anomaly data.

      • “These guys are making mistakes we’ve made before. We are on our guard for them. Them, not so much.”

        Ah, the arrogance that assumes other scientific fields do not provide insights into what you are doing.

        Like Nyquist. If ever there was a clear-cut example, it is ignoring how point sampled and volume sampled data of the same Field will never produce identical results. Close, but never the same (assuming you get the maths right, anyway).

        And straight line ‘trends’ break all his rules. To suggest somehow that sampling a BIG system can somehow step round the known problems with small systems is staggering.

      • A good station can become a bad station and vice versa several times in a single year by simply failing to cut back vegetation. Class 1 requires vegetation less than 4 inches and Class 2 less than 10 inches. The type and condition of the vegetation will also have an effect – lush green or dead brown.

        I don’t think vegetation length is a major issue. We didn’t notice much discrepancy when it was measurable. A bit of shade, a bit of heat sink, but nothing like a nice wide paved driveway. And we couldn’t measure it under most (current) circumstances, anyway.

        Yes, ground color will affect things, too.

        Worth further study, but it’d be difficult.

        And I think this is chipping around the edges, not a fundamental, systematic disqualifier. If it were, our results would be off the beam from the sats, and they are well within it.

        Snow cover or not will also have an effect.

        Especially in areas where snow can bury the sensor, as in arctic regions.

        You didn’t mention shade, but that issue is conflated: usually what causes the shade is the heat sink itself.

        It’s really a hopeless task to get ground station consistency with the needed precision and accuracy for this task. I will never trust anything except satellites measuring temperature of a column in the troposphere isolated from ground level effects by altitude.

        It’s not as easy. But look at the bottom line. And remember, we have a top satellite guy on the team.

        According to Klotzbach et al. and to Christy, during an overall warm trend, the surface trend should exceed the satellite trend by ~10% to 40%, depending on latitude.

        Our class 1\2 unperturbed set clocks in at Tmean ~10% under RSS2 and UAH6. The sum checks.

      • Can you describe the heat sink(s) in question

        Certainly. Paved surfaces, structures, and bodies of water. Active parking lots (whether paved or not).

        and how they work to change a long term trend?

        When the temperature is lower, they create a smaller offset. By the end of the series, when the temperature is higher, and shows a higher trend, the offset is larger. The difference between the two creates the effect on the trend.

        For a crude, analog example, see:

        https://judithcurry.com/2015/12/17/watts-et-al-temperature-station-siting-matters/#comment-753903

      • RichardLH, “Why do that step? We do not KNOW the other values. We can estimate them only.”

        Because that is the problem he is working on. He feels that a “global” mean temperature is meaningful. What is meaningful for thermodynamics is the change in energy in the system and changes in energy flows inside the system. If you have a longer melt period in a region, that is meaningful thermodynamically. Having some indication that the temperature at some altitude changed from -79C to -75C is pretty much meaningless, except for his problem. He has to know it for whatever reason he thinks he has to know it.

        There are also people that feel compelled to polish ostrich turds. Kind of neat when they are done, but not my cup of tea.

      • Capt.,

        I’ll be calling you. We need to fish together for a couple of days.

        “Why do that step? We do not KNOW the other values. We can estimate them only. The need to perform this step is purely to then be able to do a comparison to other data. Without that, simple ‘reference to self’ will suffice to produce anomaly data.”

        No. You take the estimates of the sampled data to predict or give estimates for the unsampled data.

        The goal is not comparison with “other data”.

        The goal is prediction.

        We don’t work in anomalies, so lose that idea as well.

      • “That is the problem. It IS an estimation. With error bands and uncertainties all around. In 3D. From point sampled data.

        I rather do understand what is done and why it is done. I challenge the certainty that is placed on it.”

        ##############

        it is not in 3D.

        simple example.

        I measure the temp here at my house: 54F, 2 meters off the ground.
        I measure the temp at your house: 57F, 2 meters off the ground.
        I measure the temp at a third point: 60F, 2 meters off the ground.

        The task is to predict the temp at all x,y locations within that triangle, AT 2 meters.

        That will have uncertainty. You calculate that as well.

        Then you can test. go ahead.
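        [One toy way to run exactly this exercise in Python: fit the unique plane through the three samples and read off a prediction anywhere inside the triangle. The coordinates are invented; real gridded products use far more elaborate spatial statistics, so treat this as a sketch of the problem, not of anyone’s actual method.]

        import numpy as np

        # Hypothetical (x_km, y_km, temp_F at 2 m) for the three points above.
        pts = np.array([[0.0,  0.0, 54.0],
                        [10.0, 0.0, 57.0],
                        [5.0,  8.0, 60.0]])

        # Solve for the plane T = a + b*x + c*y through the three samples.
        A = np.column_stack([np.ones(3), pts[:, 0], pts[:, 1]])
        a, b, c = np.linalg.solve(A, pts[:, 2])

        def predict(x, y):
            # Predicted 2 m temperature at (x, y); only meaningful inside the triangle,
            # and it carries uncertainty that a real product would also have to estimate.
            return a + b * x + c * y

        print(round(predict(5.0, 2.7), 1))  # a point in the interior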

      • “Nyquist doesn’t apply”

        Those words, as uttered by a respected Climate Scientist, are indicative of a staggering lack of understanding of what is being done to his data, and that applies not only to his work but apparently to the whole field.

        Nyquist applies to every picture you take, every chart you draw, every calculation you make, every machine you build.

        To say it doesn’t denies science.

        GIGO is not just a phrase, it is a real and living danger in all we do.

        Each pixel in a photograph, each point you place on a chart, etc., have Nyquist at their core. It displays ignorance, not intelligence, to claim that his work is irrelevant.

      • “it is not in 3D.”

        OK, so I live on a 2D piece of paper apparently.

        Of course it is 3D. The world we live in and measure is 3D. Stop now whilst you’re still ahead.

      • “2 meters off the ground” Re-read your own words. That is a 3D statement.

      • evanmjones: “In the first example, four years is a meaninglessly short stretch. We use a 10-year subset interval (1999-2008). And that is stretching it.”

        Steven Mosher: “When commenters showed it to be false, you objected that a 4-year trend was ‘too short’. That doesn’t wash. You now need to explain how this unicorn physical process only operates over long periods.”

        There is not only no “trend exaggeration” on the scale of 4 years, there is also none in the year to year variability or in the seasonal cycle or in the daily cycle.

        There is also no “trend exaggeration” on the largest scale studied, the 30-year scale. Otherwise the difference between the “compliant” and “non-compliant” stations would still be growing. But the difference has not grown since 1996. There is a trend on the 30-year scale; why is this trend not “exaggerated”, Mr. Jones?

        There is only “trend exaggeration” for the two periods Evan Jones sees as special. We really need some real physical hypothesis. That would allow one to ask the data specific questions, to test this hypothesis.

        My hypothesis is well explained by the example of pete:

        Someone has a weather station in a parking lot. Noticing their error, they move the station to a field, creating a great big cooling-bias inhomogeneity. Watts comes along, and seeing the station correctly set up says: this station is sited correctly, and therefore the raw data will provide a reliable trend estimate.

      • That is just confusing things.

        Of course station moves are required to be separate series (a la BEST).

        I am not sure that Watts et al says anything different.

      • Extraordinary that quite learned people talk of max temps as being interchangeable with temps, ignoring how-hot-when, how-hot-for-how-long, and how-hot-why – all of which you would think would be the main subjects of eager scientific inquiry, however scant the data.

        But no, just one simplistic number to crunch for the day and the station will do them. (It’s called “best available knowledge”, which sounds lots better than “superficial stat”.) And if cloud came across and made everything much, much cooler? They still just add that one superficial number to lots of other superficial numbers, which will eventually make their way into all sorts of interesting scientific “products”. By the time our clouded-down temp reading is a tiny, indistinguishable blip on a sciency-looking graph, who’s gonna care?

        Like Bismarck said of laws and sausages, it’s better if you take the result and don’t look at the process.

        – ATTC

      • David Springer

        Mosher is right in that the Nyquist rate doesn’t apply. The claimant evidently doesn’t understand it. The Nyquist rate is an analog-to-digital signal conversion rule. In order to faithfully detect the frequency of an analog signal, it must be sampled at a frequency at least twice the rate of the input signal.

        So if we have a sine wave of frequency X, then in order to digitally determine X we need to sample the signal at at least 2X.

        So the question for babble-boy with the Nyquist tic is: What frequency signal are we trying to faithfully detect in a time-temperature series?

        Stand by for some laugh medicine if he answers.

      • OK, laugh at this.

        The local thermal response to the solar input signal is first sampled as min/max over a day. That is the input frequency, modulated by orbital factors to provide the annual local cycle.

        So already Nyquist is involved. That min/max thing is bounded by his rules.

        Also any individual point is a volumetric sub-sampling of the underlying Temperature Field.

        Still laughing?

      • Nyquist also tells us that sampling hourly will get more accurate results than a simple tMin/tMax, but we do not have that resolution in most temperature series.

      • The ease with which people end up using language in a way that hides clear thinking is always a mystery.

        Also, scientific discussions rarely involve trying to poke fun at people without at least first dispassionately considering what is being proposed.

      • “The Nyquist rate is an analog to digital signal conversion rule.”

        Actually Nyquist is about the digitisation of an underlying signal, not the digitalisation.

        Applies to paper records as well as machine derivations.

      • Laugh away. We will see who laughs longest.


        “Nyquist doesn’t apply” !!!! ????

        Those words, as uttered by a respected Climate Scientist, are indicative of a staggering lack of understanding of what is being done to his data, and that applies not only to his work but apparently to the whole field.

        Nyquist Sampling Theorem applies to every picture you take, every chart you draw, every calculation you make, every machine you build.

        To say it doesn’t denies science.

        The local thermal response to the solar input signal is first sampled as tMin/tMax over a day. That is the input frequency, modulated by orbital factors to provide the annual local cycle.

        Nyquist also tells us that sampling hourly will get more accurate results than a simple tMin/tMax, but we do not have that resolution in most temperature series.

        Nyquist is about the digitisation of an underlying signal, not the digitALisation. Applies to paper records as well as machine derivations.

        We are trying to assess the local power transfer curve and its related usage to later compare to abstract, computer based, models of the same thing.

        GIGO is not just a phrase, it is a real and living danger in all we do.

        Each pixel in a photograph, each point you place on a chart, etc., have Nyquist at their core. It displays ignorance, not intelligence, to claim that his work is irrelevant.

        It also immediately labels all work that has that phrase attached as having GIGO all over it.

        For those who want the academic view of Nyquist, https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem will provide some clues.

        “A sufficient sample-rate is therefore 2B samples/second, or anything larger. Equivalently, for a given sample rate fs, perfect reconstruction is guaranteed possible for a bandlimit B < fs/2.

        When the bandlimit is too high (or there is no bandlimit), the reconstruction exhibits imperfections known as aliasing. Modern statements of the theorem are sometimes careful to explicitly state that x(t) must contain no sinusoidal component at exactly frequency B, or that B must be strictly less than ½ the sample rate. The two thresholds, 2B and fs/2 are respectively called the Nyquist rate and Nyquist frequency. And respectively, they are attributes of x(t) and of the sampling equipment. The condition described by these inequalities is called the Nyquist criterion, or sometimes the Raabe condition. The theorem is also applicable to functions of other domains, such as space, in the case of a digitized image. The only change, in the case of other domains, is the units of measure applied to t, fs, and B.”

        Notice space tucked in there? That means horizontal separation between point samples in Nyquist terminology.

        And for the sake of this discussion a temperature map, however derived, is a ‘digital image’.

        OK. So we are not going to proceed further in our thinking until we create an abstract experiment that will show Nyquist is present everywhere. This is abstract, not real, so please no distractions.

        We are tasked with designing an experiment to prove the validity and accuracy of the work being done at a local site. Consider this an external, quality-control review step to determine how best to spend our money.

        There are 3 simple statements we are asked to consider.

        1. Moving from tMin and tMax to an hourly sampled instrument will improve quality of the data. Yes or No.

        2. Adding in extra instruments at 2m height (say 10 times the number we have now) across the sample area will improve the quality of the data. Yes or No.

        3. Adding in extra instruments above and below the plane of the existing one(s) will improve the quality of the data. Yes or No.

        Obviously we now see how Nyquist applies.

        1. Is a statement of Nyquist in time.
        2. Is a statement of Nyquist in the horizontal plane.
        3. Is a statement of Nyquist in the vertical plane.
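        [A throwaway numerical illustration of statement 1, with entirely synthetic numbers: a made-up 2.8 cycle/day component sampled twice a day (min/max-like) shows up at a false 0.8 cycles/day, while hourly sampling recovers it. This is just the textbook aliasing effect in Python, not a claim about any real station.]

        import numpy as np

        def apparent_peak(t, freq=2.8):
            # Sample a pure sine of `freq` cycles/day at times t (in days) and
            # return the dominant frequency an FFT sees in those samples.
            x = np.sin(2 * np.pi * freq * t)
            f = np.fft.rfftfreq(len(t), d=t[1] - t[0])
            return f[np.argmax(np.abs(np.fft.rfft(x)))]

        twice_daily = np.arange(0, 30, 0.5)     # 2 samples/day: Nyquist frequency 1 cycle/day
        hourly      = np.arange(0, 30, 1 / 24)  # 24 samples/day: Nyquist frequency 12 cycles/day

        print(apparent_peak(twice_daily))  # 0.8 -- aliased: 2.8 cycles/day is indistinguishable from 0.8 at this rate
        print(apparent_peak(hourly))       # 2.8 -- correctly resolved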

      • Short version: You have probably applied a wrong MMTS adjustment.

        You bet I did. I adjusted the MMTS trend upward, esp. Tmax. And that’s actually wrong. So, yes, I applied the wrong MMTS adjustment.

        By all rights, what I should have done instead was adjust the CRS trends downward, esp. Tmax.

  65. I don’t understand all this “secrecy” and withholding data “until publication”.

    What are you after? A career in academia?
    Screw the journals, they are turning more and more irrelevant.
    You’ve got all the fame and acclaim you can hope for.

    Publish your complete paper and data on this blog. You’ll get crowd-review. You’ll get credit, you’ll get fame (if you are so starved for it).

    Forget the journals. They are irrelevant. A silly pretext for withholding the data.

    • I have had plenty of fun already. And those decisions are not up to me, they are up to the others. And the peer review literature is kind of a basic hurdle. Hardly anyone will accept a non-reviewed study.

      • evan

        That is what sceptics find hard to understand. There is a fundamental difference between a blog post and a peer reviewed science paper.

        Whilst blog posts can be extremely useful in gathering comments, corrections and ideas, few people who matter in science will take much notice unless it appears in a peer reviewed journal.

        Like it or not, that is the situation.

        tonyb

      • What tonyb said.

        Besides, what’s so horrible about some ‘skins looking a thing over to see if it contains glaring errors? They let you fix them, if you can. If peer review is good quality, then a paper is often strengthened and improved.

        Anyway, it’s amazing how much stuff I learned and perspective I gained while cramming for a “useless” test. I got no bones about the process. Independent review (and staying power) is the final arbiter, yes. But proper peer review is a good prep, sort of like a BA on the way to a Ph.D. (which I don’t have).

        If you play in the circus, don’t be complaining because you have to jump through a few hoops. (And don’t expect us not to prepare for the jump in our own way, as best as we know how.)

      • “few people who matter in science will take much notice …”
        What matters in science is the truth, not “people”.
        If you have something important to say, it will be heard, no matter what “people who matter in science” approve.

    • “Screw the journals, they are turning more and more irrelevant.
      You’ve got all the fame and acclaim you can hope for”

      Yup.

      If Evan has an analysis that holds up, the journals don’t matter.

      Sure, in the short term journals matter for POLITICS, but for the pure truth of the matter journals don’t matter.

      Look, Evan has a hypothesis about these stations… 10 years from now, when the globe is .15C warmer, his result will be undeniable!!

      And then no one will care where it was published… it will in fact be a further indictment of “journal science”

      If it’s true, no one will care where it is published

    • I don’t understand all this “secrecy” and withholding data “until publication”.

      You would if you had it usurped. That was data I had a personal hand in. Therefore you must perish until we publish. That’s that.

  66. “few people who matter in science will take much notice …”
    What matters in science is the truth, not “people”.

    Yeah, sure. But to get at the truth it sure as heck helps to interact with other people. (Thanks again, VeeV.)

    • evan

      most scientists are perfectly happy to interact with well-thought-out and interesting hypotheses that are contrary to the beliefs they currently hold.

      Naturally, they don’t like it when they are insulted, ridiculed or told they are a part of some giant hoax and are then unlikely to take such a person seriously.

      Testing an idea comes from the peer review process, and if we ignore that we can’t, as mosh points out, eat at the top table.

      tonyb

    • Agree.

      Besides, I ain’t afraid of it. Not after all the issues addressed since 2012 and especially not when “covered” by goode olde indispensable J N-G. #;^)

    • “But to get at the truth it sure as heck helps to interact with other people.”

      Of course – interacting with other people helps – but publishing on the net (with the data) you get all the interaction you could dream of. No corner remains unturned.
      The question is: why not make the data available now? Why withhold it for a few months, waiting for an interaction, through the journal, with one or two reviewers? Publish it now, and you get all the reviewers in the world to interact with.
      Insisting on “journals first” looks like an anachronistic quest for prestige.

      • What a lack of sensitivity, jacobress. They have spent years on this. They deserve to publish in a real, peer-reviewed journal; then everyone else can have a shot. As long as they make available all code and data when the paper is published, they will be golden.

      • The problem is global warmers (eco-terrorists) have set the science bar at “publish and peer review”, then loaded the grant and peer review boards to block funding and publishing of non-warmer-friendly literature.

        Since the funding hasn’t blocked Mr. Watts, now all he has to do is get it published for his paper to be “science” by global warmer (eco-terrorist) standards.

  67. get it published for his paper to be “science” by global warmer (eco-terrorist) standards.

    Isn’t that the standard in all areas of science, not just climate science? No one is stopping anyone from publishing their findings and data on a blog. But to be seriously considered by other scientists it needs to be peer reviewed and published in a journal. For one thing this makes it easier for scientists to keep track of new research, but it also can weed out papers that are not new and/or have no scientific value due to a flawed design or conclusions. If peer review were not a criterion used by scientists to filter the research, they would be forced to search the entire internet to find every blog or web site that might have published a study in the relevant area, even if it might have no merit. I don’t think that’s really practical. But if someone wants to publish on a blog, then go ahead, but don’t expect anyone to necessarily take the study or findings seriously.

    • Agree with much of what you say.

      Isn’t that the standard in all areas of science, not just climate science?

      Yes. And releasing data upon publication is also the standard in all areas of science.

    • I bet it’s got to do with the geometry and coating of the CRS vs. MMTS. White painted surfaces, especially wood which has a rough surface, tend to darken as they age from accumulation of black carbon.

      Yes, and more. You are on the right track. That was Anthony’s inspiration, even before microsite: he discovered the bad microsite while investigating the paint issue. I agree that is part of it and contributes to it.

      But I think it is a part of a deeper story, one related to heat sink effect. The box itself is a heat sink, faded or not. A CRS is a slatted wood box all around the sensors, which are attached to the back. Wood gets hotter than concrete, but cools faster. Concrete sheds its heat slower than wood. Therefore it affects Tmax, both offset and trend.

      Why does it magnify trend? Look at this inadequate, loose, otherwise lousy, offscale, but descriptive analogy that applies to microsite as well as equipment conversion:

      If you close up your car on a 50-degree day, it will heat some. Maybe five degrees, to 55F. But if it’s a 70-degree day, it is 90F in your car within a half hour. So the offset effect at the lower end (+5) is less than it is at the high end (+20).

      Let’s say you inferred the temperature outside while only measuring the temperature inside the car. You would get a 15F bump and a heap big effect on trend.

      And, not too terribly unlike in our car (except in all sorts of important details), on a far more diffuse scale, the heat sink offset in 1979 is less than the heat sink offset is 29 to 30 years down the road. It is only a ~0.3C, 30-year before/after bump, or ~0.1C/decade, but that’s a third or even more of LST warming.

      So that is why I think microsite increases trend.
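
      A minimal numerical sketch of the analogy (in Python, with the made-up car numbers above; an illustration of the argument, not anything from the study): if a heat-sink offset grows with ambient temperature, the measured trend comes out steeper than the true trend.

      ```python
      # Hypothetical offset that grows with ambient temperature:
      # +5F at 50F ambient, +20F at 70F ambient (the car-analogy values).
      def measured(t_ambient, gain=0.75, base=-32.5):
          offset = gain * t_ambient + base
          return t_ambient + offset

      for t in (50, 70):
          print(t, "->", measured(t))   # 50 -> 55.0, 70 -> 90.0

      # True change: 70 - 50 = 20F. Measured change: 90 - 55 = 35F.
      # A temperature-dependent offset inflates the apparent trend by 75%.
      ```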

      Thinking about it, I think Haddy2 may have been telling the story better than v4 or even v3: Less warming until 1950, when CO2 took off, and then a steady increase in trend (providing always you detrend for PDO and cousins).

      This is a case where the shorter runs get weak PDQ. A gradual effect can easily be temporarily offset for any number of reasons. We have a 30-year, 20-year, and 10-year trend, and while we think the 20- and 10-year trends are in the right direction and support the hypothesis, they are more prone to uncertainty.

    • Joseph,

      You wrote –

      “But to be seriously considered by other scientists it needs to peer reviewed and published in a journal.”

      Completely irrelevant. Warmist deny, divert, obscure tactics.

      Science is based on fact. Real science, that is, not the climatological version, which depends on fantastic assertion, rather than rational thought, and experiment.

      Who cares if other scientists don’t believe in tectonic plate movement? Who cares if they all believe in the luminiferous ether? It took more than a little while for scientists to agree with Einstein’s relativistic work. On the other hand, Einstein disagreed with the majority of scientists who accepted quantum theory.

      Facts and experimentation, rather than opinion, ultimately prevail.

      Warmism? Bah, humbug! The gullible being led by the foolish or fraudulent. Which one are you?

      Cheers.

  68. “but don’t expect anyone to necessarily take the study or findings seriously.”

    You live in yesterday’s culture.
    Watts and his blog is known by all. Nothing published there goes unnoticed.
    Watts’ problem is not the fear that he will remain unnoticed, but that somebody will steal his credit. Seems like publishing in a journal is his most cherished life desire. I don’t understand this.
    Not that I oppose his publishing in journals. It’s his privilege. But once he releases a press release trumpeting his paper, he should publish the whole paper and data online.
    Or, he could have kept quiet and issued his press release only after the paper has been finished (after review) and accepted.

    • He reported his findings at the AGU. And? The post wasn’t even a sticky.

      In any case, you do it your way, and more power to you. We’ll do it ours.

    • You live in yesterday’s culture.

      I can’t help it. I am a sporadic victim of conditioning.

  69. David Springer

    I thought you needed company. It appeared you were the only assh0le in the thread. Merry Christmas dummy.

  70. It will certainly be interesting to see how the various groups producing global surface temperature analyses respond to the study.

    Stipulating that they respond at all. (Which would also be interesting.)

    The least likely response, I think, will be an attempt to ferret out (or disconfirm) this systematic error, regardless of whether it is HSE or something else that just happens to correlate.

    My money is on a brief comment or two, amounting to a pooh-pooh dismissal. Maybe a quickly dashed off paper from NOAA that fails to come to grips. I.e., the past will repeat.

    We’ll find out soon enough, anyway.

  71. We (skeptics) are often accusing climate scientists of using their science as propaganda. We should lean backward hard, trying to avoid even giving the impression of doing the same.
    It is in this aspect that I criticize Watts. He hasn’t leaned back enough.

    • Under the circumstances we have no choice. And I may not be as into the political stuff as Anthony (and most others), but his “aspect”, whatever it is or isn’t, has provided the most widely known platform on the subject.

      Sometimes a seeming disadvantage is merely the downside of an approach that yields huge net advantages. I’ll take the bad with the good. I am not inclined to argue with success.

  72. I am so glad that only data from ‘good’ sites contribute to a real conceptual global temperature. Mother nature was kind. It would be terribly complicated if temperatures from all ‘not a good sites’ actually contributed to ‘real’ global averages. How did we get so lucky? /sarc off :O)

  73. David Springer

    TMEAN for CRS-ONLY stands out like a sore thumb in the unperturbed Class 1/2 list. It is far higher than any others in the RAW Class 1/2 category yet matches almost exactly with NOAA adjusted, homogenized data in all categories including poorly sited (when they say homogenized they *mean* homogenized).

    The CRS-ONLY standout needs to be explained.

    https://wattsupwiththat.files.wordpress.com/2015/12/agu-poster-watts-website-release.pdf

    • Heh! You should get a load of Tmax.

      0.410C/decade.

      And MMTS adjusted to CRS is the answer? It’s the wrong answer, says this wargame designer-developer-scenario-modeler. The VeeV likes to smilingly twit me a bit for that. What I say is he should use it.

    • Correction: not 0.410. Make that CRS Tmax 0.442C/decade.

    • David Springer

      Yes, I noted the chart. I bet it’s got to do with the geometry and coating of the CRS vs. MMTS. White-painted surfaces, especially wood, which has a rough surface, tend to darken as they age from accumulation of black carbon. It’s worse in an area where there is much diesel or other combustion without particulate filters. This would cause a warming trend over a course of years during the day when the sun is shining. The polished plastic of the MMTS is less susceptible to soot accumulation.

      Next up, let’s figure out what happens when the box gets scrubbed with soap and water (rain won’t remove it) or painted. That would constitute a step change in temperature that wouldn’t be reflected in neighboring stations, thus triggering homogenization and upward adjustment to match neighbors which are in some random stage of soot darkening. The same would happen with BEST, which, instead of adjusting that station, splits it off into a new series to the same effect.

      So the slow incremental warming as the exterior darkens is all reflected in the decadal trend but the step-down cooling that happens when the box is cleaned or painted is rejected. All warming and no cooling in other words.

      Poor siting will generally put a station closer to a soot source.

      There ya go. Demystified.

      • There ya go. Demystified.

        Whenever they say that, what they really mean is layering on further mystery.

        Besides, the shade of the box can only be an issue if the Heat Sink Effect is also an issue. The box is a heat sink. The graying/peeling of the box affects how much heat actually sinks (and then plays varying degrees of havoc with Tmax trends).

        So, yes, Anthony’s paint issue is part and parcel of HSE.

      • P.S. Poor siting will not by any means necessarily put it near a soot source, not even on average. Urban soot does not appear to have a material effect on Tmean trend, at least not for the Class 1\2 unperturbed set. And in CONUS, rural microsite averages worse than urban, anyway.

      • David Springer

        A dark object warming up more in the sun than a light colored object is not referred to as a heat sink. You are obsessed with a very loose definition of heat sinks. Suggest you stop as it makes you appear poorly educated in science and engineering.

      • David Springer

        Certainly it tends to put it nearer a soot source. Civilization brings with it internal combustion engines, fireplaces, BBQ grills, and so forth. You’re very shallow. If I were Watts and Gammon I’d ask you to stifle yourself. What part do you actually play in this project?

      • “You’re very shallow.”

        Spoken from a very thin viewpoint.

      • A dark object warming up more in the sun than a light colored object is not referred to as a heat sink. You are obsessed with a very loose definition of heat sinks. Suggest you stop as it makes you appear poorly educated in science and engineering.

        But I have no education in science and engineering. I come in from a different path, entirely. I am the Old Wives’ Cure to their Modern Medicine. But some of those old wives were pretty sharp. And some of those “remedies” have passed modern “peer review”, too.

        For future reference: when I say “heat sink” I mean something that accumulates heat (more so than the normal background) and then hits the sensors at Tmax and Tmin, when the heat is released.

        It’s a lagging effect. If the temperatures were measured ’round the clock instead of only at Tmax and Tmin, daily temps would be distorted by the lag, but that might average out. If it didn’t solve all of the problem, it would at least solve some of it.

        Don’t you see? This is the same reason why Tmax and Tmin lag noon and midnight. That is also from heat sink (the ground). But when the heat sink is worse (like concrete), the effect is going to be greater.

        So HSE and microsite are not some separate phenomenon in and unto themselves. They are just part of the larger picture. The driveway is not doing anything the surrounding dirt is not doing. It’s just doing it faster and better. With an apparent effect on trend, not just offset.

        And that is where microsite and HSE are in the scheme of things. Say I.

      • What part do you actually play in this project?

        I suggest asking Anthony. (Or Mosh.)

      • You’re very shallow.

        I make up for it in width.

      • David Springer

        So you don’t know what role you play in this project. That makes two of us. In addition to climate blog whipping boy, it appears from the outside it’s drone assigned simple repetitive tasks too tedious and boring for the team members with some science and engineering backgrounds. In other words, you are to this project what Mosher is to BEST. Except Mosher is a lot more knowledgeable than you are, which is not complimentary to Mosher due to the low height of that bar.

        I suggest others well versed in relevant science and engineering ignore the hypothetical explanations on offer by the Watts team, focus on the unique data compilations, and come up with physical explanations for it.

        I believe gradual darkening over a course of years, happening to CRS stations and punctuated by undocumented cleaning and painting, is the explanation. The gradual darkening leads to increasing daytime maximums which are duly integrated into the record, while the cleaning and painting is detected as a cooling perturbation which triggers homogenization and rejection of the temperature step down.

        I would ask what aspect of the observations does not fit that explanation?

      • So you don’t know what role you play in this project.

        #2 coauthor. Does that help?

      • Except Mosher is a lot more knowledgeable than you are

        Gosh, yes. (Smarter, too.)

        I believe gradual darkening over a course of years happening to CRS stations punctuated by undocumented cleaning and painting is the explanation.

        They are repainted. Possibly not as often as they should be. Like you, I suspect there has been some net darkening since 1979, but I have no data to support that. FWIW, the CRS stations I surveyed from the ground were excellently and freshly painted.

        I think the darkening is an exacerbation of an effect already in play. I think you are right that there may be some net darkening over the study period. But that is not all of the difference, only part of it.

        It is all part of the effect of heat sink, from the ground up. The concrete, the CRS box itself, the darkening of the CRS box, etc., merely add to an effect already in play. We know some types of terrain warm more quickly than others. And what’s good for ground color and density is good for wooden boxes and paved surfaces.

        What we are seeing here is not isolated, nor is it complex. It is ubiquitous and simple, almost to the point of triviality. Generic. Part of an already known process.

  75. David Springer

    Holy Air Movement Batman!

    http://www.homogenisation.org/files/private/WG1/Bibliography/Applications/Applications%20(F-J)/hubbard_etal.pdf

    It’s worse than we thought.

    Wind speed drastically affects the temperature reading of MMTS stations. Indicated temperature changes 1C or more as wind speed varies from 0 to 6 m/s.

    So how do you boys (Venema/Mosher) factor this into the equation? Are you measuring temperature trends or wind speed trends? LOL

    • Yet another semi-known, semi-unknown.

      But wind brings in different air from different places and therefore brings different temperatures. The CRS box is still retaining and releasing energy, and that will likely temper the effects of a breeze.

      Same lack of direction here as with the current MMTS conversion: every time we find a disparity, we start by asking ourselves what is wrong now with the MMTS. We should maybe be asking ourselves what new damn problem we just turned up with the CRS.

      Question here is finding out what’s Copernicus and what’s Ptolemy.

      As an aside, wind direction would certainly affect how a heat sink would bias a sensor. Probably in a net-neutral fashion (for our purposes), but it would have to be examined. Maybe there is something systematic which would make it less severe or more. More work for some lucky person.

      • Could you make seasonal temperature plots like your annual plot?

        Guess that should be possible without disclosing too much data.

      • David Springer

        Good idea. New methods of examining the data are the best way to coax more clues into the light. The CRS anomaly was a big one.

      • Parked cars are deathtraps for dogs: On a 78-degree day, the temperature inside a parked car can soar to between 100 and 120 degrees in just minutes, and on a 90-degree day, the interior temperature can reach as high as 160 degrees in less than 10 minutes.

        An example with “real numbers” (allegedly).

        So an outside-the-car rise over time (under a day in this case) from 78F to 90F is compared with an inside-the-car rise from 120F to 160F. That is a 12F increase outside vs. a 40F increase inside the car.

        The heat sink (the car) is warming at over twice the rate; the trend inside the car outruns the trend outside it.

        We are, by analogy, measuring our temps largely from inside the car. And, yeah, the car also cools correspondingly faster during the subsequent “cooling phase”.

        So if there is no overall trend, there will be no divergence in trend as a result of spurious heat sink effect. But if there is a trend, either cooling or warming, that trend will be exaggerated.

      • David Springer

        Not necessarily different air temperature but that’s irrelevant. We only care about measuring the true ambient temperature.

        The problem is ventilation or the lack thereof. CRN stations have ventilator fans. Read the linked article carefully. The time of observation and solar power make a difference too. When the sun is shining hard from a clear sky, the innards of the box heat up above ambient, giving a false reading. A breeze brings in ambient air and a truer temperature.

        But it’s still worse than we thought. If there’s a little rain getting the box wet and a breeze comes along, we get ourselves a miniature swamp cooler and suddenly air much colder than true ambient in the box.

        And all this happens to boxes that are in some random stage of soot darkening compounding the mess further.

        Surface station data is pretty much useless no matter what for the purpose of discovering true decadal trends accurate to tenths of a degree, with all these confounding factors. It’s a wicked mess.

      • “Surface station data is pretty much useless no matter what for the purpose of discovering true decadal trends accurate to tenths of a degree with all these confounding factors. It’s a wicked mess.”

        And they are probably only of true use if you do a simple ‘reference to self’ to determine any anomaly information. Wider comparisons require additional maths/steps which may muddy rather than improve that information.

      • David Springer

        Reference to self doesn’t work either. To get decadal trends requires decadal consistency in the instrument. A whole new and more expensive paradigm is needed in surface station instrumentation to eliminate all the confounding factors. Of course it’s too late now as we’d need it deployed for decades before getting a long enough history to separate climate from weather.

        There are two metrics that are not some hopeless combination of inconsistent, imprecise, and inaccurate: MSU/AMSU satellites and ARGO diving buoys. These were designed for the task. Nothing else comes close.

      • “Reference to self doesn’t work either.”

        I disagree.

        All that is achieved with baseline periods and wider comparisons is results that hide any self-references that may be present but ignored.

        If you only do reference to self, then even extremely short series (disconnected from other ‘verifying’ stations, which actually require interpolation/extrapolation to be derived) can be utilised as well.

        “It was hot that summer” is valuable information. Absolute is interesting but not helpful. Relative is.

        Sure, you need to separate out station moves. I believe that they should be separate series (à la BEST). Beyond that, however, I challenge the need for, or usability of, the extra maths.

      • David Springer

        You don’t seem to understand the problem, which makes your responses little more (if more at all) than meaningless babble.

        BEST does nothing but self-reference. If, for whatever reason, they determine a station has been perturbed, they don’t adjust it; they end the series at the point of perturbation and start a new series as if the old station were replaced. You are conflating self-reference in establishing trends, which is valid, with using neighboring stations to detect perturbations, which is dicey and error-prone because the stations used as cross-reference are usually poorly sited, poor siting being in the vast majority.

        The infilling you keep bringing up is irrelevant babbling. Gridded data is used for climate analysis, not a continuous temperature field. They don’t even pretend to be constructing a continuous temperature field, but rather the average of a gridded field with grid blocks consisting of tens of thousands of square miles each.

      • David Springer

        For the sake of educable others… using gridded data for climate analysis is required so that finite computing resources can work the model through from start to finish. It’s also a major source of error, because individual convection cells, and clouds in general, are much smaller than the grid cells. So they have to estimate the effect of the water cycle rather than calculate it on the fly from first principles. Given that about two thirds of heat transport from surface to emission altitude (where the atmosphere becomes too thin to block thermal radiation) is due to evaporation and precipitation, that’s a huge hole in the model to plug with a simple parameter. Energy transport by this mechanism is called “latent” because the energy is carried insensibly as latent heat of vaporization. Insensibly means a thermometer does not and cannot sense the energy content of the air, ergo “insensible”.

      • Could you make seasonal temperature plots like your annual plot?

        Guess that should be possible without disclosing too much data.

        We wouldn’t have to disclose any data at all to do that. It would just involve another round of work for me. And while I would do it if no one else did, I think that when we do release, others will do it quicker and better than I ever could.

        But if no one does, even after the data is released, I will do it personally (it’s always been somewhere on the list), as we will be looking at a lot of these loose ends in followup.

      • “BEST does nothing but self-reference”.

        And extrapolation/interpolation to determine the data quality of individual points, adjusting those that do not fit its own estimated 3D Temperature Field.

        Please think about what I have said. You do NOT know the 3D Temperature Field from the point samples. You estimate it so that you can then compare it to the models, which are volume-based, as you note elsewhere.

      • BEST does nothing but self-reference. If, for whatever reason, they determine a station has been perturbed, they don’t adjust it; they end the series at the point of perturbation and start a new series as if the old station were replaced.

        We and BEST use opposite approaches to a common end. One point of difference (aside from pairwise that fails to consider microsite), though, is that BEST never saw a jump they didn’t split. They are, in effect, inferring missing metadata. For GHCN, they have little choice. If they factor in microsite, their results will be better.

        We take a flip-side approach. For our metadata-rich patch, we do not infer. If lack of inference were the problem, we would not get the same exact wiggles diverging over the study period. We would be seeing blips and jumps and different squiggles. Besides, sometimes a jump is just a jump.

        That tells us that, while USHCN2 metadata is not perfect, any problems are reduced to nibbling ’round the edges and are neither widespread nor systematic.
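
        To make the split-don’t-adjust (“scalpel”) idea discussed above concrete, here is a toy Python sketch. The window-mean breakpoint detector is my own stand-in for illustration; it is not BEST’s actual algorithm.

        ```python
        import numpy as np

        def find_jump(series, window=12):
            """Toy changepoint finder: the index where the means of the
            adjacent before/after windows differ the most."""
            diffs = [abs(series[i - window:i].mean() - series[i:i + window].mean())
                     for i in range(window, len(series) - window)]
            return window + int(np.argmax(diffs))

        # A monthly series with a -1.5C step (say, a repaint) at month 120.
        rng = np.random.default_rng(0)
        x = rng.normal(0.0, 0.2, 240)
        x[120:] -= 1.5

        k = find_jump(x)
        first, second = x[:k], x[k:]   # the "scalpel": split, don't adjust
        print(k)                       # at or very near 120
        ```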

      • I would concur with your analysis of BEST and your methods. Both are attempting, in their own way, to reduce some of the effects that plague point-sampled data and the need for accuracy in each individual point’s data.

        That is not what I am pointing (pun) out.

        The point data is then extrapolated/interpolated either to try and judge the quality of other nearby points (which can then be adjusted to fit the abstract model that has been created), to infill missing values, or to try and then estimate the true underlying 3D Temperature Field that is required for comparison to the Models.

        That step has and always will have a degree of uncertainty in its outcomes. We do NOT KNOW the 3D Field. We can estimate it fairly well from what we have. But only fairly well.

        The accuracy of that last step is often, deliberately it would seem on occasions, merged with the question you are answering, which is the perceived accuracy of the point data itself.

        The two things are quite separate and need to be addressed separately.

      • Mmmm. yes, yes, yes, yes (distribution is spotty to lousy).

      • @David Springer

        It is a very basic, simplistic example of the heat sink effect. What you mention (ventilation, etc.) lessens the effect, but here the effect is indeed much smaller: ~0.3C over 30 years.

        I blew it by the group. As W. the peacemaker would say, NG got us covered.

        And, yes, darkening and moisture re. CRS are indeed a part of the picture. (Like you, I’ll stick to the white line. And I figure a white car will reflect more IR, which is why paint matters.)

    • David Springer,

      Definitely worse than anyone thought.

      From your link –

      “Over the last two decades climate scientists have spent considerable effort assembling climate data and evaluating data homogeneity, especially for the air temperature and precipitation datasets. The motivation, in large part, is the interest in evaluating global climate change purported to be associated with the greenhouse effect at local, regional, and global scales.”

      Translation – “Twenty years of wasted effort. More to come.”

      Cheers.

    • The address for your concern is Watts et al., who implement a uniform MMTS adjustment.

      • David Springer

        You forgot the /sarc tag, Mr. Kyle Hilburn.

      • Who implement a uniform MMTS adjustment.

        Not uniform in the way you probably think. We add a uniform jump, yes, but we add it at the month of conversion, which means the effect on a station can run the entire gamut. As it’s a jump and not a blip, the effect will be greatest if it occurs in the middle of the time series in question (see the sketch at the end of this comment).

        However, one might regionalize the offsets, and (if done correctly) that would make the application more precise.

        Like with homogenization, we are trying to get a better end result more than we are concerned with the occasional, inevitable tactical mangling.

        And, frankly, I question the entire MMTS/CRS approach from top to bottom. Same philosophical problem as homogenization: you are adjusting the wrong thing. You should not be adjusting the MMTS to fit the CRS (with a 15-year pairwise? For a jump? Come again?).

        You should be looking at the CRS units and wondering why Tmax is a daylight scandal. And what you are looking at (a nice heat-sinky wooden box) is what to look at.

        Then you shouldn’t be jacking up MMTS trends (esp. Tmax) with pairwise; you should be dumbing down the CRS trends. Startpoints? You cooled ’em, we’ll warm ’em. When it comes to CRS v. MMTS, I think the scientific community has Dreyfus confused with Major Henry.

        I bet if this is done, Haddy2, warty as it is, will have the last laugh.
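
        For the curious, a sketch of the jump-at-conversion idea described above (Python; the offset value and conversion months are invented, not the study’s): the same fixed offset moves a station’s 30-year trend a lot or a little depending on where the conversion falls, most when it falls mid-series.

        ```python
        import numpy as np

        def apply_mmts_step(series, conv_month, offset):
            """Add one fixed offset from the conversion month onward."""
            adjusted = np.asarray(series, dtype=float).copy()
            adjusted[conv_month:] += offset
            return adjusted

        months = 360
        base = np.linspace(0.0, 0.6, months)      # toy ~0.2C/decade series
        for conv in (36, 180, 324):               # early, middle, late
            adj = apply_mmts_step(base, conv, 0.15)
            slope = np.polyfit(np.arange(months), adj, 1)[0] * 120
            print(conv, round(slope, 3), "C/decade")   # middle moves most
        ```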

    • “So how do you boys (Venema/Mosher) factor this into the equation? Are you measuring temperature trends or wind speed trends? LOL”

      Neither. You can’t measure trends.

      • A metaphysically correct comment. Heh.

        But Dig We Must.

      • David Springer

        Not even metaphysically correct. Just Mosher-stupid. It has its own category of stupid.

        You boys ever see a digital thermometer with an arrow to the right of the temperature display: up, down, flat?

        What do you think the arrow is measuring?

        Or maybe you’ve been behind the yoke of an aircraft and are familiar with one of the standard instruments called a Rate of Climb indicator. What is it measuring?

        Like duh.

      • “Neither. You cant measure trends.”

        But you can measure changes in trends.

      • You guys got Mosh all wrong. He is sharp. He is blunt. He is profound. He has an uncanny sense of Schwerpunkt.

        I often disagree with his points, but once I’ve heard his take on them, I come away thinking more clearly about them.

  76. evanmjones,

    There doesn’t seem to be any point to any of this, with respect.

    Measuring the supposed temperature of an ever-moving air mass at a more or less fixed location achieves what, precisely? As others have already pointed out, you are probably measuring the temperature of the enclosure. As Tyndall wrote, measuring the temperature of the atmosphere is not easy. Just surrounding a thermometer with air does not necessarily give you a good indication of the air temperature, however you choose to define it.

    As a simple example, should the temperature of the enclosure drop, due to cloud or similar, the thermometer may well record a drop in temperature, even though the air flowing through the louvres has not changed its temperature. Conversely, an enclosure heated by the reemergence of the Sun may show an increase in temperature, as the thermometer will respond to the increase in radiation which it absorbs from the enclosure walls.

    Although it seemed like a good idea at the time, plonking a heap of thermometers here, there and everywhere, in a variety of locations at varying heights above ground, with scant regard to things like katabatic or anabatic effects, let alone environmental radiative influences ranging from the Sun to the effects of Man and his works, would seem to provide little, if anything, of value.

    I believe the aim is to see whether the globe is heating or cooling. There seems to be a naive belief that Nature will somehow ensure that thermometer readings imply something other than the temperature of the thermometer, however derived. If the interior of the Earth is above the surface temperature, then the Earth must cool. No amount of CO2 can prevent this.

    After four and a half billion years of sunlight, and an atmosphere containing CO2, the Earth has demonstrably cooled. To claim that the laws of thermodynamics have decided to reverse themselves recently seems a little far-fetched. Thermometers may well be showing higher temperatures, as populations and energy production increase. Warmists seem oblivious to the easily demonstrated fact that light of all frequencies, from the longest radio waves to the highest-energy gamma rays and beyond, travels in straight lines from its source.

    Whether this be the Sun, a lump of iron, or a diffuse gas makes no difference. Everything above absolute zero emits radiation. Pretending that meteorological instruments magically measure the temperature of the air which surrounds them is just silly. They respond to the totality of the radiation which they absorb.

    It really makes no difference. Fiddling with historical temperature records achieves no more than fiddling with historical cloud cover observations, and who would bother? Or adjusting rainfall records. Or visibility, or wind speed and direction. What has any of this to do with non-existent magical CO2 warming?

    All a bit of a mystery, I fear. But if it’s fun, and – even more fun – if you can get somebody to pay you to do it, why not?

    Cheers.

    • David Springer

      “Just surrounding a thermometer with air does not necessarily give you a good indication of the air temperature, however you choose to define it.”

      Actually that does give a good indication of air temperature. The problem here is the thermometer is surrounded by m0r0ns.

      • David Springer,

        You wrote –

        “Actually that does give a good indication of air temperature. The problem here is the thermometer is surrounded by m0r0ns.”

        I have to appeal to authority. Sorry about that. I can quote authorities such as Maxwell and Tyndall at length, but I’m sure you would dismiss them as old-fashioned. Possibly Richard Feynman might suffice, but you might dismiss quantum electrodynamics as mumbo jumbo.

        So here’s a very small test. Stand in front of a roaring fire on a cold winter’s day. Inside might be nice – less wind. Wall temperatures are below freezing. Now take your thermometer and measure the air temperature. According to you, a thermometer surrounded by air should measure the air temperature. The temperature seems a bit high. Maybe you need to shield the thermometer from the fire. Whoops, maybe your body heat is affecting the thermometer. Okay, move the thermometer over there. No good, the winter sun shining through the window seems to be affecting things. Oh no, you just noticed the thermometer bulb has some soot on it. And so it goes.

        You’re right. The thermometer is surrounded by . . . climatologists.

        I’ll agree with Maxwell, Tyndall and Feynman. You can stick with Hansen, Mann, and Schmidt. Good luck.

        Cheers.

      • A thermometer (mercury or thermistor) is a point measuring device once you move beyond a cm-to-mm scale.

        Below that it operates as an integrating instrument over its surface to the outside world.

        Above that it operates as a point sampling device.

      • I agree with Mike in his particulars. However, I am willing to consider that there is a sort of bouncearound factor and that some (a lot?) of these things may cancel out.

        Oversampling is the best, crudest defense. That will mitigate non-systematic error. A die (d6) roll can be anywhere from 1 to 6, a far greater deviation than what we are seeing in temperature sensors. Yet if you roll 1218 dice, I’ll give you long odds the average will be close to 3.5.
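
        A quick simulation of the dice claim (Python; purely illustrative):

        ```python
        import numpy as np

        # One d6 lands anywhere from 1 to 6, but the mean of 1218 dice
        # hugs 3.5: non-systematic error shrinks with sample size.
        rng = np.random.default_rng(42)
        rolls = rng.integers(1, 7, size=1218)
        print(rolls.mean())   # ~3.5; standard error ~1.71/sqrt(1218) ~ 0.05
        ```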

        Systematic error can often be identified and ferreted out (as Watts et al. attempts).

        Mike and Mosh are saying the same thing, but in a different way. The way I would put it is that there is uncertainty as to data, therefore there is uncertainty as to trend.

  77. Agree with your initial points. Your other points, while correct, are not the main point.

    They want to know how fast the surface is warming. So that’s what we are trying to best determine. One tends to be less concerned (except in an academic sense) with the earth’s cooling “as a whole”, core included.

    Satellite is a good proxy for surface, but LT annual trend varies a bit less (yet has higher trend). More ups and downs in play on the surface, but slightly lesser trend. That’s the “basic physics” viewpoint.

    This is borne out by our unperturbed Class 1\2 set, which clocks in just below UAH and RSS.

    It is especially true for the MMTS-majority subset, which further supports the notion that it is the CRS boxes that need the primary adjustment, not the MMTS. It may be true MMTS needs some adjustment. But that would be chump change compared with what’s gone wrong with CRS. And if CRS is off the beam, that throws the whole pre-MMTS-era record into serious question.

    • Agreed about the collection methods.

      Disagree that you use those in the way currently done to provide an accurate analysis step.

    • Evan.

      “Satellite is a good proxy for surface,…”

      Roy Spencer does not agree with you. At least not for MSU/AMSU:

      http://www.drroyspencer.com/2015/12/2015-will-be-the-3rd-warmest-year-in-the-satellite-record/#comment-203356

      • I’ve seen both the monthly and annual graphs.

        I notice that for UAH, there is less annual variation than with our surface data (either perturbed or unperturbed). It was also surprising to me because I had previously looked at monthly data and it appeared similar to the others, so it failed to occur to me that there was less annual variation going on. So much so, there are baseline comparison problems even with a common baseline.

        So I would agree that it would be a poor tracker at any particular point.

        But sat data may be a better proxy for trend than the surface stations themselves. There’s drift, clouds, ice, what have you, of course. The RSS discovery of the drift problem was radically important (maybe you were “there” for that?). But both UAH and RSS are tracking quite the same at this point.

        And the sats provide uniform coverage. Not everywhere. But need I bring up the GHCN distribution problem? Even if all those stations were CRN equipped and sited, there would not be anywhere near enough for global coverage. Not to mention sea-air temps.

        Furthermore, LT is expected to have a slightly higher trend than surface. (And our unperturbed Class 1\2s weigh in just a little under the sat trends.)

        So I think sats are the best proxy currently available for surface trend until the surface metrics can get straightened out. Eventually they will be.

    • evan: Links to published papers on the CRS bias?

      • You will find some recent articles on the (radiation error) bias of Cotton Region Shelters in my post on the cooling bias due to the introduction of CRS. Their references should allow you to find most literature on the topic.

      • As we haven’t published yet, none whatever. It never seems to have occurred to anyone that the mere presence of a heavy wood box in the immediate vicinity of a sensor was at the root of the problem in the first place. Everyone was running around trying to find out what was wrong with the MMTS, shooting at the wrong dog.

    • I do wish you had included a quote of what he actually said.

      “The bottom line is that boundary layer vapor is not a proxy for tropospheric temperature…but it is a pretty good proxy for SST.

      So please learn something about the issue, “ehak”, and stop making it sound like boundary layer vapor (which is basically what TPW is) somehow tells us what free-tropospheric temperature should be doing.”

  78. David Springer

    https://en.wikipedia.org/wiki/Heat_sink_(disambiguation)

    Heat sink (disambiguation)
    From Wikipedia, the free encyclopedia

    The term heat sink may refer to:

    Heat sink, a component used to conduct heat away from an object

    Thermal energy storage, a number of technologies that store energy in a thermal reservoir for later reuse

    Urban heat island, an urban area with a tendency to absorb sunlight at much higher rates than the natural landscape

    A heat reservoir which can absorb arbitrary amounts of energy without changing temperature

    I can’t tell from context of your writing which meaning you want.

    • Actually I am not sure if they don’t all apply in this conversation.

      • David Springer

        The last doesn’t apply to artificial structures but the rest do. Most of them however won’t change a decadal trend.

        The first applies to a modern heat pump which raises the temperature of a structure. It exhausts cold air. They’re a bit on the rare side and the outdoor temperature domain in which they operate is limited to approximately 40F-70F.

        The second would apply to something with a lot of mass that tends to stay at a constant temperature. Something made of cement and lots of it. But that will, in and of itself, only reduce min/max delta without changing the mean.

        The third is the one of interest. It is better described as land use change, where the albedo of the landscape has changed so that it absorbs more or less energy from the sun than the natural landscape. This will change a trend when there is a growing amount of land use change. Civilization tends to grow, so as it grows and/or encroaches it will create a trend.

        But these and their effects on a temperature station are all old news.

        I think what the Watts study has done, by isolating variables in a new way, is reveal a smoking gun – non-aspirated cotton region shelters. They show far higher warming trends than any of the other sorted categories, the trend matches the homogenized, adjusted USHCN trend, and, critically, the greatly increased trend is in daytime maximums.

        These clues are now sufficient IMO to reverse-engineer what has been happening. Gradual loss of reflectivity in the non-aspirated painted wood enclosures, due to degradation of paint and build-up of deposited airborne dirt, dust, and soot. The gradually darkening exterior of the enclosure effects a gradually increasing internal daytime temperature due to increased absorption of solar energy.

        This is buttressed by the fact that civilization tends to bring airborne soot with it, produced by combustion of various sorts, including coal-burning power plants, indoor and outdoor fireplaces, diesel engines, and so forth. This leads to an acceleration of the warming trend.

        The insidious thing that hid this for so long is the adjustments themselves and the metadata-independent means of detecting when adjustment is necessary. CRS shelters are cleaned and painted on occasion. That, however, is an instant event which would create a step change in temperature, in the cooler direction, not reflected in nearby stations. This will then be detected as a “perturbation” and the station gets adjusted to remove the sudden cooling. BEST would split it off into a new temperature series, which has the same effect. Thus the sudden cooling back to a baseline shiny new cotton region shelter escapes integration into the time/temperature series, while the gradual degradation over the years as it darkens is fully integrated.

        We now have our explanation, and when we compare the trend of non-CRS shelters we find it closely matches the trend in independent satellite-measured temperature of the lower troposphere.

        Mystery solved.
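
        A toy simulation of that one-way ratchet (Python; every rate and threshold here is invented for illustration, not measured): a flat climate plus a slow darkening drift that a repaint resets. If a naive homogenizer removes the repaint step-downs but keeps the drift, a spurious warming trend appears.

        ```python
        import numpy as np

        rng = np.random.default_rng(1)
        months, drift, repaint_every = 360, 0.002, 120   # assumed values

        # Sawtooth bias: slow darkening, reset to zero at each repaint.
        bias, level = np.zeros(months), 0.0
        for m in range(months):
            level = 0.0 if (m % repaint_every == 0 and m > 0) else level + drift
            bias[m] = level

        raw = rng.normal(0.0, 0.02, months) + bias       # flat climate + bias

        # Naive 'homogenization': erase any large month-to-month drop by
        # shifting the rest of the series up (step removed, drift kept).
        homog = raw.copy()
        for m in range(1, months):
            if homog[m] - homog[m - 1] < -0.15:
                homog[m:] += homog[m - 1] - homog[m]

        for name, y in (("raw", raw), ("homogenized", homog)):
            slope = np.polyfit(np.arange(months), y, 1)[0] * 120
            print(name, round(slope, 2), "C/decade")     # ~0.0 vs ~0.24
        ```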

      • “A heat reservoir which can absorb arbitrary amounts of energy without changing temperature”

        Agreed, but a pond will do as a stand-in.

  79. They do. But I’ll add a new context.

    The way it came about was a necessity arising from the Leroy papers, which never mention the term. He was much too proper. He referred only to heat “sources”.

    That term covered two things: 1.) direct sources (such as an air conditioner) that transfer heat to a sensor; 2.) an object that absorbs heat and re-radiates some of it towards the sensor.

    We felt a need to distinguish between the two, so we settled on the terms, “waste heat” and “heat sink”.

    I have changed my mind about the waste heat issue. It is not nothing, but not as bad as I thought. It makes the most comical photo ops, but I think it is chipping around the edges. In our study, heat sink rules.

    The problem is paved surfaces and structures. Roads and runways do not generate their own heat; they get that way by absorbing it. They may even depress the temperature while the temperature is rising. But the readings are Tmax\Tmin.

    And when do those occur? Generally around 4:00PM and 5:00AM. So there is a lag effect, or else Tmax would be at noon and Tmin at midnight. What is causing that lag effect? The ground itself. The ground goes on releasing heat for a few hours even as the sun wanes, and then it steadily releases as much as it can by the time Tmin rolls around. Poor microsite does that too. And it does it more and better. It hits you worst at both ends, max and min. (A toy sketch of this lag follows at the end of this comment.)

    Bad microsite doesn’t magically conjure up a new effect. It’s just an extra side of the old effect. A big one.

    As for wiki, my response is what the fortuneteller told the cop. All those things described are looked at only from the pre-lag period. And it is during that pre-lag period that a heat sink is slowing down warming. But we do not record data at that time. We observe ’round the clock, but we record only Tmax and Tmin. And that’s the apex during the lag.

    To cut to the chase, all those definitions are about how the heat comes in, but have nothing to say about where all that absorbed heat ultimately goes. Except maybe to greenhouse operators, who will sometimes place a nice concrete block in order to slow cooling and take the edge off Tmin.

    As far as I am concerned, wiki can just add a new term. “In climate science, an object that absorbs heat (as described above) and subsequently re-radiates it, which may affect the readings of a nearby sensor.”
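
    Since the lag keeps coming up, here is a minimal relaxation sketch of it (Python; tau values invented). A sensor coupled to a thermal mass relaxes toward the forcing, so its peak (the recorded Tmax) lands hours after the noon peak of the forcing, and more thermal mass means more lag. This illustrates the lag only, not the trend claim.

    ```python
    import numpy as np

    dt = 0.1
    hours = np.arange(0.0, 48.0, dt)
    forcing = np.sin(2 * np.pi * (hours - 6.0) / 24.0)   # peaks at noon

    def relax(tau):
        """First-order response dT/dt = (F - T) / tau."""
        T = np.zeros_like(forcing)
        for i in range(1, len(T)):
            T[i] = T[i - 1] + dt * (forcing[i - 1] - T[i - 1]) / tau
        return T

    half = len(hours) // 2                               # use day 2 only
    for tau in (1.0, 3.0, 6.0):                          # rising thermal mass
        T = relax(tau)
        tmax_hour = hours[half + np.argmax(T[half:])] % 24
        print(tau, "->", round(tmax_hour, 1))            # ~13.0, 14.5, 15.8
    ```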

    • Evan

      Some years ago I wrote this

      https://noconsensus.wordpress.com/2009/11/25/triplets-on-the-hudson-river/

      If you look at the data below figure 10 you will see reference to a number of changes in the built and natural environment equating to your paved surfaces.

      These evolving changes must have had a profound effect on the Central Park temperature station. My point is that so many sites are so compromised by moves, the built environment, elevation changes, shading, instrumental faults, observer faults, etc., that it is difficult to do more than draw a general picture of the evolving climate.

      The idea that we can pin it down to tenths of a degree or that we know a global figure back to 1880 should not be entertained. Did you ever read the book by von Hann from 1903? In it he lists many of the station siting problems still being grappled with today.

      I can send you a link if you have never seen it.

      tonyb

      • climatereason,

        And of course, because air has a habit of moving, the air occupying the screen at any given time may have moved there from elsewhere.

        Climatologists may not know this phenomenon, but it’s commonly known as “wind”. Unfortunately, this movement of heated air may result in a maximum temperature, which, while true, has recorded the effect of a heat source some distance away. Hot air doesn’t always rise as rapidly as one might think.

        I won’t bother with examples from experience, but your reluctance to place too much reliance on thermometer records is not misplaced, in my view.

        Cheers.

      • Indeed. Another assumption that needs investigating. Wind means that temperature moves in waves across any sensor. The Field in between multiple copies of those sensors is not a fixed one. Estimation of values in that Field must use temporal calculations as well as spatial ones to be accurate.

    • ““In climate science, an object that absorbs heat (as described above) and subsequently re-radiates it, which may affect the readings of a nearby sensor.””

      The absorption may/will affect Tmax; the re-radiation will/may affect Tmin?

    • Did you ever read the book by von Hann from 1903? In it he lists many of the station siting problems still being grappled with today.

      I can send you a link if you have never seen it.

      Sounds interesting. Yes, if you can.

  80. It is re-radiation in both cases. The same reason Tmax comes hours after noon and Tmin, hours after midnight. The ground itself is a heat sink. Paved surfaces and structures just do the same thing more and better. (And the same to your not-so-little CRS box, too.)

    • David Springer

      Yes but that action doesn’t change a long term trend.

      • Yes but that action doesn’t change a long term trend.

        I think it does. A trend is often different depending on the characteristics of the ground alone. We see this in the different rates the different regions warm, regardless of siting, or even adjustment. It’s “baked in”. When you alter the effect by placement of cement, etc., you are affecting that equation.

    • David Springer

      Actually no, it isn’t re-radiation in both cases. You need to start thinking about energy in, energy out, when and how much. When the sun is the highest in the sky the surface is absorbing the most energy. But even after the input from the sun tapers off the surface is still absorbing more energy than it is losing so it continues to get warmer for a few hours.

      The bottom line regarding the surface as a heat sink is that it loses as much energy as it gains every day (seasonality excepted), so there is no change in a longer-term trend. The change in trend occurs when there is a change in how much energy is absorbed when the sun is shining. This happens when the surface albedo changes and/or when the amount of evaporative cooling changes. Going from plants to cement changes both the albedo and the amount of evaporative cooling. If the cement is gradually gaining in area over plants, then it will change the trend commensurately; otherwise it’s a step change when the cement was poured.

      What your exercise revealed is that Stevenson Boxes (Cotton Region Shelters; CRS) have twice as much warming trend, but only in daytime maximum temperature. Since the trend isn’t reflected at night or in MMTS sensors, and isn’t different rural vs. city, or perturbed vs. unperturbed, or homogenized vs. unhomogenized, it really limits the hypotheses capable of explaining it.

      As far as I can see it limits it to one hypothesis – the boxes themselves grow gradually darker over a course of years. In cases where the box gets a good cleaning or repainting, it shows up in the record as a step change to cooler temperatures. A difference signal is generated with neighboring stations that weren’t rejuvenated, and the homogenization algorithms detect the discrepancy and eliminate the step change. In effect, all the gradual warming due to darkening gets integrated into the temperature record, while all the cooling events when a box gets cleaned or painted are not. So we get all the warming due to box darkening and none of the cooling due to box lightening.

      Feel free to try poking holes in that hypothesis. Good luck.

      • David Springer,

        With some trepidation, may I add a little?

        As you have pointed out, the thermometer enclosure is an energy integrator. There are a couple of other factors which spring to mind – clouds, and pollution. Examination of rareps (radar reports), which can at least compare cloud concentrations provided the same radar wavelength is used, can show changing patterns of cloud formation, density and movement as urban areas form and grow. Cloud cover affects energy received and lost.

        Pollution, particularly where particulates are involved, will affect both the rate at which the enclosure heats and cools, and recorded maxima and minima. I certainly won’t attempt to poke holes in your hypothesis. If I understood you correctly, I agree.

        Cheers.

      • The bottom line regarding the surface as a heat sink is that it loses as much energy as it gains every day (seasonality excepted), so there is no change in a longer-term trend.

        On a daily average? Quite possibly. (The heat sink lags the temps at, say, 10AM.) But we only record Tmax and Tmin. So as far as the data goes, the other points are moot.

        Note that in a cooling phase, the process reverses, returning to the original starting point. The only reason we are seeing an overall trend exaggeration during our study period is that we are looking at a real, genuine warming trend.

        Deserts, for example, tend to warm faster during the day, cool faster at night.

      • Of course, a pendulum with a weight attached always follows the same path as one without. Duh!

      • But even after the input from the sun tapers off the surface is still absorbing more energy than it is losing so it continues to get warmer for a few hours.

        Right. And the process is reversed at night, but there is still extra energy in the denser sinks being emitted at Tmin.

        The (very loose) closed-car example demonstrates this. At 7AM, when it is 50 degrees, the car is maybe 5F warmer than the outside. By 12:00 it is 70F outside, and closer to 90F in the car. During the 7AM-to-12PM “mini-warming phase”, the warming trend inside the car (55F to 90F) is near-double the trend outside the car (50F to 70F).

        And, yes, the process reverses itself at night as it cools, producing an exaggerated cooling trend during that stretch.

      • David Springer

        I suggest you set up an actual experiment to demonstrate this supposed effect. It should not be difficult in a laboratory setting since it’s just a heat source, heat sink, and digital thermometer. You won’t get much traction with your mind games no matter how many times you try to explain it.

      • Let’s start with a pendulum with a weight attached by a short piece of wire. Mimics the above quite well I think.

        One claim is that the pendulum is unaffected. The other is that it is.

        Guess which way that plays out.

      • David Springer

        It’s not the size of the pendulum that matters but how you use it!

  81. These evolving changes must have had a profound effect on the Central Park temperature station.

    That was one I surveyed up close and personal. Good microsite indeed. And, during our study period, low trend, too. (You may have seen the adjusted data. That one got hiked way up.)

    And was it you that mentioned Mohonk Lake? I surveyed that one, too, from the ground. Mesosite is about as rural as it gets. They paved the main drag over the years, but that famously perfect station was never other than a Class 4 CRS, 5.6 m from a house that was there when it was installed. (High trend, too.)

    • Evan

      This from one of my articles; reading the book by von Hann is most instructive

      “Many of these basic concerns can be seen in this contemporary description from a 1903 book which relates how temperature recordings of the time were handled. The “Handbook of Climatology” by Dr Julius von Hann (b. 23 March 1839, d. 1 October 1921) contains the sometimes acerbic observations of this Austrian, considered the ‘Father of Meteorology.’

      The book touches on many fascinating aspects of the science of climatology at the time, although here we will restrict ourselves to observations on land temperatures. (It can be read in a number of formats shown on the left of the page on the link below).

      http://www.archive.org/details/pt1hanhdbookofcli00hannuoft

      This material is taken from Chapter 6 which describes how mean daily temperatures are taken;

      “If the mean is derived from frequent observations made during the daytime only, as is still often the case, the resulting mean is too high…a station whose mean is obtained in this way seems much warmer with reference to other stations than it really is and erroneous conclusions are therefore drawn on its climate, thus (for example) the mean annual temperature of Rome was given as 16.4c by a seemingly trustworthy Italian authority, while it is really 15.5c.”

      That readings should be routinely taken in this manner as late as the 1900s, even in major European centers, is somewhat surprising.

      There are numerous veiled criticisms in this vein;

      “…the means derived from the daily extremes (max and min readings) also give values which are somewhat too high, the difference being about 0.4c in the majority of climates throughout the year.”
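
      A quick numerical check of von Hann’s first point (Python; the diurnal curve and observation hours are invented for illustration): sampling only in daytime biases the daily mean high. A real diurnal cycle is also skewed rather than a pure sine, which is what likewise biases the max/min mean he mentions.

      ```python
      import numpy as np

      hours = np.arange(0, 24, 0.1)
      temp = 15.0 + 5.0 * np.sin(2 * np.pi * (hours - 9.0) / 24.0)  # Tmax ~3PM

      true_mean = temp.mean()                          # 15.0 over the full day
      daytime = [temp[h * 10] for h in (8, 14, 20)]    # daytime-only readings
      print(round(true_mean, 2), round(float(np.mean(daytime)), 2))  # 15.0 16.61
      ```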

      Can I also suggest that a read of the book linked to here would also be worthwhile;

      http://www.isac.cnr.it/~microcl/climatologia/improve.php

      It is the ultimate book on microsites and involves 7 historic European temperature data sets that were exhaustively re-examined by Camuffo and Jones.


      Their budget of 7 million Euros to look at these 7 sites may be just a tad higher than the budgets available to you from Big Oil, Big Wind and Big Frack :)

      tonyb