Berkeley Earth: raw versus adjusted temperature data

by Robert Rohde, Zeke Hausfather, Steve Mosher

Christopher Booker’s recent piece, along with a few others, has once again raised the issue of adjustments to various temperature series, including those made by Berkeley Earth. Booker has now doubled down, accusing people of fraud, and Anthony Watts previously insinuated that adjustments are somehow criminal.

Berkeley Earth developed a methodology for automating the adjustment process, in part to answer the suspicions people had about the fairness of human-aided adjustments. The particulars of the process will be covered in a separate post. For now we want to understand the magnitude of these adjustments and what they do to the relevant climate metric: the global time series. As we will see, the “biggest fraud” of all time and this “criminal action” amount to nothing.

The global time series is important if, for example, we want to estimate climate sensitivity, determine how much warmer it is today than in the Little Ice Age, compare today’s temperature with the temperature in the MWP or Holocene, or make arguments about natural variability versus anthropogenic warming.

[Figure 1: HomogenizationGlobalLand]

Figure 1. Unadjusted data results are shown in the blue curve. The green curve shows the results if only metadata breakpoints are considered. The red curve depicts all adjustments.

As Figure 1 illustrates, the effect of adjustments on the global time series is tiny in the period after 1900 and small in the period before 1900. Our approach has a legacy that goes back to the work of John Christy when he worked as state climatologist. Here he describes his technique:

The idea behind the homogenization technique is to identify points in time in each station’s record at which a change of some sort occurred. These are called segment break points. A single station may have a number of segment break points so that its entire record becomes a set of segments each of which requires some adjustment to make a homogeneous time series. Initially, segment break points were identified in every case when one of the following situations occurred: (i) a station move, (ii) a change in time of observation, and (iii) a clear indication of instrument change.
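
To make the slicing step concrete, here is a toy sketch (in Python) of cutting a single station’s record into segments at documented break points. It is illustrative only, not the code we actually run, and the data layout and column names are hypothetical.

import pandas as pd

def split_at_breakpoints(station: pd.DataFrame, break_dates: list) -> list:
    """Split one station's monthly record into homogeneous segments.

    station     : DataFrame with 'date' and 'temp' columns (monthly means)
    break_dates : dates of documented events from the station metadata
                  (moves, time-of-observation changes, instrument changes)
    """
    station = station.sort_values("date")
    segments = []
    start = station["date"].min()
    for b in sorted(break_dates):
        seg = station[(station["date"] >= start) & (station["date"] < b)]
        if not seg.empty:
            segments.append(seg)
        start = b
    segments.append(station[station["date"] >= start])  # final segment
    return segments

Each segment is then treated as an independent record, and the averaging step estimates the offsets between segments rather than applying hand adjustments to any of them.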

The results of using only these documented (metadata) breakpoints are shown in the green curve. In addition, we break, or slice, records where the data itself suggests a break; we refer to these as empirical breaks. The result of applying both kinds of breaks is shown in red. The impact of adjustments on the global record is scientifically inconsequential.
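
An empirical break can be thought of as a change point in a station-minus-regional-expectation difference series. The sketch below scans such a series with a simple two-sample statistic; it conveys the idea but is not our actual detection algorithm, and the inputs and the minimum segment length are made up.

import numpy as np

def find_empirical_break(station, reference, min_seg=24):
    """Locate the strongest candidate change point in a difference series.

    station, reference : equal-length arrays of monthly anomalies, where
                         'reference' is an expectation built from neighbours
    Returns (index of the best candidate break, its test statistic).
    """
    diff = np.asarray(station, dtype=float) - np.asarray(reference, dtype=float)
    n = len(diff)
    best_idx, best_stat = None, 0.0
    for k in range(min_seg, n - min_seg):
        left, right = diff[:k], diff[k:]
        sep = np.sqrt(left.var(ddof=1) / len(left) + right.var(ddof=1) / len(right))
        stat = abs(left.mean() - right.mean()) / sep
        if stat > best_stat:
            best_idx, best_stat = k, stat
    return best_idx, best_stat

In a real implementation the statistic would be compared against a significance threshold and the record sliced recursively until no further breaks are found.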

On smaller spatial scales, however, we can see that there are certain areas where we could pick stations to support two opposite conclusions: we could show that adjustments warm the record, and we could show that adjustments cool the record. First, a chart for people interested in accusing the adjustment algorithm of warming the planet:

[Figure 2: HomogenizationUSA]

Figure 2. Adjustments for the contiguous US

And next, a chart for those who want to accuse the algorithm of cooling the planet:

[Figure 3: HomogenizationAfrica]

Figure 3. Africa adjustments

The differences between the various approaches (unadjusted, metadata adjusted, and fully adjusted) are shown below in Figure 4, along with data for selected regions.

[Figure 4: HomogenizationDifferences]

Figure 4. The top panel depicts the difference between all adjustments and no adjustments. The black trace shows the difference for all land. Blue depicts USA; red Africa; and green Europe. The lower panel depicts the difference between all adjustments and metadata only adjustments.

As the black trace in the upper panel shows, the impact of all adjustments (adjusted minus unadjusted) is effectively zero back to 1900; prior to that, the adjustments cool the record slightly. However, we can also see that the adjustments have different effects depending on the continent you choose. Africa, which has about 20% of the land area of the globe, has adjustments that cool its record from 1960 to today, while the US (around 5% of all land) has adjustments that warm its record.
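
For readers who want to reproduce this kind of comparison from the published annual series, the quantities plotted in Figure 4 amount to something like the sketch below (the array names are hypothetical, and the real series carry their own baselines and uncertainties).

import numpy as np

def adjustment_impact(adjusted, unadjusted, years, start, end):
    """Adjusted-minus-unadjusted difference series, plus the change in the
    linear trend over [start, end] in degrees C per decade."""
    adjusted = np.asarray(adjusted, dtype=float)
    unadjusted = np.asarray(unadjusted, dtype=float)
    years = np.asarray(years, dtype=float)
    diff = adjusted - unadjusted
    mask = (years >= start) & (years <= end)
    trend_adj = np.polyfit(years[mask], adjusted[mask], 1)[0] * 10
    trend_raw = np.polyfit(years[mask], unadjusted[mask], 1)[0] * 10
    return diff, trend_adj - trend_raw

# e.g. adjustment_impact(global_adj, global_raw, years, 1900, 2014) gives the
# near-zero global trend difference described above, while the same call on a
# regional pair (US or Africa) shows the larger, offsetting regional effects.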

Spatial maps tell the same story: certain regions are cooled and other regions are warmed. On balance, the effect of adjustments is inconsequential.

[Figure 5: HomogenizationMap2014]

Figure 5. The effects of adjustments on 2014 temperatures

[Figure 6: HomogenizationMap2000s]

Figure 6. The effects of adjustments on the last 14 years

[Figure 7: HomogenizationTrend1900]

Figure 7. The effect on trends since 1900

[Figure 8: HomogenizationTrend1960]

Figure 8. The effect on trends since 1960

Since the algorithm works to correct both positive and negative distortions, it is possible to hunt through station data and find examples of adjustments that warm. It’s also possible to find stations that are cooled.

One other feature of the approach that requires some comment is its tendency to produce a smoother field than gridded approaches. In a gridded approach such as Hadley CRU’s, the world is carved up into discrete grid cells. Stations within each cell are then averaged. This produces artifacts along gridlines. In contrast, the Berkeley approach has a smoother field.
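
As a rough illustration of where those artifacts come from, the simplest possible grid-box average looks like the sketch below. This is not CRU’s actual procedure (which also anomalizes, weights, and handles coverage), but it shows why a station influences only the cell it happens to fall in.

import numpy as np

def grid_box_average(lats, lons, anomalies, cell=5.0):
    """Average station anomalies into lat/lon boxes of size `cell` degrees.

    A station contributes only to the box containing it, which is what
    creates the discontinuities along box boundaries mentioned above."""
    lat_edges = np.arange(-90.0, 90.0 + cell, cell)
    lon_edges = np.arange(-180.0, 180.0 + cell, cell)
    sums = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
    counts = np.zeros_like(sums)
    i = np.clip(np.digitize(lats, lat_edges) - 1, 0, sums.shape[0] - 1)
    j = np.clip(np.digitize(lons, lon_edges) - 1, 0, sums.shape[1] - 1)
    for a, b, v in zip(i, j, anomalies):
        sums[a, b] += v
        counts[a, b] += 1
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

# A kriging-style field instead weights every nearby station continuously with
# distance, which is why it has no grid-line edges (though it is smoother).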

If we knew the true field perfectly, we could decide whether or not our field was too smooth. But without that reference, we can only note that it lacks the edges of gridded approaches and tends to have a smoother field.

One approach to evaluating the fidelity of the local detail is to compare the field to other products built from other methods and source data. In this poster, http://static.berkeleyearth.org/posters/agu-2013-poster-1.pdf, we compared our field with fields from reanalysis, satellites, and other data producers. What we see is essentially the same story: over the total field, the various methods all produce similar answers. Locally, however, the answers vary. All temperature fields are estimates, spatial statistical estimates. They all aim at producing a useful global result, and they succeed. Deciding which result is also useful at the local level is an area of active research for us.

One final way to compare the results of various data producers is to compare their spatial variability against that produced by global climate models. While not dispositive, this comparison does indicate which temperature products are consistent with the variability found in simulations and which are less consistent.

[Figure 9: HomogenizationSpatialVariability]

Figure 9. Spatial variability. GCM results are depicted in blue. Black lines on the GCM results indicate the variation across model runs of the same model.

The vertical axis provides a measure of how much variability in temperature trends is observed across the whole field. The homogenized Berkeley, NASA GISS, and NOAA products all broadly agree with historical global climate model runs on this metric. The horizontal axis provides a measure of the local variability in trend (i.e., the average change in trend when travelling a distance of 750 km). On this metric, Berkeley, NASA GISS, and NOAA are all consistent with GCMs, but on the low side of the distribution.
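
Roughly speaking, the two axes can be computed from a gridded field of local trends as in the sketch below. The exact definitions behind the figure differ (area weighting, the precise handling of the 750 km separation), so treat this only as an illustration of the two metrics.

import numpy as np

EARTH_RADIUS_KM = 6371.0

def spatial_variability(trend_grid, lats, lons, separation_km=750.0, tol_km=100.0):
    """Two roughness metrics for a lat/lon grid of temperature trends:
    the overall standard deviation of trends across the field, and the mean
    absolute trend difference between points roughly `separation_km` apart."""
    lat2d, lon2d = np.meshgrid(lats, lons, indexing="ij")
    vals = np.asarray(trend_grid, dtype=float).ravel()
    lat_r = np.radians(lat2d.ravel())
    lon_r = np.radians(lon2d.ravel())
    ok = ~np.isnan(vals)
    vals, lat_r, lon_r = vals[ok], lat_r[ok], lon_r[ok]

    # Great-circle (haversine) distances between all pairs of grid points.
    dlat = lat_r[:, None] - lat_r[None, :]
    dlon = lon_r[:, None] - lon_r[None, :]
    a = np.sin(dlat / 2) ** 2 + np.cos(lat_r[:, None]) * np.cos(lat_r[None, :]) * np.sin(dlon / 2) ** 2
    dist = 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(np.clip(a, 0.0, 1.0)))

    pairs = np.abs(dist - separation_km) < tol_km
    local_roughness = np.abs(vals[:, None] - vals[None, :])[pairs].mean()
    return vals.std(), local_roughness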

In general, noise and inhomogeneities in temperature data will make a temperature field rougher while homogenization practices and spatial averaging will make it smoother. Since the true temperature distribution is unknown, determining the right amount of homogenization to best capture the local details is challenging, and an active area of research. However, as noted above, it makes very little difference to the global averages.

In summary, it is possible to look through 40,000 stations and select those that the algorithm has warmed while ignoring those that it has cooled. As the spatial maps show, it is also possible to select entire continents where the algorithm has warmed the record and to focus on other continents where the opposite is the case. Globally, however, the effect of adjustments is minor. It is minor because, on average, the biases that require adjustment mostly cancel each other out.

JC note: As with all guest posts, please keep your comments civil and relevant.

 

1,178 responses to “Berkeley Earth: raw versus adjusted temperature data”

  1. Thanks, guys. Now we can move on to the strongly positive water vapor feedback assertion. Where is the evidence?

    • I don’t vape

      • You wouldn’t.

        Kidding aside, it’s an impressive effort. You convinced me some time ago that BEST is about as good as we are going to get with what we have to work with in the surface data realm.

        I still have doubts about the accounting for UHI. There were 2 billion people in 1935; since then 5 billion have been added, in ever higher concentrations around cities, towns, villages, etc. And that’s where the thermometers are located. Let’s envision those 2 billion people living primitively in the open with their campfires and cookfires. Well, now we have 7 billion living in the same space, with bigger fires and firetrucks too. I don’t believe that UHI is negligible. But in the grand scheme of things, maybe it’s still negligible. The water vapor is the thing.

        You and the other guys who work on this for free deserve commendation. Not condemnation.

      • Don,

        Regarding UHI, we’ve done a fair bit of work looking for it.

        See my paper on the U.S., for example:
        ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/hausfather-etal2013.pdf
        Or our AGU poster from a few years back:
        http://wattsupwiththat.com/2011/12/05/the-impact-of-urbanization-on-land-temperature-trends/

        There is still more to do, but as of yet we haven’t found a particularly large effect globally.

      • Thanks for your efforts, Zeke. Keep looking. I think it’s there, somewhere:) But I have been wrong, back in 1987.

      • Yeah. Keep looking, Zeke. Don thinks it’s there. Sure, you’ve done a fair amount of looking and haven’t found it. But Don thinks it’s there. He hasn’t actually looked. But he thinks it’s there so you should keep looking. If you don’t find it in this universe, try a couple of others just in case.

      • You are overwrought, overeager and overselling it, joshie. Study Henny Youngman. Get back to us in about 4 years.

      • Don.

        Thanks.

        I would say the most useful thing that skeptics can do in terms of focusing their collective brain power is to look at UHI.

        Since 2008 zeke and I have spent tons of hours looking at adjustments and UHI.

        There is no story in adjustments. I mean a story that makes science headlines and not blog headaches. It’s the wrong tree to bark up.

        There might be a story in UHI. but it will take tons of work.
        and that means putting on a different hat. it means putting your own ideas to the test.

      • Don Monford wrote:
        “You convinced me some time ago that BEST is about as good as we are going to get with what we have to work with in the surface data realm.”

        +1
        And I used to be very suspicious indeed.

        I think that Booker and Delingpole suddenly jumping all over this will eventually reflect badly on them. Which is a shame, as they have previously shone a spotlight on some terrible scientific malpractice by alarmists. I think they hope the adjustments are the silver bullet that will finally kill CAGW alarmism. They aren’t.

      • Jonathan

        I am not necessarily disagreeing with you or Don. I think Mosh and Zeke have integrity, but they sure don’t translate that into intelligible or useable information.

        If Booker or Delingpole are reading this I can’t see anything that has been written that they could take away and say ‘we were wrong.’

        The global record suffers from wildly different levels of accuracy: the record from Albania is not equal to the record for Australia, for example.

        The methodology to collect the temperature data in the past is often highly variable. The numbers that are used are often as ‘anecdotal’ as my written historic records are claimed to be.

        The levels of certainty and accuracy proclaimed by all the data sets are by no means warranted. Global temperature to 1850? How? Why?

        The Climate undoubtedly varies. It was not virtually static for 1000 years as shown in the paleo proxy reconstructions and stated by the Met Office on their web site until around 2012. Looking at all the data it appears we are mostly a little warmer than we were during the LIA. Why is that surprising?

        Less certainty and more humility over the basic data would not go amiss and a clearer rebuttal of the claims made in the various articles cited here would help this particular sceptic to maintain his belief that there is no hoax, conspiracy or fraud. What I have come to recognise over the last ten years though is that there is an overbearing and unwarranted certainty, when frank admissions that ‘we don’t know’ would be more realistic.

        tonyb.

      • Tony,

        In general I agree with you. The real problem is the quality and coverage of the raw data in the first place. The adjustments are, to paraphrase Mosher, inconsequential. I’m quite happy to say that BEST is pretty much as good as we can get with the data we have, but even then it is no more than a rough indicator of general trends. Has it warmed since the LIA? Yes. Has it warmed since c1980? Yes. Has there been no significant warming (or cooling) since c1998? Yes. Do we know any of these things within 0.1°C? Hell no.

        But to claim there is deliberate fraud in the BEST adjustments, or that they constitute a massive scientific scandal is wrong. In my opinion, it’s the failure of the GCMs that Booker and Delingpole should be shouting about.

      • My main concern with homogenization is that the relationships that are used probably don’t hold over time.

      • I’m with Tony. BEST and the rest have produced a temp series that, whatever it reflects, doesn’t reflect history. The Northwest Passage was open and sailed in a single season by Henry Larsen in 1944. There was less ice in the Arctic in 1944 than now. It was obviously warmer in the Arctic then than now. We are told constantly that disappearing Arctic ice is an indicator of a warming world. You would not know that 1944 was hotter than now from BEST or the rest. I know BEST uses 16 data series, I know BEST uses raw data and only adjusts for breaks, measures against nearby stations when there are gaps and blah blah blah, whatever. BEST missed the 1940s heat. Something’s not right.

      • Steven, I read the posts linked at the top and never saw a reference to BEST or Berkeley Earth.
        Did I miss it or is this a strawman?

      • Steve,
        Very clean and to the point. Good job of showing the effort and direction that you are going. Thanks

    • “You wont like the answer if that approach is taken with…”

      Lemme do my best Joshua here:

      “They did it too, mommy!”

      Andrew

  2. There is no such thing as “raw data”; all data has been cooked! https://globalwarmingdenier.wordpress.com/

    • Unless, of course, you use the raw data like we do… http://berkeleyearth.org/source-files

    • @Zeke Hausfather I went to your link; that’s for creating confusion – here is the truth and all the facts::: the ”correct” monitoring is completely WRONG, not only the manipulated data; therefore: the overall ‘’global’’ temp is same every year, BUT hypothetically: even if there was any fluctuation in temp, nobody would have known, because nobody is monitoring on every 10m3, for every minute in 24h!!!

      1] monitoring only for the hottest minute in 24h and ignoring the other 1439 minutes, in which the temp doesn’t go up, or down, as in the ”hottest” minute…. statistically 1439 minutes against one…?!?! Hello ”statisticians! It’s same as: if the car is got 1440 different parts, but you are building that car, ”with one bolt only” you will not get very far…! Some places, sometimes warms by 5C from midnight to midday – other places at different times from midnight to the hottest minute in the day – IT WARMS UP BY 20c-25c- and 30C, in 12h difference – no swindler takes those things into account! Why?! Therefore: ”the hottest minute in 24h misleads by at least 10C!!! They conveniently overlook that; then ”pretend” to know to a hundredth of a degree precision, for the whole year, for the whole planet?!?! The ”Skeptics” are getting fattened on the Warmist bullshit…

      2] the ”highest temp minute in 24h, is not at the same time every day! Sometime is the ”hottest at 11, 50AM, most of the time is after 1pm = that is many more warmer minutes than previous day.

      3]example: tomorrow will be warmer by 2C than today, happens many times; and they will record 2C warmer – because the ”warmest” minute was 2C warmer, not ALL the rest of the 1439 minutes since midnight were warmer by 2C. b] question is: ”is it going to start from midnight, every minute to be warmer than the same minutes in previous day?! Therefore: recording only the hottest minute is meaningless! Nobody knows what was the temp yesterday, or last year on the WHOLE planet… but most of the fanatics in the blogosphere pretend to know with precision the temp for the whole year, for last thousands of years… What repetition and aggressive propaganda can do to a grown up person’s brains… tragic, tragic…

      4] on a small hill, put a thermometer on all 4 sides; all 4 will show different temperatures on different day and on SAME minute simultaneously – when you take in the account that: on many places one thermometer represents millions of square kilometers, where are thousandths of ”independent” variations, every few minutes, on every different altitudes = gives a clear picture about their ”global temperature” for last 50years… or 5000 years, or for the last two years. On small part of the planet is warmer for few weeks than normal – they declare it as: ”warmer year”… what a science… couple of months after, when on that same place is colder than normal – they avoid that place and point other place where is for 3-4 days warmer than normal… what a brainwashing scam…

      5] pointing at some place that is warmer than normal – is SAME as saying: ”the planet is warmer by 12C at lunch time, than before sunrise…? taking in consideration the size of the planet: one thermometer or 6000 thermometers, wouldn’t make any difference! ( look at their ”global” temp charts… they look like seismographs… with ”precision” to one hundredth of a degree, for the last thousandths of years… = the biggest con /lies since the homo-erectus invented language…

      6] a thermometer can monitor the temp in a room; but one thermometer for 10 000km2?!

      7] even those ”few” (6000) thermometers are not evenly distributed; no honest statistician would have taken to make ”statistic” if he wasn’t told: which individual thermometer, how much area represents. Example: if the workers in 4 countries have their pay packet increased by a dollar, and in 2 countries had ”decreased by a dollar Q: would the ”overall’ all workers in those 6 countries get more money, or less? Of course, statistic would say: ‘’yes’’ (the 4 countries were Luxembourg, Monaco, Belgium and Portugal, increased by a dollar. The other two were India and Chinese workers, decreased by a dollar) statistic would be wrong; because two thermometers represent much larger area than the other four combined. So much about the ‘’correct’’ temp data… (there are more thermometers in England monitoring for IPCC, than in Russia… England is a small dot on the map (but most of the lies come from there) Warmist Poms are the most shameless liars..

      8] when is sunny – on the ground is warmer / in upper atmosphere is cooler – BUT, when is cloudy, upper atmosphere is warmer, on the ground cooler – overall same temp; BUT, because ALL thermometers monitoring are on the first 2m from the ground = they are completely misleading! There is much less heat in the first 2m from the ground, than in the rest of 10km up. The rest of 10km up, is not on their ”globe”…?!

      9] for the shonks northern hemisphere summer is warmer by 3,8C than S/H summer. That tops the stupidity; they can’t get it correct even about same year. They come WRONG by 3,8C for two different reasons: a] N/H has more deserts, southern hemisphere has more water. Desert has warmer top temperature, BUT the night temperatures are cooler – by not taking all minutes in 24h, they are wrong by +/- 3C. In deserts get to 45-50C at day time, but nights are cold -/ on islands in south pacific between day / night temp is different 3-5C, is that science? B] on southern hemisphere are ”LESS” thermometers = less thermometers cannot say correct temperature against the N/H more thermometers, when you summon up all the numbers. So: only by those two factors they are wrong by +/- 3C, but when you say the last year’s temp cooler by 0,28C than today’s = it shows the sick propaganda… they call themselves ”scientist” Instead going to Antarctic, Arctic to get reumatizam and spend millions, they can get the whole truth on my blog; but they are scared from the truth as the devil from the cross… The truth: if they have same number of thermometers, distributed evenly AND every minute in 24h is taken in consideration = would have shown that: every day and every month of every year and millennium is ”overall” same temperature on the earth!!!

      10]almost all of those 6000 thermometers collecting data for the ”climatologist; are distributed on land – water covers 2/3 of the planet!!! If you don’t understand what that means… you are qualified to be a ”climatologist”…

      11]When you point out to them that: ‘’6000 thermometers cannot monitor the temp in the whole troposphere – thermometer is good to monitor room temp, but not one thermometer for 1000 km2 – 6000 thermometers is not enough to monitor the temp in all Hilton Hotel’s rooms’’ -> they instantly point out that: ‘’there is satellite temp monitoring’’! Well, ‘’satellite’’ is a great technology, very impressive; unfortunately, they don’t have 350km long thermometers, to monitor from space the temp on the ground! They use infrared photos that never covers the whole planet, in two color blotches for the whole of Pacific, or for the whole of US. The ‘’two’ colors represent THE different temp, BUT: if you look the evening weather report, it says that are many variations in temp even for the big cities in USA; would be much more variations if they were reporting for every square mile! B] temp distribution is three dimensional in the atmosphere and constantly changes, cannot present it on two-dimensional picture! Satellite monitoring is the biggest con! Unfortunately, person responsible for analyzing those pictures will not admit the truth – because he prefers to be seen as very important, by the gullible foot-solders, the lower genera and IQ Warmist & Skeptics…

      12]Earth’s temperature is not same as temperature in a human body I.e: if under the armpit goes 0,5C up = the whole body is higher by 0,5C, arms, legs, the lot. Therefore, can tell if is gone higher or not. Earth’s temp is different on every 100m and FLUCTUATES ”INDEPENDENTLY”! Which means: one thermometer cannot tell the temp correctly for 1km2, when one monitors for thousands of square kilometers = Warmist can get away with that sick trick, thanks to the ignorant phony Skeptics and bias media… (Skeptics don’t need even thermometers, a pit-bog on each hemisphere is sufficient for them… Nobody knows what was the earth’s temp last year – to save his / her life! They ”pretend” to know what was the earth’s temp for thousandths of years = that’s Warmist & Skeptic’s honesty!

      Using only those 12 points above; to put any leading Warmist on a witness stand, under oath => will end up in jail; for using the ”temperature data / charts” as factual =/ Skeptics & Warmist of lower genera and IQ in a nuthouse, for believing in warmer / colder years. Warmist only prosper and flourish, thanks to the Skeptic’s outdated Pagan beliefs…https://globalwarmingdenier.wordpress.com/2014/07/12/cooling-earth/

  3. Rud Istvan wrote in previous post | February 7, 2015 at 4:18 pm |
    “Steven, no quibble with any of your points except the last. Perhaps my meaning was not clear. It is indisputable that for at least a decade, the past has been cooled and, in some cases, the present warmed compared to ‘raw’. That is so for USHCN, GHCN, BOM Acorn, NIWA… Moreover, the tendency has been increasing, documented by simple archival comparisions of regional and global temperature series at different points in time. For example, Between 1999 and 2011 NOAA NCDC cooled CONUS 1933 and warmed 1999 by a net total of 0.7C and erasing the mid 1930s heat record. It is also indisputable that all this opposite to what NASA GiSS says it does in homogenization to resolve UHI, using Tokyo as the illustration. It is also indisputable that GHCN homogenization has with statistical ‘certainty’ a warming bias. See the Europe AGU 2012 paper available at http://www.itia.ntua.gr/en/docinfo/1212. All examples and more in essay When Data Isn’t, with footnotes”

    • Rud tells half the story. You’ll have to ask him why he and others ignore the following facts.

      1. Berkeley COOLS the global trends since 1960, the period most relevant for attribution studies.
      2. Berkeley COOLS the global trends from 1979 to present.
      3. We cool large portions of Africa, the second-largest continent.
      4. The AGU paper looked at 181 stations. BAD skeptic.

      and more,

      However, if you like raw data, we can produce a raw ocean and raw land.

      Trends will INCREASE.. big time.

      • I like raw data,
        Real Raw data.
        Like you get on land for 130 years.
        Just show the land real raw data Steven.
        And discuss it.
        No one measuring raw data on or in the sea 130 years ago of any significance at all [note ‘of any significance”]
        Hence your attempt to inject sea water into the debate is a big red herring, misleading , fallacious, as quoting “Zeke Hausfather | February 9, 2015 ” ocean temperature adjustments significantly increase temperatures prior to 1950.”
        Amazing!
        reductio ad absurdum.
        ” The net effect of all adjustments is to notably reduce the century-scale warming trend compared to raw data:”
        Or in plain English [sorry, American] We can hide the way we dropped the past land temperatures.
        By putting in made up sea ones.

      • Yup, if you didn’t artificially cool that “RAW” data someone might smell something!!

      • “Berkeley COOLS the global trends since 1960”

        So does GHCN. From here is this graph of trends from x-axis year to present. Adjusted purple, unadjusted blue.
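
        What such a “trend from year X to present” plot computes can be sketched as follows (a generic illustration, not the code behind the linked graph):

        import numpy as np

        def trends_to_present(years, anomalies, min_len=20):
            """Least-squares trend (degC/decade) from each possible start year
            to the end of an annual anomaly series."""
            years = np.asarray(years, dtype=float)
            anomalies = np.asarray(anomalies, dtype=float)
            starts, trends = [], []
            for i in range(len(years) - min_len):
                slope = np.polyfit(years[i:], anomalies[i:], 1)[0]
                starts.append(years[i])
                trends.append(slope * 10.0)
            return np.array(starts), np.array(trends)

        # Running this on the adjusted and the unadjusted GHCN global series and
        # plotting the two curves against start year reproduces the kind of
        # comparison described above.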

      • The EAGU paper looked at ALL GHCN stations with reasonably complete 100-year records. That was the sole selection criterion for the sample. For example, De Bilt, Netherlands, home of KNMI, has a well-maintained station that may have some UHI (for which the NASA GISS recommended adjustment is to warm the past to compensate). The past was cooled, instead.

      • thisisnotgoodtogo

        angech2014 | February 10, 2015 at 12:02 am |

        “Or in plain English [sorry, American] We can hide the way we dropped the past land temperatures.
        By putting in made up sea ones.”

        In the mode of Wigley/Jones:

        “Phil, Here are some speculations on correcting SSTs to partly explain the 1940s warming blip. If you look at the attached plot you will see that the land also shows the 1940s blip (as I’m sure you know). So, if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean — but we’d still have to explain the land blip. I’ve chosen 0.15 here deliberately. This still leaves an ocean blip, and i think one needs to have some form of ocean blip to explain the land blip (via either some common forcing, or ocean forcing land, or vice versa, or all of these). When you look at other blips, the land blips are 1.5 to 2 times (roughly) the ocean blips — higher sensitivity plus thermal inertia effects. My 0.15 adjustment leaves things consistent with this, so you can see where I am coming from. Removing ENSO does not affect this. It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”. “

    • That’s something that makes me wonder. It’s clear that NOAA and the other main datasets are applying a positive bias. BEST does things differently and gets relatively similar results. But I’m still left with that nagging feeling that the underlying problem is that the majority of maintenance of the surface temperature network results in steady warming followed by a break point to cooler temperatures.

      I am quite open to the idea that BEST is correct. But I also know that the maintenance of the surface network was NEVER intended to provide the sort of data we’re using. Literally, no matter how you slice it, as long as you try to stick it together into an unbroken record OF ANY KIND, you’re going to have to deal with an upward slope, which they might be inadvertently sticking end to end without correcting the bias. And that’s on top of the fact that the world most likely has been warming.

      I was trying to explain this using a thought experiment of three POTENTIAL locations for stations that all (for this example) had the same temperature: the first being the initial location near a city, the next two farther and farther out. As UHI became a problem, the station would be moved from the first to the second. Then, as UHI became a problem again, it would be moved from the second to the third. If you try to stitch that “single station” record together, it will look like there’s warming. Throw in multiple neighbors going through the same process and, while yes, you can find the break points with greater certainty, you still can’t tell if the warming is UHI or environmental.

      HERE’S A THOUGHT. What if we make a pretend world whose warming rate we KNOW, along with its UHI contamination? Someone (I’m not up to the task) could create a world not unlike the one I mentioned. It would be a world where an increasing number of stations were subject to UHI contamination. ALL data would be known in advance, like the temperatures of the new sites stations would eventually be moved to. From that point on, it’s a relatively simple test: use that input data and see what happens when it goes through the processing methods used by BEST, GISS, Hadley, etc.

      Maybe they will add warming. Maybe they won’t. Maybe in some methods, the REAL warming will manifest partly (or entirely) as “adjustment” (as reasonably suggested by Luboš Motl).
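
      Something like that synthetic test could be sketched as follows: invent a “true” regional trend, add an urban bias that grows until each station move resets it, and see what trend a naive splice of the record recovers. All the numbers here are made up for illustration.

      import numpy as np

      rng = np.random.default_rng(0)

      def synthetic_station(years, true_trend=0.01, uhi_rate=0.02, moves=(1950, 1985)):
          """One synthetic annual series: a true climate trend plus an urban bias
          that grows steadily and is reset to zero at each station relocation."""
          climate = true_trend * (years - years[0]) + rng.normal(0.0, 0.2, len(years))
          bias = np.zeros(len(years))
          last_move = years[0]
          for k, y in enumerate(years):
              if y in moves:
                  last_move = y              # relocation resets the urban bias
              bias[k] = uhi_rate * (y - last_move)
          return climate + bias

      years = np.arange(1900, 2015)
      obs = synthetic_station(years)
      naive = np.polyfit(years, obs, 1)[0] * 10.0   # trend of the spliced raw record
      print(f"recovered {naive:.3f} vs true {0.01 * 10:.3f} degC/decade")

      Any processing method (BEST, GISS, Hadley, etc.) could then be fed many such stations to see whether it returns the known true trend or folds part of the urban bias into it.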

    • Point is ter change it… BOM ACORN..

      • The problem with Vaughan Pratt’s comment above (“you use the ‘available’ data”) is that the “available” data was altered. As stated in a prior comment, if the data is so rotten it needs to be altered, it calls into question the validity of said data. Ergo it is immoral to base energy policy that impacts millions of lives on data that needs to be altered even by the most honest people. The problem is even if there is no intent to be dishonest, people who are convinced that CO2 is the primary culprit behind climate change will have what some have referred to as “Confirmation Bias” Confirmation may explain why so much of the raw data has been altered to cool the past and warm the present It’s human nature.

      • Confirmation [bias] may explain why so much of the raw data has been altered to cool the past and warm the present. It’s human nature.

        You on the other hand are immune to confirmation bias. The world eagerly awaits your secret.

  4. Pingback: Did NASA and NOAA dramatically alter US climate history to exaggerate global warming? | The Fabius Maximus website

  5. This post says:

    As Figure 1 illustrates, the effect of adjustments on the global time series is tiny in the period after 1900 and small in the period before 1900. Our approach has a legacy that goes back to the work of John Christy when he worked as state climatologist. Here he describes his technique:

    But if you look at just the blue and red lines, you can see a .2 to .3 degree difference in the earlier portions. That means there’s a 15-20% change introduced by these adjustments. And that’s for if we only go back to 1850. The BEST temperature record extends another hundred or so years back. We can’t see how much of a difference its adjustments make in that period.

    It’s also not clear to me the timing of the adjustments is as meaningful as this post makes it out to be. BEST estimates its climate field over a particular period (just which period is used has been a source of some confusion). It is possible the adjustments have a greater effect in the past, at least partially, because they are further from the baseline period.

    One other feature of the approach that requires some comment is its tendency to produce a smoother field than gridded approaches. In a gridded approach such as Hadley CRU’s, the world is carved up into discrete grid cells. Stations within each cell are then averaged. This produces artifacts along gridlines. In contrast, the Berkeley approach has a smoother field.

    If we knew the true field perfectly, we could decide whether or not our field was too smooth. But without that reference, we can only note that it lacks the edges of gridded approaches and tends to have a smoother field.

    This is simply untrue. We don’t need to know “the true field perfectly” to decide whether or not the resolution of a temperature record is too coarse. We don’t need to know “the true field perfectly” to compare the spatial resolution of BEST’s results to that of other groups’ results and judge which are more useful.

    We can do far more than “note [BEST] lacks the edge of gridded approaches and tends to have a smoother field.” This post tries to hand-wave away the issue, but we can specify how much smoother BEST’s temperature field is and make judgements about whether or not it’s a bad thing. In fact, we should.

    On a final note, I have to say comments like this are strange to me:

    However, as noted above, it makes very little difference to the global averages.

    Because if I read this without seeing the graphs, I’d never guess “very little” means several tenths of a degree. I don’t consider 15% or more to be “very little.” I’m sure there are other people who don’t as well. I imagine there are also people who do. It’s a judgment call, and there is no “right” answer.

    It is quite unhelpful to simply dismiss something as “minor” when people can reasonably believe it is not.

    • Brandon,

      The impact since 1900 is on the order of 0.05 C, not particularly large. The magnitude of this adjustment is effectively the same if you use metadata + empirical breakpoints or metadata alone (see Figure 4).

      • The impact globally, that is. Specific regions have larger adjustments, as discussed in the piece. For the U.S., at least, we have good evidence for large systemic biases (TOBS, MMTS, etc.): https://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/

      • Zeke Hausfather, I have no idea why you say this. I don’t disagree, but how does it relate to anything I said?

      • John Smith (it's my real name)

        last month
        0.05 degrees was measurable and important
        now it’s insignificant
        this game is hard to follow

      • “The impact since 1900 is on the order of 0.05 C, not particularly large. ”

        Then the obvious question is: why continue to make the adjustments?
        Sure, run a test to make sure the adjustment is still negligible when new data arrives, then just report that and use the raw data. If someone comes up with a detection scheme that shows it is significant, report on that.

      • Zeke, I went to your earlier Climate Etc. link (2014/07/07, linked in this thread nearby) and it shows that the NCDC raw data, global, around 1900 through about 1925 is about 2 tenths of a degree warmer than the NCDC homogenized data (5 year smooth). By around 1950, the homogenized and raw are nearly equivalent. For the US, the differences are larger for longer and don’t peter out until about 1990.

        I may be missing something, but your response to Brandon says the magnitude of adjustments is on the order of 0.05 degrees. That is much less than the differences between raw and homogenized data in your 07/07 post, linked nearby in this thread.

        What am I missing? Are the adjustments you are talking about not the differences between raw and homogenized data? And if they are not, why are there differences of up to 2 tenths of a degree, for several decades, between the raw and homogenized data?

        I’m not on the conspiracy bandwagon, just looking for understanding.

      • OK, I think I see the source of my confusion. Rereading Zeke’s 07/07 post about adjustments, that post is about the several reasons why raw data from the past had to be adjusted. Things like changing the time of day at which the temps are read from PM to AM, resulting in an artificial cooling trend, if not adjusted. THOSE adjustments are what causes adjusted data, globally, to be about 2 tenths of a degree cooler than raw data in the early 1900s, with the difference between the two sets petering out around 1950. Bigger differences, lasting longer, for the US, in that post of Zeke’s.

        So why does Zeke say that the impact of adjustments since 1900 is tiny, on the order of 0.05 C? This is the source of my confusion, that led me to post earlier.

        Here is a guess: when Zeke uses the word adjustment in his reply to Brandon (Feb. 9 at 11:58), is he referring to further adjustments, AFTER the basic adjustments to the raw data described in his 07/07 post? If this is the case, the use of the word adjustments in both cases threw me off.

        Zeke, can you please confirm that this is what happened?

      • thisisnotgoodtogo

        “The impact since 1900 is on the order of 0.05 C”

        More than enough for the Director of NASA to broadcast a record hottest year for the planet…

    • First things First
      In this post
      http://wattsupwiththat.com/2015/01/29/best-practices-increase-uncertainty-levels-in-their-climate-data/

      you messed up, in particular not noting the change in spatial uncertainty.
      I would like to see you add a correction to that post, and an acknowledgement.

      Now here>

      “But if you look at just the blue and red lines, you can see a .2 to .3 degree difference in the earlier portions. That means there’s a 15-20% change introduced by these adjustments. And that’s for if we only go back to 1850. The BEST temperature record extends another hundred or so years back. We can’t see how much of a difference its adjustments make in that period.”

      The change is on the order ( as Zeke notes ) of about .05C. since 1900.
      Since 1960 we reduce the trend. Since 1979 we reduce the trend.
      These changes, and the changes before, are inconsequential to any interesting question in the climate science debate. They don’t affect our core understanding of A) sensitivity, B) attribution, C) reconstructions, or D) GCM performance. In short, they may be of minor technical interest, but in terms of interesting science they are just pedestrian.

      ##############################################

      ‘It’s also not clear to me the timing of the adjustments is as meaningful as this post makes it out to be. BEST estimates its climate field over a particular period (just which period is used has been a source of some confusion). It is possible the adjustments have a greater effect in the past, at least partially, because they are further from the baseline period.”

      WRONG. We used a 100-year baseline period here. We studied the baseline period, as I told you. It makes no difference.

      “This is simply untrue. We don’t need to know “the true field perfectly” to decide whether or not the resolution of a temperature record is too coarse. We don’t need to know “the true field perfectly” to compare the spatial resolution of BEST’s results to that of other groups’ results and judge which are more useful.”

      wrong. too smooth means smoother than the TRUE FIELD.
      #######################################
      “It is quite unhelpful to simply dismiss something as “minor” when people can reasonably believe it is not.”

      Here is what you have to do to prove that it’s CONSEQUENTIAL.

      show that the adjustments CHANGE some core science in a meaningful way.

      1) Show, that is DO THE WORK, to demonstrate that the adjustments overturn our belief that over 50% of temperature change since 1950 is due to man. Attribution is a CORE claim of AGW.
      2. Show that adjustments have a meaningful effect on estimates of sensitivity such that the consensus position of 1.5C-4.5C is overturned.

      3. Show that adjustments have a meaningful effect on climate reconstructions, for example the hockey stick (HS).

      4. Show that adjustments have a meaningful effect on climate forecasts made by GCMS.

      Changing the global series by small amounts, heck even large amounts, has very little leverage on the key uncertainties in climate science. Of course it makes a good blog post, but not impactful science.

      Now, if you want to prove that “reasonable” people are right to consider these differences “major,” publish a paper showing how the core science is challenged.

      Ever heard the phrase polishing a bowling ball?

      • Steven Mosher, it’s obnoxious to say things like:

        you messed up, in particular not noting the change in spatial uncertainty.
        I would like to see you add a correction to that post, and an acknowledgement.

        When I directly addressed this point less than two hours after you first brought it up. Ignoring people’s responses to what you say then repeating yourself is just rude.

        The change is on the order ( as Zeke notes ) of about .05C. since 1900.

        This in no way addresses anything I said. This post talks about the effects of BEST’s homogenization, not just the effects of BEST’s homogenization after 1900.

        WRONG. We used a 100-year baseline period here. We studied the baseline period, as I told you. It makes no difference.

        You never actually told me this.

        Also, what you say does nothing to demonstrate what I said is wrong. You say you used a 100 year baseline, but that doesn’t contradict what I said. In fact, it makes what you and Zeke say incredibly peculiar. You both make the point the effects of homogenization are small after 1900, but if you used a 100 year period, that means the non-baseline period ends ~1900.

        wrong. too smooth means smoother than the TRUE FIELD.

        This does nothing to indicate I am actually wrong. This post refers to knowing “the true field perfectly.” I specifically said we don’t need to know “the true field perfectly.” We can make judgements about whether or not BEST’s results are too smooth without knowing “the true field perfectly.”

        Here is what you have to do to prove that it’s CONSEQUENTIAL.

        show that the adjustments CHANGE some core science in a meaningful way.

        I said adjustments to the BEST results shouldn’t be described as “minor” or “very little” because people may reasonably misinterpret what is meant by those words. Your response does nothing to rebut that. Instead, it seems to advance a crazy position: that it is acceptable to call any adjustment “minor” if one can’t show the adjustment changes “some core science in a meaningful way.” Indeed, you say:

        Changing the global series by small amounts, heck even large amounts, has very little leverage on the key uncertainties in climate science. Of course it makes a good blog post, but not impactful science.

        Now, if you want to prove that “reasonable” people are right to consider these differences “major,” publish a paper showing how the core science is challenged.

        Even though I clearly discussed the effect BEST’s adjustments have on BEST’s results. Determining whether or not adjustments have “very little” effect on BEST’s results does not require determining whether or not adjustments to BEST’s results somehow change our understanding of climate science.

        This is easily demonstrated by applying your argument to any other piece of work. Thousands of papers are published every year which don’t speak to “the core science” of global warming. Suppose one of these papers got results which were 99% due to adjustments. Your response would suggest it is fine to call those adjustments “minor” because anyone who disagrees needs to show “how the core science is challenged” by the existence of those adjustments.

        This entire response of yours is a mess.

      • Brandon

        “When I directly addressed this point less than two hours after you first brought it up. Ignoring people’s responses to what you say then repeating yourself is just rude”

        No. you need to post a correction in your post.

        probably my Last response to you until you do that

      • Steven Mosher:

        No. you need to post a correction in your post.

        I did. You’ve conveniently ignored that.

        probably my Last response to you until you do that

        You mean you’ll stop making things up in obvious ways to insult me and dismiss the valid points I make? Alright. Running away would be a better plan than what is practically lying about your critics.

      • Matthew R Marler

        Steven Mosher quotes Brandon Shollenberger: “But if you look at just the blue and red lines, you can see a .2 to .3 degree difference in the earlier portions. That means there’s a 15-20% change introduced by these adjustments. And that’s for if we only go back to 1850. The BEST temperature record extends another hundred or so years back. We can’t see how much of a difference its adjustments make in that period.”

        Mosher’s response: The change is on the order ( as Zeke notes ) of about .05C. since 1900.

        That was a good point, and the whole post was a good post.

      • Matthew R Marler

        Brandon Shollenberger: We can make judgements about whether or not BEST’s results are too smooth without knowing “the true field perfectly.” :

        No, you can’t. All you can show without knowing the true field perfectly is that one smoother is more or less smooth than another (or than no smoothing at all).

      • Matthew R Marler:

        No you can’t. All you can show without knowing the true field perfectly is that one smoother is more or less smooth than another (or no smooth at all.)

        Since you guys are using the extreme of “knowing the true field perfectly,” let’s take this to the extreme and suppose we smoothed the temperature data so much the value was the same across the entire globe. That would be indistinguishable from simply using the global temperature record at every part of the globe.

        According to you guys, we can’t say that is bad. According to you guys, we need to know “the true field perfectly” to be able to make judgments like, “There should be some amount of spatial variation.” Just about everyone can agree temperatures in Antarctica are changing at a different rate than temperatures at the equator. If I saw results smoothed so much they found the same rate of change in Antarctica as at the equator, I’d say that is “too smooth.”

        According to you guys, I’d be wrong. According to you guys, if we don’t know “the true field perfectly,” we can’t make any judgments based upon it at all. I’m sorry, but this is just ridiculous. You guys wouldn’t accept it on any other issue. You guys would scoff at anyone who said we need to have perfect information in order to draw any conclusions.

      • Matthew R Marler

        Brandon Shollenberger: let’s take this to the extreme and suppose we smoothed the temperature data so much the value was the same across the entire globe.

        That does not help you to determine whether BEST smoothed the data too much.

      • Matthew R Marler:

        That does not help you to determine whether BEST smoothed the data too much.

        I find simply asserting someone is wrong tends not to accomplish much. Would you mind explaining why I am (supposedly) wrong rather than just saying I am? If you don’t, I’m going to just wind up responding, “Yes, it does.”

        And then you’ll be all, “Nuh-uh.”

        And I’ll be all, “Yuh-huh.”

        And we’ll keep going back and forth until one of us realizes it is a terrible way to participate in discussions that just makes anyone doing it look obnoxious.

      • Matthew R Marler

        Brandon Shollenberger: I find simply asserting someone is wrong tends not to accomplish much.

        “Taking it to extremes” is counterfactual — they did not do that. They have described what they did and shown some outputs. We cannot tell, without knowing the exact values, whether they have “oversmoothed” or not.

        Another example of “oversmoothing” would be to take a single number, the calculated mean or calculated conditional global expected value, as representative of the entire Earth; calling it an “equilibrium” temperature; and then using that as the basis for calculating a new “equilibrium” temperature that would result from doubling the CO2 concentration. The BEST team have not done that either.

      • Matthew R Marler:

        “Taking it to extremes” is counterfactual — they did not do that.

        Uh… duh? Of course they didn’t do it. The point of reductio ad absurdum arguments is to show what happens if you apply an argument to a more extreme situation so as to indicate why that argument fails to work in a less extreme situation.

        The entire point of me saying I was taking the argument to the extreme (which you misquoted) was to make it clear I wasn’t referring to what BEST actually did.

        They have described what they did and shown some outputs. We can not tell without knowing the exact values whether they have “over smoothed” or not.

        The sentence before this doesn’t do anything to support this, meaning you’ve again offered no basis for this claim.

        Another example of “oversmoothing” would be to take a single number, the calculated mean or calculated conditional global expected value, as representative of the entire Earth

        But according to you and BEST, we couldn’t say this is over-smoothed because we don’t have perfect information. BEST could literally claim every location has warmed at the exact same rate, and according to you guys, we couldn’t say they’ve over-smoothed their data. We’d have to just shrug our shoulders and say, “Eh, maybe they got the spatial resolution right.”

        This is incredibly simple. I say we can make judgments about whether BEST’s spatial resolution is better or worse than that of other groups by using the information we have. You guys say we can’t do that because the information isn’t perfect. That’s all there is to it.

      • > The point of reductio ad absurdum arguments is to show what happens if you apply an argument to a more extreme situation so as to indicate why that argument fails to work in a less extreme situation.

        Perhaps not:

        Reductio ad absurdum is a mode of argumentation that seeks to establish a contention by deriving an absurdity from its denial, thus arguing that a thesis must be accepted because its rejection would be untenable. It is a style of reasoning that has been employed throughout the history of mathematics and philosophy from classical antiquity onwards.

        http://www.iep.utm.edu/reductio/

      • Matthew R Marler

        Brandon Shollenberger: The point of reductio ad absurdum arguments is to show what happens if you apply an argument to a more extreme situation so as to indicate why that argument fails to work in a less extreme situation.

        Yes. That is the point. I would call it a “claim”. But the extreme is so different from the actual case in this instance, that you can’t show how it invalidates the original assertion. You can not show that the BEST team over smoothed without knowing the exact data, but it is oversmoothing to use one value for a large area. Where in between you pass from requiring exact data to requiring less exact data to requiring no data at all would require knowledge of the specific cases.

        Reductio ad absurdum only works in mathematics, not in empirical science where all statements of knowledge are only approximate, and the absurdity only refers to the extreme, not the topic of focus. Clearly, if you consume a pound of salt per day you’ll get sick, but lightly salting your salad is no harm whatsoever. As has frequently been pointed out, reductio ad absurdum (taking the extreme case) is of no use whatsoever in toxicity studies. One of the substances that you do not want too much of is oxygen.

      • Brandon Shollenberger

        Again, you’ve offered no basis for your position. Instead, you’ve seemingly admitted it is wrong:

        You can not show that the BEST team over smoothed without knowing the exact data, but it is oversmoothing to use one value for a large area.

        I can’t show BEST is oversmoothing, but I can show using one value for a large area is oversmoothing… and I have shown BEST uses one value for a large area.

        But that doesn’t prove BEST is over-smoothing because it is BEST, and apparently that means the problem you acknowledge can be shown can’t be shown because… they wouldn’t be the best? Is that the argument? I don’t know. I can’t tell. You’ve still done nothing to explain why we can’t say it is bad that BEST shows practically the same results for entire continents.

      • Matthew R Marler

        Brandon Shollenberger, I can’t show BEST is oversmoothing, but I can show using one value for a large area is oversmoothing… and I have shown BEST uses one value for a large area.

        This is where you began: We don’t need to know “the true field perfectly” to decide whether or not the resolution of a temperature record is too coarse. We don’t need to know “the true field perfectly” to compare the spatial resolution of BEST’s results to that of other groups’ results and judge which are more useful.

        We can do far more than “note [BEST] lacks the edge of gridded approaches and tends to have a smoother field.” This post tries to hand-wave away the issue, but we can specify how much smoother BEST’s temperature field is and make judgements about whether or not it’s a bad thing. In fact, we should.

        Rereading your posts, I do not find where you ever showed that BEST was oversmoothing by using one value for a large area. Your comment was about comparing different smoothing algorithms (or different smoothed outcomes) and deciding which of them did the most “useful” smoothing. But unless you know the true values, you can not tell which algorithm was the most accurate. Without knowing which one is most accurate, you would be hard-pressed to tell which would be most “useful”.

        All you have shown is that in some other circumstance, you might be able to do something other than what you started out describing here.

      • Matthew R Marler:

        Rereading your posts, I do not find where you ever showed that BEST was oversmoothing by using one value for a large area.

        Then you haven’t read my posts on this topic. I’m guessing you meant comments, not posts. If so, the reason you haven’t seen me show this in my comments here is it seems silly to show what BEST clearly shows in its post. What point would there be in me replicating what is clearly shown in Figures 5, 6, 7 and 8?

        Your comment was about comparing different smoothing algorithms (or different smoothed outcomes) and deciding which of them did the most “useful” smoothing. But unless you know the true values, you can not tell which algorithm was the most accurate.

        If you had absolutely no idea what the true values are, you’d be hard-pressed to tell which algorithm is the most accurate. If you had a good, but not perfect, idea what the true values are, it wouldn’t be so difficult to tell which algorithm is most accurate.

        Apply your argument to any other topic, and you’ll see it is clearly ridiculous. In no other topic would you say we need perfect information to make any judgments. If you did, you’d wind up saying things like, “We can’t tell if the planet has warmed because we don’t have perfect information about what the planet’s temperatures are.”

        We don’t need perfect information to decide whether or not BEST over-smooths its data. We just need useful information, of which we have plenty.

    • This is precious

      “But if you look at just the blue and red lines, you can see a .2 to .3 degree difference in the earlier portions. ”

      Red is Africa
      Blue is the US

      when we talk about the adjustments being inconsequential GLOBALLY
      we mean the BLACK line

      The red line is Africa– 20% of the land
      the blue line is the US 5% of the land.

      So yes if you look at 5% of the data ( blue) you see a .2 to .3 degree difference

      the POINT of showing people continents and how they differ is so that people will AVOID the kind of mistake Brandon just made.

      GLOBALLY ( we are estimating the GLOBAL average) the adjustments are mousenuts..

      BUT because people can cherry pick ( the US) they can show BIG differences.. BUT they also IGNORE big differences in the other direction.

      • Steven Mosher, are you trying your hardest to misunderstand me? You say:

        “But if you look at just the blue and red lines, you can see a .2 to .3 degree difference in the earlier portions. ”

        Red is Africa
        Blue is the US

        when we talk about the adjustments being inconsequential GLOBALLY
        we mean the BLACK line

        The red line is Africa– 20% of the land
        the blue line is the US 5% of the land.

        So yes if you look at 5% of the data ( blue) you see a .2 to .3 degree difference

        the POINT of showing people continents and how they differ is so that people will AVOID the kind of mistake Brandon just made.

        Yet this is obviously not what I did. The quote you provide from me was provided in direct response to:

        As Figure 1 illustrates the effect of adjustments on the global time series are tiny in the period after 1900 and small in the period before 1900. Our approach has a legacy that goes back to the work of John Christy when he worked as state climatologist: Here he describes his technique.

        Which I quoted. I clearly and specifically discussed Figure 1. Figure 1 does not have lines for Africa or the United States. The figure with the lines you refer to is Figure 4, described as:

        Figure 4. The top panel depicts the difference between all adjustments and no adjustments. The black trace shows the difference for all land. Blue depicts USA; red Africa; and green Europe. The lower panel depicts the difference between all adjustments and metadata only adjustments.

        You just wrote 150 words to mock me entirely by ignoring what I was responding to and coming up with a completely ridiculous interpretation of what I said nobody with basic reading skills could innocently come up with.

        Seriously, are you intentionally trolling me?

      • Steven Mosher,

        You wrote –

        “GLOBALLY ( we are estimating the GLOBAL average) the adjustments are mousenuts..”

        Why bother – or are you only kidding, and mousenuts means either vanishingly small, or quite significant. Ah, the benefits of Warmist scientism!

        What is the Warmist definition of mousenuts? Do you find them locally, or only GLOBALLY? Are GLOBAL mousenuts bigger than local mousenuts?

        Have you the faintest idea what you are talking about? Does anybody else care?

        Live well and prosper,

        Mike Flynn.

      • Matthew R Marler

        Brandon Shollenberger: Which I quoted. I clearly and specifically discussed Figure 1. Figure 1 does not have lines for Africa or the United States.

        Tough luck Steven Mosher, you goofed on that one.

      • Matthew R Marler:

        Tough luck Steven Mosher, you goofed on that one.

        Good luck ever getting him to admit this.

        It’s pretty pathetic BEST is having this sort of behavior be part of its public outreach. “You criticized us? We’ll completely misrepresent what you say in obvious ways to insult you! Because that’s how we deal with legitimate concerns!”

      • Matthew R Marler

        Brandon Shollenberger: “You criticized us? We’ll completely misrepresent what you say in obvious ways to insult you! Because that’s how we deal with legitimate concerns!”

        You are overwrought. It was a simple goof.

      • You know that Brandon is very sensitive, Matt. He’ll calm down in about a week.

      • Matthew R Marler:

        You are overwrought. It was a simple goof.

        You might consider the possibility you aren’t as good at reading people’s minds/tones as you might think. I’m not overwrought at all. I think this is hilarious. That’s why I gave an exaggerated impersonation, complete with exclamation marks. It’s a common and humorous form of mocking people.

        Overwrought would be more like if I started resorting to using Caps Lock like:

        when we talk about the adjustments being inconsequential GLOBALLY
        we mean the BLACK line

        Or:

        the POINT of showing people continents and how they differ is so that people will AVOID the kind of mistake Brandon just made.

        But I don’t feel any need to do things like that, just like I don’t feel a need to use bombastic rhetoric or derogatory remarks. I don’t know what makes you think I am overwrought, but I am not. The most I am is sardonic.

        Or maybe I am overwrought. Maybe I am and I just don’t know it. Maybe I need you to tell me what I am feeling so I can actually know!

      • Matthew R Marler

        Brandon Shollenberger: The most I am is sardonic.

        OK, I’ll abandon the inference about your mood and stick with my judgment about your response to Mosher: you overreacted.

        I wrote a snarky reply to your claim to be at most sardonic, including a lot of quotes from your replies. Thankfully, it has not appeared. I must have goofed, but it worked out better this way.

      • Stages:1 denial, 2 anger, 3 bargaining, 4 depression, 5 acceptance.

        1. Denial. Yet this is obviously not what I did.

        2. Anger. You just wrote 150 words to mock me entirely by ignoring what I was responding to and coming up with a completely ridiculous interpretation of what I said nobody with basic reading skills could innocently come up with. Seriously, are you intentionally trolling me?

        3. Bargaining. Skipped that stage.

        4. Depression. The most I am is sardonic.

        5: Acceptance. Or maybe I am overwrought. Maybe I am and I just don’t know it. Maybe I need you to tell me what I am feeling so I can actually know!

    • Matthew R Marler

      Brandon Shollenberger: Quotes Steven Mosher: As Figure 1 illustrates the effect of adjustments on the global time series are tiny in the period after 1900 and small in the period before 1900. Our approach has a legacy that goes back to the work of John Christy when he worked as state climatologist: Here he describes his technique.

      Brandon Shollenberger: But if you look at just the blue and red lines, you can see a .2 to .3 degree difference in the earlier portions. That means there’s a 15-20% change introduced by these adjustments. And that’s for if we only go back to 1850.

      There are a few adjustments as large as 0.2 degrees, but they have little effect on the estimates of the global trends. converting 0.2 – 0.3 to 15%-20% does little to add to your point: what is the base?

      • Matthew R Marler:

        There are a few adjustments as large as 0.2 degrees, but they have little effect on the estimates of the global trends.

        Suppose I am correct that the timing of the effect of the homogenization algorithm is (at least in part) an artifact of their choice of baseline period when calculating their climatology fields. If I am, it would mean BEST could have its modern temperatures change by as much as several tenths of a degree just by using a different baseline period. I think that’s something people might care about, even if that didn’t affect global trends (and I have no particular reason to think it wouldn’t).

        This is especially true given the fact BEST released a report for the media discussing whether or not 2014 was the hottest year, a report which gave significant attention to differences of less than a tenth of a degree. BEST apparently cares enough about differences of ~1% to release a report on it to get media attention. I don’t see how it can do that and then dismiss concerns which affect their results by more than 10 times as much.

        converting 0.2 – 0.3 to 15%-20% does little to add to your point: what is the base?

        I’m not sure what you mean when you say “base” here. I was tempted to just be snarky and say, “10.” Instead, I’ll explain the reasoning. Looking at the BEST figure, I estimated it shows ~1.5C of warming since 1850. I then estimated that number would have been ~1.2C if they hadn’t calculated their “empirical breakpoints.”

        I think it is useful for people to know ~20% of the warming shown in the figure is due to this particular homogenization step. Doing so lets them know the total amount of warming, the amount of warming caused by these adjustments and how the two relate to one another.

      • Matthew R Marler

        Brandon Shollenberger: Instead, I’ll explain the reasoning. Looking at the BEST figure, I estimated it shows ~1.5C of warming since 1850. I then estimated that number would have been ~1.2C if they hadn’t calculated their “empirical breakpoints.”

        That answers the question.

  6. It’s worth mentioning that while global land temperature adjustments tend to slightly increase the century-scale trend, mostly by reducing temperatures prior to 1950, ocean temperature adjustments significantly increase temperatures prior to 1950. The net effect of all adjustments is to notably reduce the century-scale warming trend compared to raw data:

    https://pbs.twimg.com/media/B9cBLSACcAAyZ4_.png:large

    • Zeke, what’s the deal with the convergence of the raw and adjusted in about 1995 and an almost perfectly matched run-up to about 2002 and then a significant divergence for the remainder of the chart?

      • It’s a result of ocean temp adjustments: https://twitter.com/hausfath/status/564939671247941632

        Not sure of the reason for it, but I’m much less familiar with ocean adjustments in general. Ask John Kennedy.

      • Well, that is odd looking. In the early years the land and ocean temperature adjustments go in opposite directions. Then ocean temps don’t need adjusting for some time, then in 2002 they are adjusted up, from then on. Doesn’t look right. Very interesting charts, when you look at the land and ocean side by side.

      • I don’t see significant ocean temp adjustment for the 1995-2002 period I mentioned. Only after 2002. One would think the later data would need less adjusting than the earlier data. Looks weird.

      • Eyeball the charts and compare the land and ocean trends from 1910-1940. Why da ocean warming so fast? Weird. I am not believing they got good data.

      • It looks like the adjustments cooled the land and warmed the oceans, rather than climate physics having done it.

      • Zeke, you look taller and more intelligent than Mosher. Maybe Mosher needs glasses.

      • Hi Don,

        The base period is 1961-1990, so the average is zero (or close to it) for both lines over that period. That often means that agreement looks better over the climatology period than at other times.

        Since 2002, there has been an effort to increase the number of drifting buoys making SST measurements (note that drifters are not Argo floats). In 2005/06 the number of drifters increased markedly and their coverage improved too. Ships tend to measure warmer on average than drifters, so if you change the balance towards a network that’s more dominated by drifters, the result is an artificial cooling. You don’t see that cooling in either the ships or the drifters individually, nor do you see it in (arguably) the best available satellite record from ATSR instruments (see Figure 2-17 of Chapter 2 WG1 AR5). The HadSST3 adjustments account for the buoy-ship difference and I think that’s what you see in Zeke’s diagram.

        Best regards,

        John Kennedy
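
        A minimal sketch of the baselining point above, with invented numbers rather than actual HadSST or Berkeley data: when two series are each expressed as anomalies from their own 1961-1990 means, they are forced to agree on average over that window, so close agreement there says little about agreement elsewhere.

          import numpy as np

          years = np.arange(1900, 2015)
          rng = np.random.default_rng(0)

          # Two hypothetical "raw" series that differ by a slowly drifting offset.
          series_a = 14.0 + 0.007 * (years - 1900) + rng.normal(0, 0.1, years.size)
          series_b = series_a + 0.2 + 0.004 * (years - 1900)

          def anomaly(series, years, start=1961, end=1990):
              # Subtract the series' own mean over the baseline window.
              base = series[(years >= start) & (years <= end)].mean()
              return series - base

          diff = anomaly(series_b, years) - anomaly(series_a, years)
          in_base = (years >= 1961) & (years <= 1990)
          print(round(diff[in_base].mean(), 3))        # ~0 by construction
          print(round(diff[years < 1940].mean(), 3))   # clearly non-zero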

      • Diagramonkey, “The base period is 1961-1990, so the average is zero (or close to it) for both lines over that period. That often means that agreement looks better over the climatology period than at other times.”

        Baseline choices for anomaly do have that feature/flaw.

        I was comparing “raw” as in absolute temperature estimates of Berkeley Earth and CRUT Ts with ERSSTv4 and HADIsst to see how much long range interpolation affected land temperature anomaly. The Berkeley data appears to use more coastal station data for “correcting” land station data than CRUT. Not sure if that is a good thing or a bad thing, just a thing.

        Note: HADI from Climate Explorer is in degrees C. Berkeley land from Climate Explorer needs the estimated monthly absolute value added to produce an absolute temperature data set. Both HADI and ERSSTv4 were in degrees C as provided by Climate Explorer.

        If you try to construct a global actual temperature data set using Berkeley and ERSST or CRUT Ts and HADI you end up with something like this.

        Anomalies of course paint a slightly different picture.
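
        A minimal sketch of the anomaly-plus-climatology step mentioned in the note above (the climatology and anomaly values are placeholders, not Climate Explorer output): an anomaly series only becomes an absolute-temperature series once an estimated monthly climatology is added back, so any “absolute” comparison inherits the uncertainty of that climatology.

          import numpy as np

          # Placeholder 12-month climatology (deg C), standing in for the
          # "estimated monthly absolute value" mentioned in the note above.
          climatology = np.array([2.1, 3.0, 5.8, 9.4, 12.9, 15.6,
                                  17.2, 16.8, 14.1, 10.2, 6.0, 3.1])

          # Placeholder monthly anomalies for two years (deg C vs. a baseline).
          anomalies = np.round(np.random.default_rng(0).normal(0.2, 0.2, 24), 2)

          n_years = anomalies.size // 12
          absolute = anomalies + np.tile(climatology, n_years)   # deg C
          print(absolute.reshape(n_years, 12).round(1))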

      • Thanks John and Capt. I am still feeling the data prior to the 1940s is very sketchy. Maybe we should go to the treemometers for that.

      • Don

        If asked I would say the global temperature record becomes reliable enough to use with the advent of automatic weather stations in the 1980s, which should have some degree of correlation with satellite records, always assuming the siting of the former is good.

        I certainly wouldn’t bet the house on any sort of global or SST land record prior to this date. Individual country records may have a better provenance over a longer period but the number of reliable countries is relatively small.
        tonyb

      • I certainly wouldn’t bet the house on any sort of global or SST land record prior to this date.

        Unless you can quantify that, the house would be well advised not to let you.

  7. “One final way to compare the results of various data producers is by comparing the spatial variability against that produced by global climate models. While not dispositive this comparison does indicate which temperature products are consistent with the variability found in simulations and those which are less consistent”
    So.
    The facts have to be consistent with the fiction??
    This is cart before the horse stuff.
    There is no reason on earth for facts to have to agree with simulations.
    To state this is extremely unscientific.
    The mucky statement “not dispositive”, by which you claim [wrongly] that the temperature record agrees with the simulations, also needs correcting, please.

    • It means exactly what it says. The comparison doesn’t decide anything. Period. It’s just a comparison.

      Another way to look at this is as follows. If all the statistical models (CRU, GISS, BEST, NCDC) had similar smoothness and the GCMs were different, that would indicate that the GCMs might be too smooth. But the comparison didn’t turn out that way. So we do the comparison. We don’t hide the result. We show it. Conclusion? Hmm, it doesn’t show you anything that helps you decide.
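
      One illustrative way to put a number on “smoothness” for that kind of comparison (an invented metric and synthetic fields, not the comparison Berkeley Earth actually ran): compute the mean absolute difference between neighbouring grid cells of each product’s anomaly field and see which products cluster together.

        import numpy as np

        def roughness(field):
            # Mean absolute difference between neighbouring grid cells:
            # larger = more spatial structure, smaller = smoother.
            d_lat = np.abs(np.diff(field, axis=0)).mean()
            d_lon = np.abs(np.diff(field, axis=1)).mean()
            return (d_lat + d_lon) / 2

        rng = np.random.default_rng(1)
        bumpy_field = rng.normal(0, 1, (36, 72))                  # synthetic 5-degree grid
        smooth_field = np.cumsum(np.cumsum(
            rng.normal(0, 0.05, (36, 72)), axis=0), axis=1)       # smoother synthetic field

        print(round(roughness(bumpy_field), 2), round(roughness(smooth_field), 2))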

      • Steven (and Zeke) – Thanks for your thankless work. I very much appreciate it – including but not limited to BEST, in the specific case of this post, and in the many (to the many power) thoughtful (and sarcastic [even sardonic!]) posts on this blog.

        Even your noise is signal to my ears.

  8. “Since the algorithm works to correct both positive and negative distortions it is possible to hunt through station data and find examples of adjustments that warm. It’s also possible to find stations that are cooled.”

    If it did this properly the temperature could never change, could it?

    But it does, so the algorithm only works to correct some positive and negative distortions.
    Who gets to choose?
    Cowtan and Way? Taking out those 3 aberrant arctic stations that gave 40% “erroneous” cooling.
    Zeke when looking at those shorelines?
    Some little clerk in the back office working for Greenpeace?
    Or some big clerk working for the IPCC.
    Let’s have a committee to vote on which ones to use.
    Put Gavin in as chair.

    • “But it does, so the algorithm only works to correct some positive and negative distortions.
      Who gets to choose?”

      If a human chose things then you would be dubious of the human.

      Here is what we did.

      1. We built an algorithm. Note that we took a lot of ideas from skeptics.
      2. We tested that algorithm in a double-blind fashion. It corrected both positive and negative biases.
      3. We applied that algorithm and reported the results.
      4. We did sensitivity tests on the algorithm: more break points, fewer break points, no break points.

      Guess what? We still cool the record in Africa, and we still cool the globe since 1960.

      Some people had a theory: adjustments should even out. We tested that theory. It was wrong. Dr. Feynman tells you what to do.

      • This conspiracy ideation is ridiculous. Berkeley Earth was intended to give an independent measure of global temperature and it has done that. Skeptics may not like it, but that’s science. The temperature histories are quite robust.

        I do think that there is a problem with the data, though, and it is a conceptual one.

        A measurement is a measurement. It’s what the thermometer read on that day at that time. Measurements are raw data. Raw data should not be messed with.

        Presenting “adjusted” measurements implies that the real data is not accurate and we know better now what it really was despite the measurement being, you know, raw data.

        Generally speaking, in experimental science, the raw data is never changed. Analyses can model from it, by, say, using larger scales and adjacent measurements, but that in no way requires making adjustments to the original raw data. Smoothing raw data and treating the result as if it were raw data is a bad, bad, bad thing to do.

        So the problem is that the climate science community, for some reason that I am not able to completely understand, has decided to replace their raw data with ever-changing “adjusted” data. The adjustments may make little or no difference to the big picture, but that’s not the issue. The issue is that now the community has decided that raw data is something that can be changed. And the ability to change data ex post facto is a gigantic invitation to self-delusion and cherry-picking.

        It’s probably too late to put the horse back into the barn, but I do wish that the community had chosen a more scientifically rigorous terminology and method to perform the various necessary operations on the raw data.

      • fizzymagic:

        This conspiracy ideation is ridiculous.

        I agree, but I want to point out I’ve seen a number of people exhibiting no conspiracy ideation be dismissed as conspiracy theorists simply for raising concerns/criticisms. When people who behave properly get slimed and mistreated, it only encourages people to take negative views.

        Berkeley Earth was intended to give an independent measure of global temperature and it has done that.

        BEST was never intended to give an “independent” measure of anything. It’s always been known BEST uses some of the same data other groups use. That means it can’t be independent. That doesn’t make it worse. It just means we can’t call it an “independent” measure of global temperature.

        I mostly point that out because people like Michael Mann misuse “independent” like this all the time. Mann takes it so far he calls work done by his co-authors “independent” confirmation of his work. He also likes to call work “independent” confirmation of his even if half of its data was used in his work as well.

        I can’t criticize Michael Mann for misusing “independent” like that and not speak up when I see others do it too.

      • A measurement is a measurement. It’s what the thermometer read on that day at that time. Measurements are raw data. Raw data should not be messed with.

        Then you can’t take averages, summaries, or anything like that. All such procedures throw away the vast majority of the “raw data.” There’s no automatic, objective way to decide which bits of the “raw data” to keep while throwing away the vast majority of the rest.

      • But that’s the entire point! Throwing away raw data is generally verboten, but if it is necessary, it has to be very well justified using a procedure that is determined in advance. Berkeley Earth has come up with a reasonable algorithm, which I am willing to trust. I just don’t think that the processed data should be treated in the same way raw data would be.

  9. Zeke (Comment #130058) June 7th, 2014 at 11:45 am

    “Mosh, Actually, your explanation of adjusting distant past temperatures as a result of using reference stations is not correct. NCDC uses a common anomaly method, not RFM.
    The reason why station values in the distant past end up getting adjusted is due to a choice by NCDC to assume that current values are the “true” values. Each month, as new station data come in, NCDC runs their pairwise homogenization algorithm which looks for non-climatic breakpoints by comparing each station to its surrounding stations. When these breakpoints are detected, they are removed. If a small step change is detected in a 100-year station record in the year 2006, for example, removing that step change will move all the values for that station prior to 2006 up or down by the amount of the breakpoint removed. As long as new data leads to new breakpoint detection, the past station temperatures will be raised or lowered by the size of the breakpoint.
    An alternative approach would be to assume that the initial temperature reported by a station when it joins the network is “true”, and remove breakpoints relative to the start of the network rather than the end. It would have no effect at all on the trends over the period, of course, but it would lead to less complaining about distant past temperatures changing at the expense of more present temperatures changing.”

    Any algorithm that does this will bias the past cooler and cooler, and worse as it goes on. Lucky the postcasts do not go back another 200 years or we would be in an ice age. Also those warm seas back then. R Gates will be happy to know ocean warming travels back in time and heats the seas in the past before 1950, which might explain why we do not have to worry about the future.
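
    A minimal sketch of the anchoring choice described in the quote above, with made-up numbers: referencing a detected step change to the most recent segment shifts the past, referencing it to the earliest segment shifts the present, and the fitted trend is identical either way.

      import numpy as np

      years = np.arange(1950, 2015)
      temps = 10.0 + 0.01 * (years - 1950)   # hypothetical smooth station record
      temps[years >= 2006] += 0.4            # artificial step change in 2006
      step = 0.4                             # assume the algorithm detected it

      # Anchor to the present: shift everything before the break (past changes).
      to_present = temps.copy()
      to_present[years < 2006] += step

      # Anchor to the start: shift everything after the break (present changes).
      to_start = temps.copy()
      to_start[years >= 2006] -= step

      print(np.polyfit(years, to_present, 1)[0], np.polyfit(years, to_start, 1)[0])  # same slope
      print(to_present[0], to_start[0])   # different 1950 values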

  10. “It’s minor because on average the biases that require adjustments mostly cancel each other out.”

    Average, adjustments, mostly.

    I see. The raw data is useless as is. By performing mostly random average adjustments, inaccurate and unreliable data is magically transformed into accurate and reliable data.

    More or less the same as running a toy computer model several thousand times varying inputs randomly, and then insisting the different useless incorrect – by definition – outputs somehow cancel each other out, and wrong is transformed into right.

    The joys of Warmist science! Big is small, minor is large, adjustments balance each other out, but we need them anyway, and so on.

    In any case, what’s the point? On average, for four and a half billion years the Earth has cooled. On average, over the last twenty years or so, the Earth has cooled around twenty millionths of a degree or so. Gee!

    What a waste of time! I hope you do all this at your own expense, or accept only voluntary donations from supporters.

    Live well and prosper,

    Mike Flynn.

    • If you want raw data then by all means USE IT.

      you will actually WARM the record after 1960.

      • I have been trying to follow this argument for a long time. What needs to be addressed are the comments that adjustments are “inconsequential” but would result in consequential warming if removed. Makes no sense to me. I guess some folks still consider warming of 0.02C consequential. Being a student of climate history, I still think the Viking settlement of Greenland, the well-documented temperature changes in Iceland, fishing records that show movement of species and many other records show that nothing unusual is happening. The Northwest Passage must have been open when it was discovered! At times I think we have replaced common sense with a boatload of formulas.

      • So what? Warm it a bit, cool it a bit, that’s what actually happened in the 1960s and 1970s until we got into the 1980s and it started to get warmer still.

        I’m with Mike, history doesn’t cancel out. BEST and the rest can adjust and homogenise all they like.

      • R2, you will like essay Northwest Passage. Mostly ‘selfies’!

    • Matthew R Marler

      Mike Flynn: By performing mostly random average adjustments, inaccurate and unreliable data is magically transformed into accurate and reliable data.

      More or less the same as running a toy computer model several thousand times varying inputs randomly, and then insisting the different useless incorrect – by definition – outputs somehow cancel each other out, and wrong is transformed into right.

      They have explained before what they did. And they’ll explain again, according to their post. That was not what they did.

  11. Which files are you getting from NIWA, because if it is their official files then they “adjust” the temperatures before you get them. They do have the REAL raw data as well, so you may have the right files, but I just wanted to check.

    As with most of the world, it would appear that NIWA’s adjustments, which should in theory be random, seem to be a significant “cause” of the global warming experienced here – i.e. the same story as Booker’s. It would seem to me that, since that was the case, NIWA scientists would be falling over themselves to have the quality of their adjustments peer reviewed, but in fact they fought a court case to prevent anyone getting access, except of course for the Australian BoM, which has (going by the Darwin example anyway) the same issue of having most of the global warming coming from “neutral” adjustments.

    I am sure you are familiar with both situations.
    Mary

    • daily unadjusted data.

      here is a clue. Take NZ. Make all the temps 0

      the global average wont change.

      • Good grief Mosher, talk about Exhibit A evidence for climate science’s utter contempt for locality and history.

      • I really am surprised by that, given the records for Campbell Island are amongst the longest and best for the sub-Antarctic, for Raoul Island amongst the longest and best for the sub-tropical, and the Chatham Islands amongst the longest and best for the sea to the south east of New Zealand. “New Zealand” is of course not just the landmass, but the wider region.

        However, I have never looked at it. I am pleased that you are using the old unadjusted numbers. If you ever want to verify they really are unadjusted, they were published each year, from 1893 onwards, in the official year book, which is online here: http://www.stats.govt.nz/browse_for_stats/snapshots-of-nz/digital-yearbook-collection.aspx

      • “the global average wont change.”

        for one country? No, not significantly.
        The point is that BEST, like most attempts at what they did, assumes that the “raw” data is actually raw. For NZ and Australia, it is well documented that it is not, and well documented that the adjustments changed the trend – even changed the SIGN of the trend. Nor was this change “insignificant” – in at least one case a >2C change in reported trend!
        So, given that we have 2 examples of this “corruption” of “raw” data, at what point is it valid to ask: how many countries do this, and how does this affect any attempt at analysis (inc BEST)?
        How many countries do I need to demonstrate do NOT supply “raw” data for their “raw” records before it is an issue you feel is worth pursuing?

        Just curious…

    • richardcfromnz

      Mary
      >”Which files are you getting from NIWA, because if it is their official files then they “adjust” the temperatures before you get them. They do have the REAL raw data as well, so you may have the right files, but I just wanted to check. ”

      Mosher
      >”daily unadjusted data.”

      Yes, raw unadjusted data from NIWA’s CliFlo database.

      Now the fun starts. All the institutions, NIWA, GISS, BEST (NIWA provide CRU with adjusted 7SS data – see below), select different stations, the same stations over different timeframes, more or fewer stations, different homogenization methodologies, etc., from the CliFlo database (and thence distributed elsewhere with the added complication of GHCN), either directly or indirectly.

      The NIWA product that corresponds to BEST’s kriging methodology is their multiple-station Virtual Climate Network (VCN). VCN predates BEST, i.e. NIWA were implementing a method similar to BEST’s before BEST got around to it, and selling it; it’s proprietary. VCN only starts at 1972 because CliFlo records from then on are good quality in NZ, and the VCN series profile corresponds to BEST 1980 – 2000 except that the respective effective altitudes are a little different, hence an absolute offset.

      The VCN and BEST trends since 1972 are essentially 2 cooling periods interrupted by a 0.45 C spike in 1997, giving an overall warming trend (which BEST describes by a +linear trend – don’t mention the cooling). There is no locality homogenization of stations in the VCN; each station is a separate entity. NIWA’s CO2-forced model-based forecast from 1990 (see their website) is already tracking too warm relative to VCN and BEST (and the 7SS).

      Except the VCN is not what NIWA provides CRU for CRUTEM4/HadCRUT4, that’s their 7SS which homogenizes multiple stations to compile a locality series e.g. Auckland. Albert Park is the long-running Auckland series but is UHI/Sheltering contaminated so the series moves to Mangere Treatment Plant and then to Mangere Airport. NIWA didn’t correct Albert Park for UHI/Sheltering, NZCSC did correct. NZCSC arrive at a 7SS series with a trend that is one third of NIWA’s.

      BEST continued to use Albert Park for Auckland but not Mangere (why not?). But even then, after adjustments and homogenization, BEST’s Auckland trend (Albert Park) is closer to the NZCSC Auckland trend (Albert Park + Mangere) than to NIWA’s. If BEST corrected Albert Park for UHI/Sheltering, their Auckland trend would be lower than NZCSC’s and far less than NIWA’s i.e. corroborating NZCSC but not NIWA.

      BEST pull in Albert Park as the primary data for Hamilton which is 125 km south in a different climate zone when they could use Ruakura Research situated in Hamilton (why not?) that NIWA uses for their Eleven Station Series (11SS).

      GISS appear to be thoroughly confused with Auckland, their GHCN datasets prior to final are here:
      http://data.giss.nasa.gov/cgi-bin/gistemp/find_station.cgi?dt=1&ds=13&name=Auckland

      Auckland, Auckland Airport, and Auckland Albert Park. I don’t know what “Auckland” is; it’s only a few years. Albert Park should be long-running but they only pick up 6 years. The airport site was only used as a club airfield from 1928, the airport was built in 1966, and NIWA pick up the airport site in 1998. After Albert Park, NIWA picked up Mangere Treatment Plant from 1976 – 1998. GISS seem to think the airport has been operating continuously from 1879 to 1990 (see link above). Odd, because the Wright brothers’ first flight was in 1903.

      The GISS final Auckland dataset is here (Airport):
      http://data.giss.nasa.gov/cgi-bin/gistemp/show_station.cgi?id=507931190000&dt=1&ds=14

      Except the final series (Airport) runs from 1950 to 1991 instead of 1879 above. But at least the airport was actually operating from 1966. I doubt that “Airport” data is from the airport.

      So you see Mary, even if the same raw data source is accessed by the respective groups in this New Zealand example, the final results from each are widely dissimilar.

  12. Ah yes, the old “we adjusted the data because” excuse…

    In the engineering field, measurements that are made for mission-critical applications are traceable to a national standard issued by a standards laboratory. I have participated in work like this; it often involves purchasing a “transfer standard” instrument, for example a “standard” yardstick. Then, through a careful process of tracking and accumulating errors through each step in the process, it is possible to state that “these measurements are traceable to the US NIST standard (for length, as an example) within an error (1 sigma) of plus or minus 1 micron (for example).”
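
    A minimal sketch of that error-budget bookkeeping (the stages and numbers below are invented, not any actual NIST traceability chain): independent 1-sigma contributions at each step combine in quadrature to give the single traceable uncertainty quoted at the end.

      from math import sqrt

      # Invented 1-sigma uncertainties (microns) for each step of a
      # hypothetical length-calibration chain back to a national standard.
      budget = {
          "national standard realization": 0.10,
          "transfer standard calibration": 0.30,
          "transfer standard drift":       0.25,
          "working instrument comparison": 0.60,
          "operator and environment":      0.50,
      }

      # Assuming independent contributions, they add in quadrature.
      combined = sqrt(sum(u ** 2 for u in budget.values()))
      print(f"combined 1-sigma uncertainty: {combined:.2f} microns")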

    None of these calibration and traceability steps have ever been applied to the “official” temperature record. It is a “dog’s breakfast” of garbled data with adjustments added ad hoc to make it match the “expected” results.

    The biggest example of confirmation bias is the observation that almost all of the adjustments make the distant past colder and the recent past warmer. The chances of that happening in a system with randomly distributed errors are smaller than your chances of winning the lottery. Of course it appears that some climate scientists have won the taxpayer-funded lottery: hey, give me some grant money and I can give you something to worry about, and I can make it look like the END OF THE WORLD AS WE KNOW IT, oh, and we need to study it more to make extra sure the world is coming to an end…

    The surface temperature record is worthless at this point in history, and no amount of taxpayer dollars can ever turn it into a useful data set to base policy decisions on.

    If you are flying on an airplane are you reassured that all the companies that manufactured all the myriad components (engines, wings, fuel tanks, etc…) did their work with calibrated measuring instruments that are traceable to a single national standard (NIST in the US), or are you OK if they just “adjusted” their measurements so their portion of a complex machine “looks OK with our measurements” ???

    Cheers, KevinK

    • KevinK: Re: the chances that “almost all of the adjustments make the distant past colder and the recent past warmer” are very small.

      I agree with your sentiment here, but have a possibly different explanation for the cause. I disagree with Mosher on many things, but I trust his honesty on temperature records ( recognizing that he can make mistakes just like anyone else). I am beginning to think that the Hansenite researchers are so incompetent and closed minded that they unknowingly clung to substandard methodologies which unwittingly undermined their goal of cooling the past and warming the present.

      JD

      • JD, “I am beginning to think that the Hansenite researchers are so incompetent and closed minded that they unknowingly clung to substandard methodologies which unwittingly undermined their goal of cooling the past and warming the present.”

        Not really. Their method, long range interpolation, produces a different product. In some cases it is a “better” product, in some cases not so much.

        It is pretty much a waste of time arguing about the “adjustments” unless you have a specific example of when interpolation is good and when it isn’t so good. For the climate of, say, Kansas, interpolation is not so good because Kansas temperatures would be “adjusted” to New Orleans climate. A Kansasite (whatever) might not care for that.

        Here is a Redneck take on the subject,

        http://redneckphysics.blogspot.com/2015/02/whats-deal-about-global-temperatures.html

  13. ==> “On balance the effect of adjustments is inconsequential.”

    I refuse to believe that. If that were true, then I couldn’t argue that the adjustments were made in order to scare the public.

    But I know that the adjustments have been made in order to scare the public. Therefore, it is obvious that the effect of the adjustments have been to create warming effect.

    It doesn’t matter what the data actually show. When will you get that through your head?

    • It’s pretty funny.

      They say they believe in climate change (just not human caused). I show them data that CONFIRMS their belief that climate changes… and they reject data that confirms their belief… weird

      • Yeah, in figure 1 the adjustments clearly reduce the average(s) prior to 1900, which suggests an increased magnitude to the warming coming out of the LIA. Assuming you can tell anything about temperature trends that far back, given the sparsity of the measurements.

        What stands out to me is how figures 5-8 show that the adjustments pull most of the warming into the high NH latitudes, where it’s more likely to make things more comfortable than less. For instance, all four figures show significant (to the eyeball) warming in southwestern Africa without adjustments, which almost goes away once the adjustments are made.

        Once people start actually looking at what the adjustments do to the impact, I’d expect it to be the alarmists who start yelling about a “conspiracy” to “hide the impact” of greenhouse changes. Come to think of it, didn’t BEST get some help from the Kochs?

      • Steven Mosher, “It’s pretty funny”

        It actually is, that is why I enjoy watching the silly back and forth. Weren’t you at one time going to see how much interpolation of coastal stations impacted Berkeley Earth’s final product? If a young paleo kinda guy is trying to calibrate his reconstruction of temperature in the wilds of BFEgypt, would he really want Atlantic City temperatures influencing his results?

      • “It actually is, that is why I enjoy watching the silly back and forth. Weren’t you at one time going to see how much interpolation of coastal stations impacted Berkeley Earth’s final product?”

        We looked at that in a couple of different ways: distance from ocean, and then distance from a large body of water. Because you know some guy in western Michigan would argue “but Holland and Grand Haven!!” Distance from coast worked to improve estimates of seasonality. Basically the temperature isn’t different; the RANGE of variability is. Hmm, I did some work with that long ago, looking at how variance changes with distance from coast.

        We may have more luck with cold air drainage, as I have found a dataset that saves me about a YEAR in compute time. Blogging bozos have no idea of the huge amount of time involved in driving an estimate from 1 degree resolution down to 1 km. It’s staggering.

        “If a young paleo kinda guy is trying to calibrate his reconstruction of temperature in the wilds of BFEgypt, would he really want Atlantic City temperatures influencing his results?”

        Depends how good his local temps are.

      • It’s all a matter of attribution. Weird that that would matter.
        ===========

      • thisisnotgoodtogo

        “It’s pretty funny.

        They say they believe in climate change (just not human caused). I show them data that CONFIRMS their belief that climate changes… and they reject data that confirms their belief… weird”

        BEST loses when Mosher strikes out. He pollutes the atmosphere and it’s not only with CO2.

  14. I compared one Swedish station, with data from SMHI (the Swedish Met office), with the data from BEST, and I didn’t understand anything. SMHI had one series, BEST three, and when I tried to look at specific years, there were few similarities. How is the quality check done? Where does BEST get their data from? What has happened to the data from SMHI before it reached BEST?

    • HenrikM,

      Since no one responded I will put one idea out there. I have read Mosher’s description of the BEST procedure, and one thing I believe they do is to split a temperature record if there is a station location change and give it a new number. This was an intentional choice to do it a different way than other groups had (rather than just adjusting the data?). So, the 3 you mention may be just this procedure showing up.

    • The data comes from 14 different sources: hourly, daily, monthly.
      Those 14 different sources can contain multiple records for the same station.

      Step one of our procedure is to try to eliminate duplicate records from different sources.

      Records can be combined as duplicates if

      A) the location is a close match AND
      B) the name is a close match AND
      C) the data reported is the same.

      If stations cannot be combined, they are kept as separate stations.

      In some cases (Norway, Sweden, etc.) a human might look at the electronic records and other non-public records and conclude that there is only one station.

      As local data providers clean up their metadata (for example, WMO asked people to re-survey sites) there is room for the station deduplication process to be improved.

      There are two kinds of error in station de-duplication.

      1. Failing to merge two stations which are “really” the same record.
      2. Merging two records that are really different.

      We do everything to AVOID the second error, because merging two records that are different will just cause the breakpoint algorithm to break them apart. And errors of type 1 don’t change the values computed; they do make you think you have more stations than you really do. Hmm, a recent improvement to the algorithm eliminated about 400 of these errors; there wasn’t any effect on the answers.

      For folks who work in database land, you can think of the problems and techniques for doing probabilistic joins.
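
      A rough sketch of the matching rule described above (the thresholds and helper functions are illustrative guesses, not the actual Berkeley Earth code): records are merged only when location, name, and reported data all agree closely, which errs on the side of keeping stations separate.

        from difflib import SequenceMatcher
        from math import radians, sin, cos, asin, sqrt

        def km_between(lat1, lon1, lat2, lon2):
            # Great-circle (haversine) distance in km.
            dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
            a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
            return 2 * 6371.0 * asin(sqrt(a))

        def likely_duplicates(a, b, max_km=1.0, min_name_sim=0.8):
            # Illustrative AND of the three criteria: location, name, data.
            close = km_between(a["lat"], a["lon"], b["lat"], b["lon"]) <= max_km
            similar = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio() >= min_name_sim
            common = set(a["data"]) & set(b["data"])
            same_data = bool(common) and all(abs(a["data"][m] - b["data"][m]) < 0.05 for m in common)
            return close and similar and same_data

        x = {"name": "Auckland, Albert Park", "lat": -36.85, "lon": 174.77,
             "data": {"1990-01": 19.3, "1990-02": 19.8}}
        y = {"name": "AUCKLAND ALBERT PARK", "lat": -36.85, "lon": 174.76,
             "data": {"1990-01": 19.3, "1990-02": 19.8}}
        print(likely_duplicates(x, y))   # True under these illustrative thresholds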

      • Again, an interested commenter notices that BEST’s amazing global temp diagram doesn’t actually match anything that actually happened at a locality, doesn’t reflect that locality’s history at all. Mosher’s response, blah blah error matching, something something. Whatever.

        By the way, it’s 16 data series at BEST website, Mosh. Not 14. Not good with numbers, eh.

      • Hide the decline

        I have made the point many times that the world comprises many different climates and trying to jam them all into one global average means we miss all the nuances evident in the local records.

        Local is after all where we all live and that is where we need to know what the trend is and why.

        Tonyb

  15. “Since the algorithm works to correct both positive and negative distortions it is possible to hunt through station data and find examples of adjustments that warm. It’s also possible to find stations that are cooled.”

    Please provide the numbers cooled vs. warmed.

    • “Please provide the numbers cooled vs. warmed.”

      You can find multiple GHCN histograms here. Or find the stations themselves on this Google Maps app.

    • lance the NUMBER will fool you.
      look at the black line.

      of course we could do no adjustments. Then we would have a HIGHER trend from 1960 to today.

      • ==> “of course we could do no adjustments. Then we would have a HIGHER trend from 1960 to today.”

        Maybe so – but still, you’re only adjusting the data in order to scare the public about warming.

        See, no matter what you say I can still make that argument.

      • thisisnotgoodtogo

        Sure you can make a stupid argument, Joshua. Nobody ever doubted that.

      • thisisnotgoodtogo

        Mosher said: “lance the NUMBER will fool you.
        look at the black line.

        of course we could do no adjustments. Then we would have a HIGHER trend from 1960 to today.”

        The thing is, it’s a double-edged sword. You make it steeper and the pause is more profound.
        You can’t cheat your way out of everything.

  16. “And next a chart for those who want to accuse the algorithm of cooling the planet”

    Here is my corresponding plot for continents. It also shows adjustments mostly warming Africa. There is an active version here. Using TempLS mesh – ie GHCN adj/unadj and ERSST.

  17. Figure 2 on the US shows that the most recent temperatures (after 1980) require the most adjustment upwards. Shouldn’t we be getting more accurate rather than less with our temperature measurements?

    • Hi Lance,

      In the 1980s the U.S. switched the vast majority of the sensor network from liquid in glass thermometers to MMTS thermometers, which resulted in a sizable cooling bias: http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/

      Similarly, time of observation was changing for many stations over the same period: judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/

      In fact, the U.S. post-1980s was unusually inhomogeneous in many ways; most of the rest of the world was much less so.

      • Zeke

        Thank you for your efforts and excellent work. I know we give Mosher a lot of grief but I think most appreciate his very hard work to produce a quality product.

        I come at this not from the statistical perspective, but from a large organizational view. Many, many thousands of individuals have been involved over the course of hundreds of years to observe, enter and archive data. (I see one of your locations in Germany had temperatures into the 1700s.) While everyone involved at those 40,000 locations over the last few hundred years may have been doing their best, I just wonder how many entries were somehow not reflecting the reality of what the actual temperature was that day. In spite of the great intentions of every person recording the temperatures and trying to record them accurately, there are going to be errors. Even if it is just writing down one number when thinking of another number, which I do all the time with Sudoku. This concern goes beyond the Time of Observation to simply the probability of not getting things right.

        This is not something that anyone can solve, so it certainly is not your problem or anyone else’s problem. But it is one of the areas that call into question how close our records are versus what actually happened in the field.

        Thanks again for your continued good work.

      • ceresco.

        That is a real problem. There are several approaches.

        First one has to make the stark admission that we will never have certainty about the past. All we have are records. It doesn’t matter whether the record was signed, or checked, or calibrated, or whatever. It is a record. It is not the temperature. It is a record of the temperature. Further, there is no way to go back in time and check it.

        Approach number 1 is the Phil Jones approach. He does a bottom-up estimate of the errors. For transcription error he has (as I recall) a really small number.

        Approach number 2 is TOP DOWN. We calculate the error in a top-down fashion and say the sum of all errors is X. We use the correlation function to do this (look at the nugget). This error contains all error, whatever the cause. It’s much larger than a bottom-up error. One of our British reviewers didn’t like this large error.

    • Fig 4 gives a better idea. And if you compare the two, CONUS is mostly metadata based – I suspect TOBS. It corresponds to the rate at which TOB was changing.

      Bias corrections aren’t (mostly) correcting inaccurate measurement. They fix inconsistency. You can measure accurately at 5pm and 9am, but they aren’t consistent. Neither is wrong.
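
      A minimal sketch of why the reading time matters even for self-registering max/min thermometers (synthetic temperatures, not USHCN data): with a late-afternoon reset, one hot afternoon can end up as the maximum of two successive observational days, which a midnight or morning reset largely avoids.

        import numpy as np

        rng = np.random.default_rng(3)
        days, hours_per_day = 365, 24

        # Synthetic hourly temperatures: diurnal cycle peaking mid-afternoon
        # plus day-to-day weather variability.
        hours = np.arange(days * hours_per_day)
        diurnal = 10 * np.sin(2 * np.pi * (hours % 24 - 9) / 24)
        weather = np.repeat(rng.normal(0, 2, days), hours_per_day)
        temps = 15 + diurnal + weather

        def mean_daily_max(temps, reset_hour):
            # Mean of daily maxima when the max marker is reset at reset_hour.
            shifted = np.roll(temps, -reset_hour)
            return shifted.reshape(-1, 24).max(axis=1).mean()

        print(round(mean_daily_max(temps, 0), 2))    # reset at midnight
        print(round(mean_daily_max(temps, 17), 2))   # reset at 5 pm reads warmer on average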

      • When I was at high school, the weather station I monitored for a term used thermometers that pushed a marker up to save the previous day’s max temp, and another thermometer which pulled a marker down to save the overnight minimum. Unless the US Weather Service chose not to use this type of thermometer in their network, it beats me how TOBS could make any difference to the daily readings. How can recording the daily maximum and overnight minimum at 9:00am, or 5:00pm, make any difference to the actual max and min? If you ask me the adjustments for TOBS are dubious. This punter is genuinely perplexed.

      • There is an explanation here.

  18. Robert, Zeke, Steve:
    Thanks for this.

    The question is not whether homogenization is necessary. It is.

    The question is whether homogenization is done correctly, a question that is hard to answer, and I would worry about the impact it has on the correlation structure.

    A more important question is perhaps why the public lost so much faith in climate science that they prefer to believe Booker over you guys. The Telegraph poll suggests that 90% of 110,000 readers are with Booker.

    I would hypothesize that the constant stream of climate nonsense — it’s five for twelve, kids will not know what snow is, we’re all gonna die, last chance to save the planet, climate change is coming to blow over your house and eat your dog — has made people rather suspicious of anything climate “scientists” say.

    If my hypothesis is correct, instead of arguing with Booker about the details of homogenization, you should call out the alarmists.

    • Hi Richard,

      When Shakhova argues for a “methane bomb” in the arctic, I loudly disagree.
      When Wadhams argues for the arctic being ice free in the next few years, I loudly disagree.
      When Booker says that station adjustments are the “biggest scientific scandal ever”, I loudly disagree.

      Maybe it’s selection bias on my part, but I more often come across the Bookers of the world misrepresenting my area of focus (surface temperature records) than those who are on the “alarmist” side of the fence. I’m sure if my area of expertise was arctic methane the story would be different :-p

      • Shakhova argues for 50 Gt of methane that is eligible for early release.

        Currently 0.5 Mt of methane is released by hydrates. Unless the release rate increases to over 1000 times the current rate it isn’t even an issue.

        About 650 Mt of methane is released per year with about 55% related to humans in some way (mostly rice and livestock).

        1820 ppb (5.3 Gt) more or less is the current atmospheric methane level, with about a 10-year lifetime.

        The Precambrian methane level was 1000 times higher than today.

        The cold reality is that unless all 50 Gt of unstable methane is released over a few years’ time, it won’t make any practical difference, and if there is a rapid release, the atmospheric levels of methane will rapidly decline afterward.


    • You are right, Richard. It’s climate scare fatigue. I have the habit of looking up into the sky and saying to people I encounter here, there and everywhere “Global warming.” Pouring rain, snow, hot, cold, whatever “Global warming.” Almost always they smile and roll their eyes.

      • “Could be wrong but I thought I saw Steven Mosher try to marginalise the importance of GCM’s above in this:”

        No.

        I am saying this.

        Take GCM results: compare them to adjusted data. Draw conclusions about how poorly the models perform.
        Take GCM results: compare them to raw data. Draw THE SAME conclusions about how poorly the models perform.

        Get it?

        Take any critical question in AGW and change the data from raw to adjusted: your answer won’t change in any material fashion.

    • Richard???

      Maybe I should write a book on climategate?

    • Richard S.J. Tol

      Zeke, Steve:
      I stand corrected by both of you.

    • “I would hypothesize that the constant stream of climate nonsense — it’s five for twelve, kids will not know what snow is, we’re all gonna die, last chance to save the planet, climate change is coming to blow over your house and eat your dog — has made people rather suspicious of anything climate “scientists” say.”

      Exactly. Thank you.

    • > The Telegraph poll suggests that 90% of 110,000 readers are with Booker.

      In other news, water is wet:

      Despite strong scientific consensus that global climate disruption is real and due in significant part to human activities, stories in the U.S. mass media often still present the opposite view, characterizing the issue as being “in dispute.” Even today, the U.S. media devote significant attention to small numbers of denialists, who claim that scientific consensus assessments, such as those by the Intergovernmental Panel on Climate Change (IPCC), are “exaggerated” and “political.” Such claims, however, are testable hypotheses—and just the opposite expectation is hypothesized in the small but growing literature on Scientific Certainty Argumentation Methods, or SCAMs. The work on SCAMs suggests that, rather than being a reflection of legitimate scientific disagreement, the intense criticisms of climate science may reflect a predictable pattern that grows out of “the politics of doubt”: If enough doubt can be raised about the relevant scientific findings, regulation can be avoided or delayed for years or even decades. Ironically, though, while such a pattern can lead to a bias in scientific work, the likely bias is expected to be just the opposite of the one usually feared. The underlying reason has to do with the Asymmetry of Scientific Challenge, or ASC—so named because certain theories or findings, such as those indicating the significance of climate disruption, are subjected to systematically greater challenges than are those supporting opposing conclusions. As this article shows, available evidence provides significantly more support for SCAM and ASC perspectives than for the concerns that are commonly expressed in the U.S. mass media. These findings suggest that, if current scientific consensus is in error, it is likely because global climate disruption may be even worse than commonly expected to date.

      http://abs.sagepub.com/content/early/2012/09/12/0002764212458274

    • But it’s a self-selecting poll, Richard. First you have to go to the Telegraph site, then read Booker’s column, then you have to give a damn enough to do the poll. Just a look at the comments below the article would show which way it was going to go. But then the Telegraph was home to James Delingpole too and is “rather to the right” (euphemism) on the political spectrum, so it is no surprise that its self-selecting survey would produce a result like that.

      • Of course. It is still a staggering amount of support for what is a rather stupid proposition.

      • It’s a completely unscientific sample, and doesn’t justify any kind of conclusion, but nonetheless Richard can use it to test his “hypothesis” about what “the public” believes in contrast to what is shown by empirically collected data w/r/t public opinion – and to find support for…..well….nothing really except confirming his bias.

        Nice science, that, from someone who’s concerned about activist scientists.

      • Except that the question asked was “Do you think Scientists have exaggerated the threat of global warming?”, not “Do you think Scientists have manipulated the temperature data to exaggerate warming?”

        It’s perfectly reasonable to answer yes to the first question and no to the second, which is likely what has happened. I concede that in the context of the article it’s not an unreasonable conclusion to draw, but I suspect that for those who read the article, it wasn’t the first thing they had read on the issue that informed their views.

    • Matthew R Marler

      Richard Tol (@ Richard Tol): I would hypothesize that the constant stream of climate nonsense — it’s five for twelve, kids will not know what snow is, we’re all gonna die, last chance to save the planet, climate change is coming to blow over your house and eat your dog — has made people rather suspicious of anything climate “scientists” say.

      I think you make a sound point about a problem that these authors have not themselves exacerbated in any way. However, the other important points are: (a) nothing in statistics conforms to intuition or common sense, and Booker appeals to common sense in pointing out the differences between the raw data and “adjusted” data in a few select extreme cases; (b) few people have even tried to master the details of the various algorithms (there is no accurate, intuitive, non-technical description of the BEST procedure possible).

    • Re: Telegraph poll. Sample bias. Can’t make conclusions based on results of that poll.

      • Do I really need to spell this out?

        Do I think that the Telegraph is a peer-reviewed, learned journal? No.

        Do I think that the straw poll was designed and tested to the most rigorous standard? No.

        Do I think the result has external validity? No.

        That said, after being prompted by a story about a scientific scandal, readers were asked their opinion. 89% agreed with the author. That means that, at that moment in time, to that readership, Booker told a convincing story.

        Ask the same crowd 15 minutes later and you might well get a different answer. Ask a different crowd and you might well get a different answer.

        But that does not take away from Booker’s power to convince a large number of people to at least temporarily believe a story that is just not true.

      • Do I think that Booker can convince 89% of his readers that the Queen really is a lizard from outer space? No.

    • thisisnotgoodtogo

      Good point, Richard Tol – but Mosher is all about attacking the skeptic he makes up for the occasion. He needs that skeptic.

  19. If the adjustments are really so “minor” as to be insignificant, why make those adjustments at all?

    • That’s probably a don’t ask, they won’t tell question. But I know that for some it’s fun, for others it’s profitable.

      • Don

        Nearly four years ago I wrote this article on the general unreliability of historic temperature records and how we should not believe they can be accurate to fractions of a degree. This 110 year old book is still worth reading as it highlights the problems inherent in constructing any temperature record, let alone a historic one.

        “The skill and diligence of the observer were of course paramount, as was the quality of the instrumentation and that a consistent methodology was employed, but this did not prevent the numerous variables conspiring to make the end result-an accurate daily temperature reading-very difficult to obtain. Indeed the errors inherent in taking measurements are often greater than the amounts being measured.

        Many of these basic concerns can be seen in this contemporary description from a 1903 book which relates how temperature recordings of the time were handled. The “Handbook of Climatology” by Dr Julius von Hann (b. 23 March 1839 d. 1 October 1921) records the sometimes acerbic observations of this Austrian, considered the ‘Father of Meteorology.’

        The book touches on many fascinating aspects of the science of climatology at the time, although here we will restrict ourselves to observations on land temperatures. (It can be read in a number of formats shown on the left of the page on the link below).

        http://www.archive.org/details/pt1hanhdbookofcli00hannuoft

        This material is taken from Chapter 6 which describes how mean daily temperatures are taken;

        “If the mean is derived from frequent observations made during the daytime only, as is still often the case, the resulting mean is too high…a station whose mean is obtained in this way seems much warmer with reference to other stations than it really is and erroneous conclusions are therefore drawn on its climate, thus (for example) the mean annual temperature of Rome was given as 16.4c by a seemingly trustworthy Italian authority, while it is really 15.5c.”

        That readings should be routinely taken in this manner as late as the 1900’s, even in major European centres, is somewhat surprising.

        There are numerous veiled criticisms in this vein;

        “…the means derived from the daily extremes (max and min readings) also give values which are somewhat too high, the difference being about 0.4c in the majority of climates throughout the year.”

        Other complaints made by Doctor von Hann include this comment, concerning the manner in which temperatures are observed;

        “…the combination of (readings at) 8am, 2pm, and 8pm, which has unfortunately become quite generally adopted, is not satisfactory because the mean of 8+2+ 8 divided by 3 is much too high in summer.”

        And; “…observation hours which do not vary are always much to be preferred.”

        That the British – and presumably those countries influenced by them – had habits of which he did not approve demonstrates the inconsistency of methodology between countries, cultures and amateurs/professionals.”

        http://wattsupwiththat.com/2011/05/23/little-ice-age-thermometers-%E2%80%93-history-and-reliability-2/

        tonyb
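
        A minimal numeric sketch of the bias von Hann describes for the 8am + 2pm + 8pm mean, using an idealized sinusoidal diurnal cycle (illustrative only, not any station’s data):

```python
# Minimal sketch, with an idealized sinusoidal diurnal cycle, of why a "daily
# mean" built from the 8am + 2pm + 8pm readings overstates the true 24-hour
# mean. (A pure sinusoid makes (max+min)/2 unbiased; the max/min bias von Hann
# mentions comes from the asymmetry of real diurnal cycles.)
import math

def temp_at(hour, mean=15.0, amplitude=5.0, warmest_hour=15):
    return mean + amplitude * math.cos(2 * math.pi * (hour - warmest_hour) / 24.0)

true_mean = sum(temp_at(h) for h in range(24)) / 24          # 15.0 by construction
fixed_hours_mean = (temp_at(8) + temp_at(14) + temp_at(20)) / 3
print(round(true_mean, 2), round(fixed_hours_mean, 2))       # roughly 15.0 vs 16.6
```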

      • I’ll take a look in the morning, tony. I am with you on the hundredths of a degree thing. Not feeling that.

    • vuurklip,

      I guess if you are described as an “. . . energy systems analyst and environmental economist with a strong interest in conservation and efficiency.”, it might make you look sciency. Got to keep those donations rolling in!

      Nobody provides funding for totally useless data adjustments, do they?

      Live well and prosper,

      Mike Flynn.

    • See my comments over at ATTP (I’ll copy them here).

      But before I do that:
      1. Historically we were asked or challenged to investigate the charges that
      people had their thumbs on the scale while doing adjustments.
      2. When you start you don’t know what you will find.
      3. You report what you find.

      Now from ATTP

      “I suppose one does the adjustments to eliminate the possibility that there is some bias in your answer that would cascade into questions that really matter. So what do I mean by really matter?

      1. There is nothing in the adjustments that will change the record in such a way as to influence our confidence in AGW. That is, with or without adjustments our confidence remains the same.
      2. There is nothing in the adjustments ( use them or not) that will cascade into climate reconstructions in such a way as to change our perceptions of say the LIA or the MWP in any
      significant way.
      3. There is nothing in the adjustments (use them or not) that will modify in any material way the consensus position on attribution.
      4. Nothing important about sensitivity will change. For example, doing or not doing them will
      not change the envelope of uncertainty in any material way.
      5. Nothing in the adjustments (use them or not) will materially affect one’s view of how well GCMs
      hindcast or forecast.

      So one might, for example, use an unadjusted series and an adjusted series in, say, ‘Curry and Lewis’.
      My take is this: that paper wasn’t a paradigm shifter before, and it would not be a paradigm shifter after. Numbers might squish around a bit, but you are not going to find out that Lindzen was right on sensitivity, because he was not.

      So you might paraphrase what I mean by materiality as the following: some important core position. To put it starkly, CO2 will warm the planet regardless of what I do to the temperature series. We knew it would warm the planet before anyone compiled a series and we know it regardless of what anyone does to it.

      I suppose one could argue that the temperature series has some temporary leverage into core beliefs because of the muting of trends in recent years.”

      • I asked why adjustments are necessary considering that they are so insignificant. I cannot discern any justification for adjustments in your above response.

      • @ Steven Mosher

        “There is nothing in the adjustments that will………..” points 1 through 5.

        There IS something in the adjustments that CONSISTENTLY magnifies the ‘problem’.

        “To put it starkly, C02 will warm the planet regardless of what I do to the temperature series. we knew it would warm the planet before anyone compiled a series and we know regardless of what anyone does to it.”

        To put it starkly, when I cooked hamburgers on my gas grill last summer, I DID warm the planet, no matter what anyone does to the temperature series. I knew it would warm the planet before anyone compiled a series and I know it regardless of what anyone does to it.

        In the absence of a firm understanding of the MAGNITUDE of the warming, showing warming as a function of ACO2 (since that is the only part of total CO2 that we can potentially control), what does EITHER statement have to do with the proposition that ACO2 poses an existential threat to the biosphere that demands the establishment of a world wide ‘Climate Change Policy’, giving governments the power to control and/or tax every activity with a ‘carbon signature’, with the magnitude of the carbon signature to be controlled and/or taxed determined solely by the governments doing the controlling and/or taxing?

        One of the posters here has written a paper (http://agwunveiled.blogspot.com) which claims that the historical temperature anomaly can be reproduced with a correlation coefficient of .95 by considering ONLY the time integral of the sunspot number. CO2 is ignored completely. I have no idea if he is correct or not, but does anyone even CLAIM that they can reproduce the historical temperature anomaly BETTER as a function of CO2? I would be surprised if they could, since CO2 has been increasing monotonically since we began measuring it, while the observed planetary temperature (however defined, if it ever has been) hasn’t been doing ANYTHING monotonically over the same time frame.

      • Bob

        ‘There IS something in the adjustments that CONSISTENTLY magnifies the ‘problem’.”

        SST adjustments COOL the record.
        The ocean is 70% of the planet.
        So let’s use raw ocean.
        The problem will get magnified like you can’t imagine.

        Ya, team raw data… that’s a hat-trick own goal.

        Next, the adjustments do NOT consistently magnify the problem.
        We cool the land from 1960 on.

        Go ahead… ask me what we did to the precious pause.
        Did we magnify the pause?

      • Steven Mosher, “SST adjustments COOL the record.”

        You have said that a dozen times, how much?

    • John Smith (it's my real name)

      “If the adjustments are really so “minor” as to be insignificant, why make those adjustments at all?”
      my guess
      subconscious attempt to strengthen the argument for a strongly held belief

    • Matthew R Marler

      vuurklip: If the adjustments are really so “minor” as to be insignificant, why make those adjustments at all?

      Why adjustments were computed has been written about at length, and includes such things as replacing and moving thermometers.

      That the adjustments have made such a small difference to the overall trajectory of the mean was not known until after the adjustments were computed.

      • So, moving thermometers to different settings has an insignificant effect? Why move them in that case?

        The state of Denmark comes to mind …

      • Matthew R Marler

        vuurklip: Why move them in that case?

        That’s only known on a station-by-station basis. The BEST team is stuck with the fact that they had been moved (or replaced), and have to work with that.

      • They’d miss station moves which don’t produce a break in the record, and there’s no reason to think the new location would record the same series as the old one was recording.
        ==============

  20. The fact that they need to cheat to scare us really does not help them much.

  21. ‘Moist enthalpy hereafter referred to as equivalent temperature (TE), expresses the atmospheric heat content by combining into a single variable air temperature (T) and atmospheric moisture. As a result, TE,
    rather than T alone, is an alternative metric for assessing atmospheric warming, which depicts heat content. Over the mid-latitudes, TE and T generally present similar magnitudes during winter and early spring, in contrast with large differences observed during the growing season in conjunction with increases in summer humidity. TE has generally increased during the recent decades, especially during summer months. Large trend differences between T and TE occur at the surface and lower troposphere,
    decrease with altitude and then fade in the upper troposphere. TE is linked to the large scale climate variability and helps better understand the general circulation of the atmosphere and the differences between surface and upper air thermal discrepancies. Moreover, when compared to T alone, TE is larger in areas with higher physical evaporation and transpiration rates and is more correlated to biomass seasonal variability.’
    https://pielkeclimatesci.files.wordpress.com/2011/11/nt-77.pdf

    Homogenize that. It’s the reason surface temperature datasets are obsolete for climate purposes.

    Here’s GISTemp and UAH – http://www.woodfortrees.org/plot/gistemp/from:1984/plot/gistemp/from:1984/trend/plot/uah/from:1984/plot/uah/from:1984/trend – the trends are pretty similar. But for atmospheric energy content the satellites are a far more inclusive summation, as the differences between T and Te fade with altitude.
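
    A minimal sketch of the equivalent-temperature idea quoted above, assuming specific humidity is available alongside air temperature (the constants are standard approximate values, not taken from the paper):

```python
# Equivalent temperature (moist enthalpy expressed as a temperature):
# combine air temperature and specific humidity into one heat-content metric.
LV = 2.5e6    # latent heat of vaporization, J/kg (approximate)
CP = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)

def equivalent_temperature(t_kelvin, q_kg_per_kg):
    """T_E = T + (Lv/cp) * q, so moister air reads 'warmer' in heat content."""
    return t_kelvin + (LV / CP) * q_kg_per_kg

# Example: 288 K (15 C) air with 8 g/kg specific humidity
print(equivalent_temperature(288.0, 0.008))   # about 307.9 K
```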

  22. Pingback: Climategate II? | Scottish Sceptic

  23. I wanted to do another type of checking with the Norwegian raw data. I wanted to calculate the temperature change within each segment and then somehow weight the temperature changes together.
    That approach was quickly aborted, as no such logs of station data exist for the weather stations in Norway, according to our meteorological institution here.
    I am looking forward to the explanation of the adjustments of the surface temperatures!
    I am also very interested in the adjustments of the Argo data and the sea level satellite data.
    All these adjustments seem to change data series in the “same way” (you know what I mean). That fact in itself is not proof of any fraud, but is still interesting to look into.
    You know, there may still be a bias. If you look for sources of errors that explain a lack of increase, it’s not unlikely that you will find just that.

    Best regards,
    Knut Witberg

    • Anders Valland

      Norwegian station data are a mess. I’ve tried getting something sensible for Trondheim, 3rd largest city here. No good, best we can do is our airport 35 km away. Is the difference large? Not really in terms of absolute temps, but in terms of anomalies? Who knows….

      • Agree. Most stations have either been moved, shut down, or have incomplete series. So I guess that’s where homogenization comes in. Fine, but I don’t grasp why the WMO accepts this. Interpolation of data should be banned. Our new digital weather instruments have to go through thorough calibration and acceptance tests. And then we just interpolate areas without stations?! In the context of discussing hundredths of a degree globally it doesn’t make sense.

  24. The reason that claims such as those made by Booker, Delingpole and Homewood gain traction is that they are written in a manner that can be understood by the average punter, who may have some slight background knowledge and perhaps an inherent bias either way.

    Reading all this it strikes me that there is no clear rebuttal in a fashion intelligible to those who may have become concerned by the original articles.

    How about a version that actually rebuts the original articles and makes the case for the BEST method in a manner that accepts that readers can be intelligent without being climate geeks?

    tonyb

    • Tony, Homewood’s stuff uses the NASA GISS homogenization of GHCN raw. The CONUS chart reposted above showed the NCDC progressive homogenization changes to US GHCN by comparing the final results in 1999 to 2011. Steirou’s paper was all GHCN with reasonably complete 100 year records.
      I personally checked the BEST result for a number of the more egregious examples used in the essay. In each case, BEST was ‘better’ than GISS or NCDC. For example, Sulina Romania. A Danube delta town of 3000 reachable only by boat. GHCN raw shows slight warming, maybe half a degree since 1880. NCDC homogenization cooled the past so that there is almost a 4C trend. BEST gives it a 0.9C trend. Plainly the NCDC GHCN adjustment is unjustified.
      OTOH, BEST Rutherglen Research is problematic. This is the Australian station Graham Lloyd wrote about down under. An agricultural research station in continuous operation at the same location since 1913. Well maintained. Raw shows no trend or slight cooling. BOM Acorn homogenized it into 2C of warming since 1913. BEST only has data from 1965! And produces 2C of warming. Even worse than BOM, and plainly something is not right with the input data feed.

      Homewood’s critique of GISS is twofold, and is the same for both a portion of the Arctic and central South America (Booker’s two printed columns). Both areas are data deficient and were infilled from adjacent regional stations. Those stations generally had their past cooled (and sometimes their present warmed) by GISS homogenization. Those exaggerated trends were then infilled, producing large regional warmings that NASA displays as big red blotches on its end charts, compared to grey (no data) on the ‘raw’, surrounded by no red. And anyone can get all of this and see for themselves, which is why it has been so hard hitting.

      If BEST wanted to contribute substance to those specifics, rather than just defend their method as above, they would show their results for these specific regions. They have many more station records than just GHCN, and (based on my spot checks) usually more reasonable automated adjustments. I suspect that were they to do so, they would also show GISS is deeply flawed.

      • Rud

        You know my admiration for Hubert Lamb. There is an article about him here:

        http://wattsupwiththat.com/2015/02/10/transformation-of-the-science-of-climatology-in-like-a-lamb-out-like-a-lion/

        We seem keen to erase the past and pile insults on it as being comprised of worthless anecdotes. Climate science has stood still for the last two decades as we have chased models, novel proxies and extraordinarily complex statistical analysis and have neglected looking for the information that can give us pointers as to how unusual the current climate is compared to the past.

        From the archives it doesn’t appear to be that strange, indeed it is rather benign. The climate has varied in the past and will vary in the future and why we should be surprised by a temperature rise since the LIA baffles me.

        Tonyb

      • I’m with Rud. BEST uses Rutherglen temps in Oz and only from 1965. It’s nonsense. So now we know BEST contains at least one incorrect data point. Well, since it contains at least one isn’t it possible it contains many more? The answer is yes which is where Delingpole and Booker are coming from.

    • How about you hold Delingpole, Homewood, and Booker to account for IGNORING the stations where adjustments cool the record?

      read this thread here.

      No matter how much work we do there will be people who say

      1. Oh, the warmists control the databases… it’s fraud down to the bottom.
      2. Oh, go check 1.6B records in their paper form.
      3. Oh, never mind, look at UHI.
      4. Oh wait… .1C changes everything we know!
      5. Well, if they don’t matter then why do adjustments?
      6. Oh wait, average temperature doesn’t exist.
      7. Oh wait, look at UAH.
      8. What about Climategate?

      And at the end of the day you want to use the D word, because the behavior gets very close to the tactics of people who debate the Holocaust by focusing on the actual numbers of people gassed, or by asking where the piles of ash are.

      • You miss the point and become nasty at the same time. In the case of Paraguay all stations were substantially warmed by GISS v3. In the case of the Arctic region, 19 stations were warmed, four were not, and none were cooled. And those adjusted warm stations were used by GISS to infill much larger regions, in both cases resulting in big red warmings on their global map. What does BEST show for those regions? That would be a logical way to address Homewood’s very specific regional GISS critiques.
        Getting nasty and slinging D***** mud does not.

      • Delingpole, Booker and Homewood aren’t screeching at me to make me drive less and pay more tax and revere Gaia. So no, I don’t have to hold them to account. You always do this, Mosh – say it’s up to us to do a better job than you guys have done about x y z issue. It’s not our job out here in commenter land. You just don’t like that many of us have reasonable doubt about your infallibility.

      • Rud. I have studied your comment with some diligence. I cannot figure out what D with 5 * after it could mean. 7? sure. But 5?

    • Matthew R Marler

      tonyb: How about a version that actually rebuts the original articles and makes the case for the BEST method in a manner that accepts that readers can be intelligent without being climate geeks?

      Consider some attributes of scientific exposition:

      clear

      accurate,

      complete

      nontechnical

      concise

      Which ones do you want to give up? No accurate and concise exposition is likely to be clear to someone who has not mastered the technical language. No accurate exposition is likely to be nontechnical (that’s why the technology and technical language were invented in the first place.)

      Understanding statistics requires understanding conditional distributions (p-values, for example, are estimates of conditional probabilities). Conditional probabilities and conditional distributions are non-intuitive, so the first step in understanding statistical methods is to give up on intuition. Booker has an argument that appeals to intuition: it can’t possibly be technically accurate.

      • Matthew

        Journalists and ourselves are perfectly capable of extracting information from technical reports if they are written in a clear fashion.

        Having read literally hundreds of science papers from the mid 1800’s onwards, I would say that generally the early ones are a difficult read.

        I have a weighty tome entitled ‘The History of Science’ dating from 1886 which verges on the pretentious.

        But the ones exploring pure science, without any agenda or objective, from the middle period – roughly the 1920’s to the late 1970’s – are, on the whole, models of clarity no matter their complexity.

        As regards many, but not all, papers from that time to the present, I would say their narrative and clarity have declined. Often it is difficult to understand what their conclusions are.

        Whether that is because most are for sale and feel they need to fit a certain format, whether computers have become a substitute for writing, or whether the standard of scientific English has declined, I don’t know.

        Tonyb

  25. richardcfromnz

    >”As we will see the “biggest fraud” of all time and this “criminal action” amounts to nothing.”

    Could not be more wrong. See Paul Homewood’s post on New Zealand, in particular the GISS vs BEST comparison of Gisborne Aero adjustments starting here:

    https://notalotofpeopleknowthat.wordpress.com/2015/02/09/cooling-the-past-in-new-zealand/#comment-37569

    BEST make no adjustments over the period of GISS adjustments, 1963 – 2002:

    GISBORNE AERODROME AWS
    Breakpoint Adjusted Annual Average Comparison
    http://berkeleyearth.lbl.gov/stations/157058

    GISS make 7 adjustments over that period:

    At 1963 the cumulative adjustment is 0.7
    At 1968 the cumulative adjustment is 0.6
    At 1972 the cumulative adjustment is 0.5
    At 1975 the cumulative adjustment is 0.4
    At 1980 the cumulative adjustment is 0.3
    At 1982 the cumulative adjustment is 0.2
    At 1986 the cumulative adjustment is 0.1
    At 2001 the cumulative adjustment is 0.1
    At 2002 the cumulative adjustment is 0.0

    For example, in GISS monthly adj series (see graph below for raw monthly),

    The GISS Gisborne Aero 1973 cumulative adjustment is 0.5

    1973 monthly raw (top) vs adjusted (bottom)
    19.4 18.5 16.2 14.6 12.7 10.0 8.6 10.5 12.3 14.2 17.2 17.2
    18.9 18.0 15.7 14.1 12.2 9.5 8.1 10.0 11.8 13.7 16.7 16.8
    0.5 difference for every month

    The 1974 – 1977 range of common cumulative adjustment is 0.4

    1974 monthly raw (top) vs adjusted (bottom)
    17.7 20.6 15.1 14.8 11.2 10.1 10.1 8.9 12.1 13.6 15.5 17.8
    17.3 20.2 14.7 14.4 10.8 9.7 9.7 8.5 11.7 13.2 15.1 17.4
    0.4 difference for every month

    1977 monthly raw (top) vs adjusted (bottom)
    18.4 18.9 17.8 14.5 10.9 10.1 9.4 10.4 10.2 13.4 14.9 17.5
    18.0 18.5 17.4 14.1 10.5 9.7 9.0 10.0 9.8 13.0 14.5 17.2
    0.4 difference for every month

    The 1978 cumulative adjustment is 0.3

    1978 monthly raw (top) vs adjusted (bottom)
    19.2 19.5 17.6 16.4 12.0 10.0 9.7 10.3 11.3 12.0 16.0 18.0
    18.9 19.2 17.3 16.1 11.7 9.7 9.4 10.0 11.0 11.7 15.7 17.7
    0.3 difference for every month

    Apparently, according to GISS (but not BEST), there were 2 distinct 0.1 steps from 1978 to 1977 and from 1974 to 1973. Similarly for the other ranges of common cumulative adjustments.

    There is no justification for these 2 steps (or the others) in view of the raw monthly data series (and no site moves): http://climate.unur.com/ghcn-v2/507/93292-zoomed.png

    GISS has some explaining to do.
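
    A minimal sketch of the kind of check described above: subtracting the adjusted series from the raw one month by month exposes the size of the step adjustment applied in a given year (the 1974 Gisborne Aero values are those quoted above):

```python
# Subtract adjusted from raw, month by month, to expose the size of the
# step adjustment GISS applied in a given year (1974 values quoted above).
raw_1974 = [17.7, 20.6, 15.1, 14.8, 11.2, 10.1, 10.1, 8.9, 12.1, 13.6, 15.5, 17.8]
adj_1974 = [17.3, 20.2, 14.7, 14.4, 10.8, 9.7, 9.7, 8.5, 11.7, 13.2, 15.1, 17.4]

diffs = [round(r - a, 1) for r, a in zip(raw_1974, adj_1974)]
print(diffs)                    # a uniform 0.4 for every month
print(len(set(diffs)) == 1)     # True when the offset is a single constant step
```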

  26. Robert Rohde, Zeke Hausfather and Steve Mosher write in the post after plotting global surface temperatures:
    “As Figure 1 illustrates the effect of adjustments on the global time series are tiny in the period after 1900 and small in the period before 1900.”

    “Tiny” doesn’t tell us anything.

    If you would, please plot the annual global “raw” versus final BEST land surface air temperature anomalies along with their linear trends for the period of 1900 to 2014. The BEST data should have a linear trend of about 0.1 deg C/decade while the “raw” data should have a trend of about 0.08 deg C/decade.

    Thank you.
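
    For reference, the trend comparison being requested amounts to an ordinary least-squares slope on each annual series, scaled to degrees C per decade; a minimal sketch (the arrays are placeholders, not actual Berkeley Earth values):

```python
# Ordinary least-squares trend, expressed in deg C per decade, for comparing
# a "raw" annual anomaly series against an adjusted one over 1900-2014.
import numpy as np

def trend_per_decade(years, anomalies):
    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    return 10.0 * slope_per_year

# usage with hypothetical arrays covering 1900-2014:
# years = np.arange(1900, 2015)
# print(trend_per_decade(years, raw_annual), trend_per_decade(years, adjusted_annual))
```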

    • “Tiny” doesn’t tell us anything.

      I will make you a bet.

      1. Take our raw series.
      2. Pick any important science paper you like.. say Curry and Lewis
      3. Use the raw data instead of adjusted data.
      4. See if the conclusions change.

      In other words YOU do some science. You demonstrate how the difference between adjusted and raw CHANGES our understanding.

      does it change attribution arguments? nope
      sensitivity arguments? Nope
      reconstruction arguments? nope
      gcm arguments? Nope.

      in other words, switching to RAW won’t falsify any important work.

      Now, switch to raw on oceans.. everything you ever have done will be junk.

      • Steven Mosher, thanks for the attempt at misdirection, but it’s not going to work. I asked a very simple question and you’ve gone off on a little rant.

        Now, are we to assume by your misdirection that you’re confirming that for the period of 1900 to 2014, the BEST land surface temperature data have a linear trend of about 0.1 deg C/decade while the “raw” data have a trend of about 0.08 deg C/decade? Sure looks like it.

        With respect to HADSST3 data and the ICOADS SST data, while both exclude satellite data, they support, not contradict, my ENSO-related findings during the satellite era…or aren’t you aware of that?

      • thisisnotgoodtogo

        “4. See if the conclusions change.”

        Oh, so in the same way as Mikey’s method was A OK.
        Gotcha.

    • Matthew R Marler

      Bob Tisdale: Robert Rohde, Zeke Hausfather and Steve Mosher write in the post after plotting global surface temperatures:
      “As Figure 1 illustrates the effect of adjustments on the global time series are tiny in the period after 1900 and small in the period before 1900.”

      “Tiny” doesn’t tell us anything.

      True enough. It is the graph that explains what “tiny” means. Thus, with the graph displayed, you may accept or reject the judgment that the effects are “tiny”. Would you call them “non-tiny”; “a tiny bit more than tiny”; “a lot more than tiny”? Whatever, the graph is there for anyone to judge.

      • Matthew R Marler, I agree that “tiny” is subjective but I disagree that anyone can judge its significance by looking at that time-series graph…which is why I asked them to confirm the warming rates.

  27. “On this metric, Berkeley, NASA GISS and NOAA are all consistent with GCMs but on the low side of the distribution.” I find this statement very curious. It seems to be saying that the data (albeit reconstructed) fit the models. I’m generally much happier when the models fit the data. I suppose climate science is different.

  28. ‘Berkeley Earth developed a methodology for automating the adjustment process.’ Is this process self-aware, so that it has no human input, or is it man-made in the first place? The fact that it is automatic after you start it makes no difference to the fact that before it started you made choices about what it should do. And it is in those choices that the problem lies, for the ‘right’ initial choice can offer you the best chance of getting the ‘right’ results.

    It’s part of the mythology of ‘models’ that gives them this notion of being outside of human control, when in fact no matter how powerful the machine running them, they can do nothing but what they are told to do and see no data they are not given. They actually have less ability to think for themselves than a newborn child.

    • What you may have missed is that we validated the algorithm in a double blind test.

      So yes, we made choices.
      We tested those choices in a systematic manner.
      The choices we made don’t change the answer.

      BUT, if you like you have these options.

      1. Use raw data ( the trend since 1960 will GO UP, 2014 will go UP)
      2. Create your own algorithm and test it.

      • Mr. Mosher, you have stated several times above that the adjustments are “small” or “tiny” and don’t have an impact, but as you have stated again, the trend will “GO UP” if the adjustments aren’t used. Well, if your first statement about them being “small” and/or “tiny” is true, then wouldn’t that make the “GO UP” tiny? It seems contradictory to argue they are small, but then, if they are not used, it is an “own goal” as they will make the trend “GO UP”.

      • No, think harder.

      • “Think harder”
        You’ve mentioned in other posts that adding new data increases warming. Do you notice the inconsistency between your double blind tests that don’t change the answer, and new data that does?

  29. Anders Valland

    Steven Mosher says, Feb.10 at 1:42 am: “GLOBALLY….the adjustments are mousenuts”
    Steven Mosher says, Feb.10 at 4:12 am: “you will actually WARM the record after 1960” – about using non-adjusted data.

    I don’t really get the hang of those two statements. If the adjustments really are mousenuts, then you don’t warm the record after 1960 whatever you do. If the adjustments are larger than mousenuts (rats, rabbits…?) then you might warm that record. I’m sure if you keep the record with your nuts, you’ll warm the entire period….

    Anyways, you have now demonstrated that making adjustments to the record does not contribute anything. So please publish this and make sure everybody stops adjusting the data. You have demonstrated, according to yourselves, that the raw data is more than good enough.

    On a final note, your figures 1 and 4 show that the adjustments, globally, around 1860 are on the order of 0.4 degrees of cooling. That is out of a total change of 0.8 degrees. How you can say that this is minor is beyond me. The cooling pre-1900 looks to be between 0.1 and 0.4 degrees, which is 10-50% of the entire change. It might be that you find this particular part of the record uninteresting with regard to the “CO2 vs temperature” question, and thus that the only period that can say something useful about the “CO2 vs temperature” question is after 1950 or 60 or 70. But that would also be beyond me.

    • “So please publish this and make sure everybody stops adjusting the data.”
      That’s a good point, Anders!

    • “On a final note, your figure 1 and 4 shows that the adjustments, globally, around 1860 are on the order of 0.4 degrees cooling. That is out of a total change of 0.8 degrees. How you can say that this is minor is beyond me. The cooling pre-1900 looks to be between 0.4-0.1 degrees, which is 10-50% of the entire change.”

      The Stevenson screen was invented in 1864. It seems very probable that the temperature records from earlier are affected by inadequate sheltering. Many of the stations were located in cities as well. Homogenization methods adjusting those records show that these methods do what they are supposed to do: fix inhomogeneities.

      • “Homogenizing methods adjusting those records show that these methods do what they are supposed to do”

        No. Homogenizing methods adjusting those records MAY show that these methods do what they are supposed to do. The point is to know whether a technique works or not.

      • Anders Valland

        Robert, Zeke or Steve, could you elaborate on this? Was it inadequate sheltering, or the warm city dwellings (urban heat island)? How do we know these things account for what amounts to half of the entire warming?

    • “Anyways, you have now demonstrated that making adjustments to the record does not contribute anything. So please publish this and make sure everybody stops adjusting the data. You have demonstrated, according to yourselves, that the raw data is more than good enough.”

      You won’t like the answer if that approach is taken with

      A) SST ( adjustments cool the record)
      B) UAH ( adjustments warm the record )

      • Anders Valland

        Why would I not like the answer, Steven?

      • Anders Valland

        Steven, you have explicitly stated several times that adjustments do not make any difference compared to raw data, that is, they do not alter the overall conclusion. Do you still think adjustments are necessary, and if so: why?

      • thisisnotgoodtogo

        “Why would I not like the answer, Steven?”

        Because he needs to relegate you.

  30. I am not sure if joking is regarded as “relevant” but I take the risk:
    Is the reason for the hiatus that they have run out of adjustments?

    • @ tunka

      “Is the reason for the hiatus that they have run out of adjustments?”

      At least they have been able to demonstrate unambiguously that, as claimed by the experts, at least half of the ‘global warming’ is in fact anthropogenic.

      The ACO2 connection to ANY empirical climate variations remains a bit more nebulous.

  31. Geoff Sherrington

    Zeke,
    Why do you use anomaly data rather than measured T?
    If the reference period is cooler than it should be, other dates are adjusted hotter – and vice versa.
    Further, nobody knows pre-satellite whether the reference period is cooler than it should be, because the land temps in particular are heavily derived from the same original observations. The disagreement that exists between sets is large and man-made.
    You have done some good approaches like the scalpel, a long reference period and kriging, but still there are fundamental problems. One is: what to do when you select a temperature to (say) do an SB calculation, raising to the power of 4? Do you consider your adjustments are closer for conversion to a credible absolute, or not? Relatedly, how do we use absolutes from other data sets that can change monthly? Imagine publishing monthly corrections in the journal that published your paper. It is almost a truism, BTW, that a homogenisation adjustment will change the prior trend.
    Another feature looks odd around 1940. On your first graph, red highs predominate 1940-60, blue highs 1920-40, and vice versa for lows. Did your data study cast additional light on the 1940s problem?
    I could go on for hours, but it would emerge that I am old-fashionedly in favour of total avoidance of adjustments in favour of raw data sets with detailed metadata for the region chosen for a study so the investigator can correct what is needed and explain why.
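
    A minimal sketch of the anomaly-versus-absolute issue raised here: a Stefan-Boltzmann calculation needs an absolute temperature, so an anomaly must be added to an assumed baseline, and any error in that baseline propagates into the computed flux (the numbers below are illustrative only):

```python
# A Stefan-Boltzmann flux needs an absolute temperature, so an anomaly series
# must first be added to an assumed baseline; baseline error then leaks into
# the flux at roughly 4*sigma*T^3 per kelvin (about 5.4 W/m^2 per K near 287 K).
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def sb_flux(t_kelvin):
    return SIGMA * t_kelvin ** 4

baseline = 287.0   # assumed absolute mean temperature, K (illustrative)
anomaly = 0.8      # anomaly to be converted, K
for baseline_error in (0.0, 0.5, 1.0):
    t = baseline + baseline_error + anomaly
    print(baseline_error, round(sb_flux(t), 2))
```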

  32. Why concentrate on Berkeley Earth? The most often quoted data set is GISS and it is very clear that there have been massive adjustments to this data set.

    One only need look at an article by James Hansen from 1999 where he shows, from the GISS data set, that the USA had cooled since the 1930’s up until that time. Look at the USA graph: http://www.giss.nasa.gov/research/briefs/hansen_07/

    Now, compare that with the GISS data set for the USA today and tell us all why it has changed to warming and we are now getting “hottest day evah!!!”. In other words, it is fraud.

    • Matti, good question. The Booker article is about GISS and GHCN.

      • Sorry Paul.
        My link to “others” was supposed to go to the Watts
        articles on Berkeley

        http://wattsupwiththat.com/2015/01/31/saturday-silliness-the-best-adjustments-of-temperature/

      • Your hypersensitivity shows in this comment/link. The Josh cartoon posted at WUWT is a generalization of the Paul Homewood analyses of GISS. ‘BEST ADJUSTMENT evah!’ refers to WARMEST YEAR evah, another of his cartoons, on the Gavin Schmidt 2014 pronouncement that turned out to have a 32% chance of being right. BEST quite appropriately did not make that error.

      • thisisnotgoodtogo

        Rud,
        Schmidt actually had the gall to say in email that it was “likely” – likely, in quotation marks – “It is *likely* to have been the warmest year for the planet”, which indicates by NOAA literature 66.7% to 90% probability, when it was only a 38 % probability – therefore “more unlikely than likely”.

        Schmidt used quotation marks in his email to say “likely” . NOAA literature uses the quotation marks to say:
        66.7%-90% – “likely”

        Schmidt accusing Professor Curry of making things up is the height of dishonest climateballer-talk.

    • Because Berkeley Earth and GISS (and GHCN for that matter) produce nearly identical results globally. Berkeley serves as a useful independent test (and, as it turns out, a validation) of the approach taken by other groups.

      • This is complete nonsense Zeke. The question was regarding the USA temperature record which you have not and cannot address.

      • Ok, Berkeley Earth gets U.S. results nearly identical to NCDC/GISS.

      • You still have not addressed the issue. Please take the time to look at the GISS graph in the 1999 article from James Hansen that I linked to. Then compare it with GISS TODAY.

      • Looks like Steven Goddard also noticed what I did and created a nice overlay for you:

      • Zeke Hausfather | February 10, 2015 at 10:56 am

        Because Berkeley Earth and GISS (and GHCN for that matter) produce nearly identical results globally. Berkeley serves as a useful independent test (and, as it turns out, a validation) of the approach taken by other groups.

        Not necessarily, it just shows both methods have similar results, but that could be that both have the same flaw. I would be pretty surprised if most of the surface series didn’t all augment the analysis by adding modeled surface temps. They all homogenize the data, they all infill missing surface area.

        But I don’t think the derivative of station temp is indicative of a gradual forcing; it appears to be dominated by large short-period regional swings. Other than these, temps dither around zero.

  33. So the take-away message is that BEST cools the past before 1900 by about 0.4C, cools the 1900-1940 period by about 0.1C, warms the 1950-1990 period by about 0.05C.

    I missed seeing this description in all of the BEST papers. Why wasn’t the impact shown?

    In addition, what are the possible reasons that a breakpoint algorithm is going to vary so systematically through time? In theory, it should not contain a time function.

    What did the weather observers do in 1880 that recorded temperatures 0.4C higher than they really were (and that would be the average of all weather observers across the whole planet)? What changed in 1950 that made them record temperatures that were 0.05C below what they really were (again, all of them or the weighted average of them)?
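
    For readers wondering what an empirical breakpoint detector actually does, here is a minimal generic sketch (not the actual BEST algorithm): pick the split point where the two segment means differ most strongly relative to their scatter.

```python
# Generic empirical breakpoint test (a sketch, not the BEST algorithm):
# for each candidate split point, compare the means of the two segments
# relative to their pooled scatter, and report the strongest candidate.
import numpy as np

def strongest_break(series, min_seg=12):
    x = np.asarray(series, dtype=float)
    best_index, best_stat = None, 0.0
    for i in range(min_seg, len(x) - min_seg):
        a, b = x[:i], x[i:]
        spread = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        stat = abs(a.mean() - b.mean()) / spread if spread > 0 else 0.0
        if stat > best_stat:
            best_index, best_stat = i, stat
    return best_index, best_stat
```

    In practice such a test is typically applied to the difference between a station and its neighbours, so that genuine regional climate change is not flagged as a break.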

  34. richardcfromnz

    >”As we will see the “biggest fraud” of all time and this “criminal action” amounts to nothing.”

    In your dreams. One problem (of many) is the instances where the different adjustments to the same raw data don’t match just around New Zealand and Australia (let alone elsewhere) i.e. NIWA vs GISS vs BEST (NZ) and BOM vs GISS vs BEST (AU).

    I’ve already posted the NZ example upthread of Gisborne Aero NZ (in moderation at this time) where GISS adjusts but BEST doesn’t and here’s a couple in OZ:

    BEST make an adjustment to Rutherglen Research 1980:
    http://berkeleyearth.lbl.gov/stations/151882

    BOM doesn’t adjust 1980 (page 24):
    http://www.bom.gov.au/climate/change/acorn-sat/documents/ACORN-SAT-Station-adjustment-summary.pdf

    GISS neglects Rutherglen Research entirely:
    http://data.giss.nasa.gov/gistemp/station_data/

    GISS make no adjustment at all to Alice Springs:

    Raw after removing suspicious records
    http://data.giss.nasa.gov/cgi-bin/gistemp/show_station.cgi?id=501943260000&dt=1&ds=13

    Adjusted
    http://data.giss.nasa.gov/cgi-bin/gistemp/show_station.cgi?id=501943260000&dt=1&ds=14

    BEST make 8 adjustments:
    http://berkeleyearth.lbl.gov/stations/152286

    Some BEST adjustments correspond to BOM adjustments, some don’t (page 9):
    http://www.bom.gov.au/climate/change/acorn-sat/documents/ACORN-SAT-Station-adjustment-summary.pdf

    Just a few examples but certainly not the only ones. Another problem is the comparison of methodologies e.g. Auckland NZ, NIWA vs NZCSC vs BEST vs GISS:

    http://www.niwa.co.nz/sites/niwa.co.nz/files/import/attachments/Report-on-the-Review-of-NIWAas-Seven-Station-Temperature-Series_v3.pdf
    (No adj for UHI/Sheltering)

    http://www.climateconversation.wordshine.co.nz/docs/Statistical%20Audit%20of%20the%20NIWA%207-Station%20Review%20Aug%202011.pdf
    (Adj for UHI/Sheltering. This series now in the literature – De Freitas, Dedekind, and Brill, 2014)

    http://berkeleyearth.lbl.gov/stations/157062
    (No adj for UHI/Sheltering)

    http://data.giss.nasa.gov/cgi-bin/gistemp
    /find_station.cgi?dt=1&ds=13&name=auckland
    (A hilarious dogs breakfast – Auckland airport in 1879?)

    The respective methodologies just do not stand up to comparison.

    • Richard, when you say GISS make no adjustments to Alice Springs, that is slightly misleading. GISS start from the GHCN adjusted data, in which virtually flat raw data has been adjusted into almost 2 degrees of warming:

      ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/stnplots/5/50194326000.gif

      So what you describe as ‘raw’ is not raw at all.

      Alice Springs is one of the sites Paul Homewood discussed quite a bit at his notalotofpeopleknowthat blog 2 or 3 years ago.

      • Paul, it doesn’t matter how many times you raise the point; ‘raw’ means what ever Steve wants it to mean.

      • richardcfromnz

        Paul Matthews
        >”Richard, when you say GISS make no adjustments to Alice springs, that is slightly misleading. GISS start from the GHCN adjusted data, in which virtually flat raw data has been adjusted into almost 2 degrees of warming:”

        Yes, correct Paul (good, someone is paying attention). But for other stations GISS adjust the GHCN data too (i.e. GHCN is treated as “raw”), e.g. Gisborne Aero upthread, i.e. there is no consistency.

        Paul graphs the timing of GHCN (not GISS) adjustments to Alice Springs back to 1940:

        From this post:
        https://notalotofpeopleknowthat.wordpress.com/2012/03/26/alice-springsa-closer-look-at-the-temperature-record/

        Major adjustments 1975 back to 1970, down and back up a little. Major adjustment 2010.

        Now compare that to the BEST adjustments:
        http://berkeleyearth.lbl.gov/stations/152286

        1952 treated differently. 1970 – 1975 treated differently. GISS misses 1979 and 1990. 2000 – 2010 treated differently.

        The BEST adjustments only make a 0.28 C/century difference right back to 1880.

        The GISS adjustments make a 2 C difference just by 1940.

        The respective final datasets may look vaguely similar but they certainly are not in detail.

      • richardcfromnz

        Should be “The [GHCN] adjustments make a 2 C difference just by 1940.”

        Yes Paul, with GHCN we are dealing with one level of adjustments. With GISS we can be dealing with one level (GHCN) or two levels (GHCN + GISS).

        It’s nightmarish. At least with BEST there’s only one level but they still don’t adjust for UHI/Sheltering e.g. Auckland Albert Park, and overdo it in Australia compared to BOM e.g. Rutherglen 1980. If they did correct for UHI/Sheltering BEST would corroborate the NZCSC Auckland series but not NIWA’s.

        Where BEST comes unstuck in NZ is kriging from far-flung places when there is data at hand, e.g. Hamilton in the Waikato province is compiled from data in the Auckland province (e.g. Albert Park) and the Bay of Plenty province (e.g. Tauranga), neither of which is similar to Waikato climatically, because Waikato is hinterland whereas Tauranga is on the Pacific coast and Albert Park is on an isthmus between harbours on the Tasman Sea side and the Pacific Ocean side of the North Island NZ. Oddly, BEST overlooks NIWA’s Ruakura Research station right at Hamilton, which forms part of NIWA’s Eleven Station Series (11SS).

        Probably obvious, but BEST’s Hamilton series is nothing like the Ruakura Research series situated in Hamilton.

    • Set NZ to Zero.
      calculate the global average.
      See? It didn’t change.
      Why, because NZ is not the world.

      • Let me extend your argument:
        NZ fiddle doesn’t matter. Not the world. (land)
        Aus fiddle(Alice Springs, Darwin, Rutherglen) doesn’t matter. Not the world.
        US fiddle (GISS 1999-2011 upthread) doesn’t matter. Not the world.
        Central South America fiddle (Homewood) doesn’t matter. Not the world.
        Arctic fiddle (Homewood) doesn’t matter. Not the world.
        Europe fiddle (Steirou) doesn’t matter. Not the world.
        Pretty soon you run out of world.

  35. [1] The assertion that BEST chose an automated method in order to answer skeptics is a non-sequitur.

    [2] Of the data sources available, BEST is comprehensive and readily accessible. BEST deserves kudos for this. Booker and Delingpole started by questioning the records in Paraguay using Homewood’s work with GISS. Why is BEST answering? The questions were not directed at BEST.

    [3] The questions that do point at BEST show BEST to have performed the same adjustments as the other data keepers to local stations. How does showing the global land record answer these questions?

    [4] BEST labels station ‘moves’ on their charts for local stations. This was used by Ed Hawkins to imply corrections made in GHCN were not without basis. But ‘station moves’ in BEST are not really ‘moves’ – the term is more a form of short-hand. How many people know this? Ed Hawkins doesn’t know this. Even Carrick doesn’t

    [5] With the current episode, listening to BEST, Nick Stokes, Victor Venema et al – it is clear the data keepers have abandoned defending their local station products.

    [6] How do you do this:

    Show a graph with adjusted and unadjusted global temperatures and say ‘they look the same so it’s all good’
    and
    Call the guy who’s comparing adjusted and unadjusted local temperatures to say ‘they don’t look the same so it’s not good’, an idiot?

    Even if you have your reasoning lined up, if your best argument is from consequence, you’ve half lost the game. The same questions will keep popping up (and making headlines).

    • [1] The assertion that BEST chose an automated method in order to answer skeptics is a non-sequitur.

      No.
      1. People complained (remember Climategate) that CRU adjustments
      were bogus and not transparent. They were accused of fiddling
      the data. We wanted to remove the human element.
      2. People complained that Hansen was fiddling the data and wanted to
      see how he did it.
      3. People complained that NOAA had their thumb on the scale.

      So we took a “hands off” approach. Not a human sitting there subject to bias adjusting series. But rather an algorithm that asks “what is the best estimate”

      [2] Of the data sources available, BEST is comprehensive and readily accessible. BEST deserves kudos for this. Booker and Delingpole started by questioning the records in Paraguay using Homewood’s work with GISS.

      Why is BEST answering? The questions were not directed at BEST.

      Because Watts and others dragged us into it. See my link.
      Because ONE APPROACH to verifying adjustments is to use different
      data and a different method. It’s called replication, as opposed to reproducibility. Standard stuff. Shame on you.

      [3] The questions that do point at BEST show BEST to have performed the same adjustments as the other data keepers to local stations. How does showing the global land record answer these questions?

      The SCIENTIFICALLY INTERESTING question is this:
      what do adjustments do to the GLOBAL average?
      That’s important because core aspects of AGW rest on the
      GLOBAL average. For example, would Curry and Lewis CHANGE
      if you used only raw data?

      [4] BEST labels station ‘moves’ on their charts for local stations. This was used by Ed Hawkins to imply corrections made in GHCN were not without basis. But ‘station moves’ in BEST are not really ‘moves’ – the term is more a form of short-hand. How many people know this? Ed Hawkins doesn’t know this. Even Carrick doesn’t.

      Yes, the term is a shorthand. People who go through the code know this,
      as do users who ask me – I mean people of good faith who want to use the data
      and who ask me.

      [5] With the current episode, listening to BEST, Nick Stokes, Victor Venema et al – it is clear the data keepers have abandoned defending their local station products.

      Huh. We’ve been saying for a long time that if you want to look at small spatial scales, it’s best to start with the raw data. I spent months with Robert Way doing special versions of Labrador. REAL USERS, not internet keyboard jockeys, know this.

      [6] How do you do this:

      Show a graph with adjusted and unadjusted global temperatures and say ‘they look the same so it’s all good’
      and
      Call the guy who’s comparing adjusted and unadjusted local temperatures to say ‘they don’t look the same so it’s not good’, an idiot?

      Very simple. I ask the question:
      what key paper will change if I use raw data?
      Put it this way: will Nic Lewis suddenly run out and say
      “using raw data changes everything!!”?
      Will Mann suddenly redo his science?
      Will ANYBODY who has done key science redo their paper because
      raw data will change their science?
      That is one reason why we had such a hard time getting published.
      People forget that science works by FALSIFYING. So what key finding
      will change if you change the global series by .05C? No key finding.
      It’s not interesting to the CORE science. It’s interesting to technicians.

      Even if you have your reasoning lined up, if your best argument is from consequence, you’ve half lost the game. The same questions will keep popping up (and making headlines).

      And people like you will support the accusations of fraud.
      I’ll ask you again what I asked you on Twitter:
      how much do I have to cool the record to get you to stop supporting
      the accusations of fraud with your writing?

      • Matthew R Marler

        Wow! You are certainly working hard today. My hat is off to you. I have read all the way down to here, but now I have to stop.

      • Matthew, I read from the bottom up.

      • Starting right out working on the bottom can be a time saver.

      • shub:

        [4] BEST labels station ‘moves’ on their charts for local stations. This was used by Ed Hawkins to imply corrections made in GHCN were not without basis. But ‘station moves’ in BEST are not really ‘moves’ – the term is more a form of short-hand. How many people know this? Ed Hawkins doesn’t know this. Even Carrick doesn’t

        Actually I am fully aware of this. It’s only been explained a dozen times now.

    • [1] The reason why BEST was initiated is a non-sequitur to current issues.

      BEST tries to position itself as having addressed and successfully solved some of the problems with the earlier methods. It hasn’t. With construction of a global average there are only two pieces of data – temperatures and meta-data. BEST is stuck on problems of non-independent data just as every other method is – of trying to glean non-temperature information from instrument series.

      BEST attempts to solve some of these problems. It includes a large number of stations. It follows an arms-length approach to the data. But these come with their own limitations: chopping up station segments raises the question of how to align them back together and choices have to be made. The constraints simply pop back in again into the machine.

      [2] Watts did not drag anyone into anything. Watts nearly completely stayed out of the whole thing and has been dragged in by you and by the excitable Venema. Publishing a cartoon by Josh is ‘dragging’? Josh’s cartoon plots show NCDC data and he writes:

      “There has been much discussion recently about the adjustments made to past temperatures: see Paul Homewood’s excellent posts on Paraguay, Bolivia and around the world; also from Shub; Brandon at WUWT and on his own blog; and a very readable summary by James Delingpole. All very interesting.”

      You got ticked off by this?

      [3] What’s ‘scientifically interesting’ is a meaningless metric. Many a beautiful theory butchered by a cold fact. Sometimes the most boring, mundane things are the most interesting. I find it scientifically very interesting that scientists think they can adjust a local record toward an average in the process of calculating the average.

      Core aspects of AGW rest on the global average. Linear trends in the global average tell people what the average is doing. Having artificially small error bands inflates confidence in false trends and causes scientists to see trends where there are none. A little error would make for a large blessing. It would protect the core aspects from activist oversell and the subsequent backlash.

      [4] on station moves: “Yes the term is a short hand. ”

      Then don’t label your website graphs with a red diamond labeled ‘move’. It’s not a move if you don’t know for sure. Don’t know it to be? Don’t call it to be.

      ‘People of good faith’

      So you say you display a label outside for public consumption – ‘move’. The people of good faith get a different answer. I think this was a very bad move by BEST. Its moves are just another family of breakpoints, i.e., artifices of a chosen methodology.

      [5] “We’ve been saying for a long time that if you want to look at small spatial scales, its best to start with the raw data. ”

      Is that so?

      This is Tim Osborn of the CRU on local station data:

      “It turns out that you can get a good global temperature picture this way, … though the reliability at regional scales will be poor”

      Nick Stokes says:

      “In fact, the adjusted files that appear on the web should probably be held more internally. They really are a collection of regional estimates, using the station names for convenience. ”

      [6] I don’t think you understood the point. People are right to be surprised that raw stations, when adjusted, look completely different, because you are out to convince them the adjusted global average looks rather the same as the raw.

  36. Could someone on the BEST team comment on the quality of the “raw” data? How was it pre-processed before BEST got it? How much of it is really just the numbers recorded by the station keeper and how much already modified in any way?

    • I would still like a simple answer relating to the Phil Jones statement in the emails that he had ‘dumped the original data to save space at CRU’s new building’. When, how (burned, shredded, landfill…), and also a manifest of what was collected. A simple statement for the sake of history. Until there is an answer to this question, this is all just an ongoing argument over the best looking pieces of garbage. If we don’t get an explanation of the destruction we will never understand anything. It is the root of this error tree. The source of all these numbers we see today. How much was there? How was the data transferred from paper to the data sets?

    • Sure, we use only raw data. It may come as a surprise to some, but Phil Jones is not actually the curator of the world’s climate data. Rather, he kept copies from individual national MET offices.

      • Do you know just what was “dumped”? In the ’80s the press had given the impression that the world was putting all its records into a merged data set. CRU was a large part of this effort. Then in 2009 he said what he said… Yet I have never been able to find out just exactly what it was that had been dumped. Where could I find out? Thank you, Zeke.

      • Something like ‘Phil Jones had 47 tractor trailers full of various written weather records from 1765–1972 burned at Land’s End.’ It is a simple question that needs an answer. No more trouble than that.

      • What records did Phil Jones, ‘dump’?

        How much was there?

        How was “it”, disposed of?

        These are all simple, direct questions. The answers should all be easily addressed in a short statement. It does not require a consensus. Any reasonable person would freely give the facts for what they are, if known. Obviously I am not trying to steal the work, since it has already been valued as trash and dumped somehow. If a person is unable to get satisfactory answers to simple questions like this, what is the point of your whole exercise?

      • Let’s see now…

        Phil Jones,… it was taking up to much space, whatever it was.
        Eli,…his watch got busted.
        Steven Mosher,… you are not a user like me.
        Tony B.,.. Who, do you MET?
        Zeke,… You don’t know that Phil Jones, is not actually the curator of the world data.

        You need to be a physicist or a small child to understand the invisible.

        Everyone can see that… it’s as plain as the noses on your back.

  37. When you say ‘raw data’, do you mean that you use the data that was recorded in the original recordings for each station, or data from individual stations that has already been adjusted?

    • Recorded in original recordings. No adjustments are made prior to data intake: no pre-homogenization (and, where possible, not even prior quality control, though some data intake streams are pre-QCed).

      • I gave Steve a link a while back to the pdf’s of some US stations, signed off by the State official. The numbers in the State record did not match the ‘raw data’ that was used by BEST. On challenge Steve resorted to sophistry as to the meaning of ‘raw’.

      • actually Doc you compared the wrong stations.

        There is no doubt that you will find paper records that disagree with electronic records. You will find multiple records for the same site.
        You will find sites with multiple sensors, each reporting to different agencies.
        You will find paper records that are later amended because of entry errors.

        There is one school of thought that says: if we study all these documents then we can recover the “truth”.

        There is another school of thought that says: we have no time machine.
        We cannot go back in time to verify.
        All we can do is use the data we have, check it for consistency, and give our best estimate.

        When you find a way to verify that a paper record is the absolute truth, let me know.

      • Mosh

        So in essence the temperature number records are as anecdotal as the written records?

        I had hoped that you would refute the specific claims made by Homewood, Delingpole and Booker, but to date that hasn’t been addressed. Are you able to refute their claims in a straightforward manner that would convince those who are uncommitted?

        Tonyb

      • tonyb, Nope, they aren’t anecdotal, they are extremely accurate on a “global” scale.

        “% Estimated Jan 1951-Dec 1980 absolute temperature (C): 8.70 +/- 0.06

        Estimated Jan 1951-Dec 1980 monthly absolute temperature (C):
        % Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
        % 2.68 3.28 5.36 8.38 11.41 13.58 14.46 13.96 12.14 9.29 6.15 3.72
        % +/- 0.07 0.08 0.07 0.07 0.07 0.07 0.07 0.07 0.07 0.06 0.07 0.06”

        You can see from Berkeley Earth that they have the global mean real, honest-to-goodness, temperature down to 8.70 C +/- 0.06 C.

        When they did the combined land and ocean product I am sure they put that global real surface temperature, in degrees C plus or minus something, somewhere; I just can’t find it yet.

      • nottawa rafter

        Tony
        The nice article about Lamb on WUWT has a certain relevance to efforts to understand the historical record. I gained a greater appreciation for his work. Everyone is doing what they believe is right, but it still falls short.

        Lamb was correct when he said you can’t figure out the signal from CO2 if you don’t know everything else that may be going on.

      • Just not the original recordings in Rutherglen in Australia before 1965, right Zeke? Cos like, according to BEST, Rutherglen just didn’t exist before 1965, right? Quality control isn’t BEST’s strong suit.

      • “Mosh

        So in essence the temperature number records are as anecdotal as the written records?

        No; for statistical purposes I would treat them as equal.
        Then I might test trusting one more than the other.
        Or I could sample the written records, find that they are 99.9% the same
        as the digital records, and then Doc would accuse me of fraud.
        Simple.

        I had hoped that you would refute the specific claims made by homewood, delingpole and booker but to date that hasn’t been addressed. Are you able to refute their claims in a straightforward manner that would convince those who were uncommitted ?

        This post. Read it. Adjustments are not a scandal or a hoax.

        1. It’s all open.
        2. The algorithms are tested.
        3. They will warm some records.
        4. They will cool others.

        I need to prove that I’m not a criminal? Nice inquisition.

      • Steve,
        You do not have to prove your innocence, it is up to others to prove that you are guilty.

        some poetry, author unknown

        Some folks will always be with you
        Some you can never understand
        Black Sabbath never was the same
        Since Ozzie left the band.

        Could you get paid less and appreciated less for all that you have done?

      • > Black Sabbath never was the same
        > Since Ozzie left the band.

        And vice versa.

  38. My biggest concern with the adjustments is that they tend to demote the Dust Bowl era from temperature primacy. The future climates the Warmist Cult members swoon over are benign compared to the harsh conditions in the southern and western US in the ’30’s.

    We see the same nonsense with Sandy vs the Long Island Express of ’38: Sandy was a butter knife smearing across the Northeast; the LIE was a scalpel that carved off chunks of Long Island and remade the Long Island Sound. Had the LIE hit instead of Sandy, the death toll likely would have been in the thousands, not the 124 who lived in a mostly bucolic Long Island.

    I don’t see fraud (never did), and I appreciate the Berkeley-ite’s transparency, but I am still concerned about the [unintentional] revision of history it represents.

  39. I am a layman.
    I am puzzled by the fact that the effect is significantly larger in the US than, say, in Africa. It is all fine to say that the black curve more or less cancels, but I would think that it is not by accident (I mean, not the result of randomly distributed causes) that the US deviations are large; there must be some reason behind it. Could you, the experts, explain to us laymen what the dominant mechanism giving this result is?
    Thank you so much.
    Bacpierre

  40. So Robert, Zeke and Steve, you have a nice determination of global temperature, by area and globally, that covers >100 years.
    Now you have to test the calibration. You have stated here and in the past that your treatment of the data has emulated the actual temperature field, in spite of station moves and in spite of sensor changes and the way the sensors were read. It is now time to test your hypothesis to destruction.
    You are aware of the type of sensor, and the housing of the sensor, used in the USA 100 years ago. You know exactly how these sensors were read and how the data was recorded. You must, a priori, describe the minimum number of stations and their spatial distribution required to generate a temperature field.
    You reproduce, exactly, the type of stations used 100 years ago, distribute them in adequate sites, and monitor the temperatures that the reproduction thermometers record. After three years you compare your BEST-derived temperature field with the actual temperature field recorded by the historically accurate stations.
    The data for the historical sites is to be drawn from the actual paper images or authorized copies of the original data logs, and not from the easily accessible digitized datasets.
    The historically matched thermometer field is compared to that of 100 years ago, to give us the average monthly max, min and (max+min)/2, recorded in the same way, under very much the same conditions.
    You compare the differences between modern sites/sensors and historic sites/sensors that your algorithm produces, having stated, a priori, how these two types of data will be assessed for statistical significance.
    I think you have the line shape, but not the calibration.

    • Let’s not forget that modern electronic thermometers will instantly record brief fluctuations in temperature that bulb thermometers would never record…. such as that hot breeze momentarily coming across from the concrete car park. All modern high temperatures above 30C need to be dropped down by at least 2C.

      • Matti,

        I’m only familiar with airport-based systems, but these all use a 1-minute average for individual measurements so any brief fluctuation would need to last at least that long.
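
        A minimal numerical sketch of this point, assuming a simple 60-sample boxcar average (the exact averaging rules of real airport systems are not specified here, so the details are illustrative only):

        ```python
        # Toy illustration only: how a 1-minute running mean damps a brief spike.
        # A 5-second, +3 C excursion adds only 5/60 of 3 C (~0.25 C) to the average.
        import numpy as np

        seconds = np.arange(600)                         # ten minutes of 1 Hz samples
        temp = np.full(seconds.shape, 25.0)              # steady 25 C background
        temp[300:305] += 3.0                             # a 5-second, +3 C gust

        window = 60                                      # 60-sample (1-minute) boxcar
        one_minute_avg = np.convolve(temp, np.ones(window) / window, mode="valid")

        print("max instantaneous reading:", temp.max())                     # 28.0
        print("max 1-minute average:     ", one_minute_avg.max().round(2))  # ~25.25
        ```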

      • I believe they compare averages like 30 min or one hour to avoid spurious peaks. Still electronic instruments have different biases, for example electronic recorders would be limited at really low temperatures like -70C because batteries tend to drift. On the whole I doubt it is much of a problem. UHI appears to be more of a water cycle issue.

      • Cap’t, there is a big difference between knowing and believing; we can all guess as to what biases this or that instrument will give, but until you actually perform an experimental determination, it is just bias.

      • “I’m only familiar with airport-based systems, but these all use a 1-minute average for individual measurements so any brief fluctuation would need to last at least that long.”

        Such a pity that you cannot see the logical fallacy with your statement. Please take the time to think further upon it.

      • Matti,

        Nope, you’ll have to enlighten me. But please note that all the systems I work on are designed to WMO climatic standards, in order to avoid exactly the ‘brief fluctuations’ you mention.

      • Sorry Jonathan, but any “1-minute average” will include any fluctuations and can read higher. Are you aware that with UHI there can be spikes of more than 10 degrees??

      • Matti,

        Ah, now we’re getting into a different kettle of fish. The systems with 1-minute averaging provide very smooth temperature records; I have yet to see a sudden fluctuation or jump within the daily records. Now in theory, as you suggest, brief spikes are bundled up into the 1-minute averages, but if that were a major factor I would still expect to see less stable traces. Which I don’t.

        As I understand it, the adjustments that are applied between readings from bulb thermometers and electronic ones are based on extensive field trials, which would of course then allow for exactly the effect you mention. I haven’t looked into the details of the field trials and adjustments; if anyone has some data on them I would be interested.

        I would expect the effect of sudden fluctuations to be more pronounced at stations where UHI is a problem, and UHI does concern me, but pending release of Anthony Watts’ work-in-progress paper on this I don’t have any robust data to go on. Without going into details, Mosher’s previous defences of BEST against the problem of UHI passed the smell test at the time.

    • “So Robert, Zeke and Steve, you have a nice determination of global temperature, by area and globally, that covers >100 years.
      Now you have to test the calibration. You have stated here and in the past that your treatment of the data has emulated the actual temperature field, in spite of station moves and in spite of sensor changes and the way the sensors were read. It is now time to test your hypothesis to destruction.

      You are aware of the type of sensor, and the housing of the sensor, used in the USA 100 years ago. You know exactly how these sensors were read and how the data was recorded. You must, a priori, describe the minimum number of stations and their spatial distribution required to generate a temperature field.”
      1. Actually we don’t know for sure ANY of that.
      2. We know the type of sensor reported in a document, not the
      actual sensor in the field.
      3. We know the type of shield reported in a document, not
      the condition of the actual shield in the field.
      4. We know the INSTRUCTIONS to the observer; we don’t know
      that he followed them.
      You are not skeptical ENOUGH.

      “You reproduce, exactly, the type of stations used 100 years ago, distribute them in adequate sites, and monitor the temperatures that the reproduction thermometers record. After three years you compare your BEST-derived temperature field with the actual temperature field recorded by the historically accurate stations.”

      Actually a better approach, since our field is a prediction of the temperature AT PLACES WHERE IT WASN’T RECORDED, is to place new sensors at random places and test the prediction. The standard error of prediction on a monthly measurement is something on the order of 1.5 C. The other way to do the same thing is to hold out stations. Been there, done that. (A toy sketch of this hold-out check follows this comment.)
      #########################
      “The data for the historical sites is to be drawn from the actual paper images or authorized copies of the original data logs, and not from the easily accessible digitized datasets.”

      You realize that the paper copies are not QC’d.
      You realize that a single site can have multiple records.

      However, if you want to disprove the work we did, go get the paper copies
      for the whole world. Find the errors. Report them. They won’t amount to anything.

      ####################################
      “The historically matched thermometer field is compared to that of 100 years ago, to give us the average monthly max, min and (max+min)/2, recorded in the same way, under very much the same conditions.
      You compare the differences between modern sites/sensors and historic sites/sensors that your algorithm produces, having stated, a priori, how these two types of data will be assessed for statistical significance.
      I think you have the line shape, but not the calibration.”

      We don’t aim at getting the calibration correct. In fact the error on the temperature field is much higher than the error on the anomaly.
      It’s the trend we care about; it’s the CHANGE we care about. For the absolute calibration to true temperature you’d have to trust way too much stuff that is questionable: basically historical documents.
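
      For readers who want to see the shape of that hold-out check, here is a toy sketch. It uses simple inverse-distance weighting on made-up data, which is purely an assumption for illustration; it is not Berkeley Earth’s kriging and the numbers are not theirs:

      ```python
      # Withhold each station in turn, predict it from the others, score the error.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      lat = rng.uniform(30, 50, n)
      lon = rng.uniform(-120, -70, n)

      # Synthetic "truth": a smooth regional anomaly field plus station noise.
      field = 0.5 * np.sin(np.radians(lat) * 3) + 0.3 * np.cos(np.radians(lon) * 2)
      obs = field + rng.normal(0, 0.3, n)

      def predict_without(i, power=2.0):
          """Inverse-distance-weighted estimate of station i from all other stations."""
          d = np.hypot(lat - lat[i], lon - lon[i])
          d[i] = np.inf                              # exclude the held-out station
          w = 1.0 / d ** power
          return np.sum(w * obs) / np.sum(w)

      errors = np.array([predict_without(i) - obs[i] for i in range(n)])
      print("hold-out RMSE:", round(float(np.sqrt(np.mean(errors ** 2))), 2))
      ```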

      • “It’s the trend we care about; it’s the CHANGE we care about.”

        Why is something so simple so difficult for some to get?

      • No cigar. You either want to establish the DeltaT or you don’t.
        The only way to do that is to emulate the measurements used in the past; all else is arm waving.

        That your ‘raw’ and actual ‘raw’ are different and you are uninterested is most reassuring.

      • I dont know joshua.

        Since there is NO calibration record, ACCURACY is going to be hard or impossible to verify.
        But we can still get the trend.

        shakes head..

      • hidethedecline

        Mosher doesn’t trust historical documents. Mosher says recorded history is questionable. This is the nub of climate science – it’s just a theory completely divorced from actual local historical weather.

        I’d suggest that the likelihood that any historical record has ever falsified the weather is somewhere between zero and zip. Think about this for a moment like a lawyer thinks about evidence. Records A, B and C show BEST’s amazing hands-off, no-human-interference algorithm has got the 1930s and 1940s wrong by at least a degree, say. What possible reason would a recorder of history, on any topic, anywhere in the world, ever have had to falsify the record of the weather happening at that time? Horse racing meeting records, football game records, cricket game records, Farmer John’s rainfall records etc etc, reports about a school fete, hill climb races, Cinco de Mayo festivals, the literature of the day, paintings by landscape artists, photographs of weddings on Sydney Harbour etc etc, and yes, even the notes of the guy monitoring the weather station at location A, whether it was a Stevenson screen or a glass of mercury on a stick. Every one of those records, wherever they were taken, will have noted the weather and there’s no reasonable basis to doubt them. History is just day-old+ news.

        Mosher thinks we should be sceptical of history. Seriously, mate, read more history, comment less.

      • I didn’t ask for a calibration record; you are again using language to make the world a more muddled and confused place.
        You claim that you are able to construct a temperature field for the year 1900, in a particular locale, based on past measurements, and that you will be able to construct a temperature field for the year 2016, in the same locale, using the present temperature sensor set-up.
        I do not believe that the two temperature fields are necessarily comparable, as I have no faith in your temperature field construction. I have no faith because of your lack of controls. I therefore suggested a control that I believe is valid if one wants to be able to compare a temperature field of the past with one of the present. The control is to run the present-day network over a locale, and at the same time emulate the type of measurements recorded in 1900. Compare the temperature field of 2016 taken from the existing sensor system to the temperature field generated by max/min mercury thermometers in 1900-era screens.
        That you are so intellectually arrogant that you refuse to contemplate the use of internal controls cautions us that anything you generate as a work product must be viewed as biased.

    • Jonathon,

      Averages will always be smooth. There is no teaching a true believer, so just forget it.

  41. Can someone tell me what is to be done with the actual temperature values of the 1930s in the US if they were records, given the modern adjustments that have taken place since then?

    Are we to assume that if we stuck the same thermometer out at the right time in the same place in the present, we definitely should have seen a higher record temperature at some point?

    I’m trying to reconcile the accepted adjusted increasing temperature anomaly trend with the companion downward trend of reduced “days > 90°”, having peaked in the ’30s (for example, at many stations in West Virginia).

    How can any “record” high for an area be accepted if things have moved around and changed since the last time it was that hot…..?

    • Hi morebrocato,

      Original data is of course kept, but you need to be careful with records if your station has moved or your instrument has changed. For example, a typical city weather station was on top of a building roof prior to 1940 or so (a location that tends to be pretty hot!), after which it was moved to a grassy patch on a newly built airport or wastewater treatment plant (where temperatures tend to be a bit lower). In the 1980s the old liquid-in-glass thermometer was likely replaced with a new MMTS electronic instrument, which is more accurate but (unfortunately) tends to read maximum temperatures about half a degree lower than the old mercury thermometer.

      The adjustments we are discussing are an attempt to deal with all of these changes (and any station that’s been around for 100 years has had multiple moves and generally at least one instrument change).
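
      For readers wondering what such an adjustment looks like mechanically, here is a bare-bones sketch: it estimates a documented step change from a difference series against a neighbor. This is only an illustration of the idea, not the Berkeley Earth method (which slices the record at the break rather than shifting it), and the data and the 0.5 C step are invented:

      ```python
      # Toy example: a station with a known instrument change in 1985 that
      # introduces a -0.5 C step, exposed by differencing against a stable neighbor.
      import numpy as np

      rng = np.random.default_rng(1)
      years = np.arange(1900, 2000)
      regional = 0.005 * (years - 1900)                 # shared regional signal, C

      neighbor = regional + rng.normal(0, 0.2, years.size)
      station  = regional + rng.normal(0, 0.2, years.size)
      station[years >= 1985] -= 0.5                     # the artificial step

      diff = station - neighbor                         # regional signal cancels
      step = diff[years >= 1985].mean() - diff[years < 1985].mean()
      print("estimated step:", round(step, 2))          # close to -0.5

      adjusted = station.copy()
      adjusted[years >= 1985] -= step                   # align the later segment
      ```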

      • “which is more accurate but (unfortunately) tends to read maximum temperatures about half a degree lower than the old mercury thermometer.”

        This is an outright lie! A modern electronic thermometer will instantly record any brief fluctuation in temperature which a bulb thermometer never would…. such as a breeze momentarily bringing all that car park hot air across.

      • Thanks Zeke!

        I feel like I’m still at a bit of a conundrum with this though. Based on your response, is there a dirty little secret that the 1930s ‘should not’ have been as hot, or that all these extreme events are likely contaminated? Doesn’t the lion’s share of sites that stretch back that far require adjustment in some way? (Time-of-day, siting change, etc.)

        Is it postulated rather that if we had maintained what we had back then all the way to the present, we’d have seen even more impressive extreme temperature records?

        If either one of these is true, then why not a legion of asterisks applied to extreme max temperature records that exist from 80 years ago? Too much wading in the weeds? Something only for scientists to understand, and not laymen?

      • Zeke thanks so much for this comment because it reveals how completely misguided BEST and the rest are. So what if the temp measure was on the roof in the 1940s? Doesn’t mean it wasn’t hot. It was still hot in the 1940s. BEST has adjusted the actual heat away.

      • I find remarks like this baffling:

        This is an outright lie!

        I don’t know too much about the mechanics of the temperature sensors being discussed. I can accept the possibility someone might show what Zeke Hausfather said about them is wrong. What I can’t accept is the idea that he is lying about it. Even if I thought Zeke a dishonest person who’d be willing to lie about such things, why would he lie about something so easy to check?

        Another remark I find baffling is:

        If all you can manage is blog posts Zeke, don’t bother.

        I’ve said quite a few things in blog posts. I believe some of them have been somewhat important. I don’t understand why them being said in a blog post would somehow mean they shouldn’t matter. Blog posts are just a medium. Like any medium, they can convey good material or bad material. They can convey high quality or low quality information. They can contribute as much or as little as anything else.

        What I find most baffling about remarks like these is they have nothing to do with positions. The same sort of remarks can be made by anyone, about practically any argument. What’s the point of making them? Why say something anyone could say about anything? How does it contribute to anything?

        I don’t get it. I wish people, no matter what side of the argument they may be on, would reject things like these.

      • Matthew R Marler

        Brandon Shollenberger: I wish people, no matter what side of the argument they may be on, would reject things like these.

        I support your defense of Zeke Hausfather. I know that sounds pretentious, tendentious and such. But really that guy went too far.

      • I am sorry for Brandon and others who seem incapable of comprehending the presence of absolute piffle.

  42. What a wonderful rebuttal of Booker and company.
    It is oh so compelling.
    However there is one little problem with it, which everyone so far has been too polite to mention.
    The problem is that the results you present are using BEST algorithms for the RAW Global Temperature Trend.
    We do not know what effect that has on what you insist is the RAW data, trend-wise.

    But of course we do know, because Mr Hausfather made a very big mistake in the post on WUWT rebutting the original Steve Goddard posts.
    It was entirely glossed over at the time, but I was totally shocked by it. Let me remind you of what the GHCN RAW data REALLY looks like according to Mr Hausfather.

    Now does this graph presented by Mr Hausfather look anything like the RAW data trend presented by BEST?
    What happened to the 1.5 Degree C step up in temperature around 1950, what happened to the 1.5 downward trend after that, what happened to the 1.8 degree C step up around 1992 and the subsequent almost 2 degree drop around 2000?
    Is Mr Hausfather now going to deny the graph of GHCN Raw data that he presented to rebut Steven Goddard?

    • Hi A C Osborn,

      That graph is an illustration of why simply averaging raw temperatures when your network composition is changing over time is not a viable way to estimate a global temperature. Using anomalies (or the Berkeley approach, which is effectively similar) avoids that issue. See this discussion for an explanation of the basics: http://rankexploits.com/musings/2010/the-pure-anomaly-method-aka-a-spherical-cow/

      However, Goddard’s misguided averaging of absolute temperatures is somewhat immaterial to the discussion at hand.
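
      A minimal sketch of the network-composition problem the linked post describes, with two made-up stations (the numbers below are invented purely to show the arithmetic):

      ```python
      # Averaging absolute temperatures over a changing station network creates
      # a spurious jump that the anomaly method avoids. Two stations with the
      # same trend but different climatologies; the cold one stops reporting in 1980.
      import numpy as np

      years = np.arange(1950, 2010)
      trend = 0.01 * (years - 1950)                 # +0.01 C/yr at both stations

      warm = 20.0 + trend                           # valley station, ~20 C mean
      cold = 5.0 + trend                            # mountain station, ~5 C mean
      both_report = years < 1980                    # cold station drops out in 1980

      # Naive average of absolutes: jumps ~7.5 C in 1980 with no real climate change.
      naive = np.where(both_report, (warm + cold) / 2, warm)

      # Anomaly method: express each station relative to its own 1950-1979 mean first.
      warm_anom = warm - warm[both_report].mean()
      cold_anom = cold - cold[both_report].mean()
      anom = np.where(both_report, (warm_anom + cold_anom) / 2, warm_anom)

      print("naive jump at 1980:  ", round(naive[30] - naive[29], 2))   # ~7.5
      print("anomaly jump at 1980:", round(anom[30] - anom[29], 2))     # ~0.01
      ```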

      • Sorry, you do not get out of it that easily.
        If that is what the simple RAW data looks like when it is averaged, then THAT IS WHAT IT LOOKS LIKE.
        Carrying out Homogenisation, Dicing & Splicing, TOBS and the Gridding or Kriging or whatever you use has lost the original simple data.
        If you showed the original data and the Final Product that would be understandable, but you don’t.
        Those upward Steps were produced by something; it was not the change of a few stations, which could not possibly have that much effect. So your work has lost that valuable data.

  43. I pretty much accept what you’ve said here, thanks for the explanation. It does raise some questions for me though. I take it you accept what Homewood has discovered in that Puerto Casado has been adjusted upward by one degree, since you didn’t address that specifically. He is also looking at upward adjustments elsewhere. Previously others have pointed to upward adjustments in New Zealand and Australia. Obviously there have been upward adjustments in some places and, like you said, downward adjustments made in Africa and elsewhere. It apparently results in a net zero effect, as you state. So what is the point of doing any adjusting at all? It just seems to raise suspicion and I don’t understand why you do it anyway? It also introduces the question of bias. Since all of the land instrument databases are controlled by warmists, it doesn’t take much imagination which way that bias would go.

    • I might also add: if the problem with all of the raw data is from the urban heat island, shouldn’t all the data be adjusted downward? If the adjusted data is net zero, wouldn’t that mean it is still running hot?

    • The question you should be asking is this:
      Why didn’t Homewood present any cases where we cool the record?
      GISS cools the record as well. Why did he avoid those?

      Next:

      “So what is the point of doing any adjusting at all? It just seems to raise suspicion and I don’t understand why you do it anyway? It also introduces the question of bias. Since all of the land instrument databases are controlled by warmists, it doesn’t take much imagination which way that bias would go.”

      1. We did adjustments because people thought GISS and others were doing diabolical things. What we showed is that our global record matches their global record. And using ALL the data, adjustments amount to very little.
      2. We do adjustments because some people want the best estimate we can come up with.
      3. We do adjustments for the same reason folks wear glasses.

      “Since all of the land instrument data bases are controlled by warmests, it doesn’t take much imagination which way that bias would go.”

      Wrong. You are aware that the Ocean database is “controlled by warmists” and the adjustments are a LARGE cooling. 70% of the earth’s data is COOLED by adjustments and 30% (the land) is largely unaffected.

  44. Or compare the Mr Hausfather USA graph with their USA Raw Trend

      • Now the global chart next to the U.S chart looks very weird.

        1.The gap between adjusted and raw data is much bigger for the U.S. data.

        2.There is no significant gap in the global data after about 1975, while there is still a significant gap in the U.S. data up to the mid 1990s.

        3.Looks like the U.S. was on a different planet with a different climate from about 1930 to 1960.

        What’s up with this stuff? Are we to believe that the U.S. raw data has required more adjustment than the global data? Sum Ting Wong, here.

      • The global raw data before 1940 was likely useless.

      • I gotta agree with Don here. The notion that US measurements require more adjusting than the rest of the world just doesn’t pass the laugh test.

        I don’t want to get hung up on the adjustments stuff (for the reasons in my comments down thread) but stuff like this sticks out like a sore thumb.

      • The U.S. had big systemic adjustments in the 1960s-1990s (time of observation changes, instrument changes, etc.) that were much less common in the rest of the world. Much of this is due to the fact that the U.S. network was largely volunteer-run, while most other countries have the (generally smaller) network run by the MET office.

        Again, this is discussed in great detail here: http://www.judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/

      • The U.S data had significantly more adjustment, from the start of the chart 1900.

        Most of the global data is sea surface, polar ice caps, sparsely inhabited desert etc., isn’t it? Why should U.S land data need more adjustment than that sketchy data, especially in the earlier years? I will look at your reference to previous discussion.

      • OK, I get why the U.S. data is adjusted. But the U.S raw data that needed so much adjustment was of better quality than the data from the rest of the planet, with large parts having no data at all. Wasn’t it? Doesn’t this tell us that the data from the rest of the planet, particularly before about 1940, is rather suspect, to put it mildly. I am guessing that data from the U.S. and a few other civilized countries, using some fancy kriging from coastal and island stations, is a better representation of world temperature than the so-called global data. If I cared as much about this stuff as some people do, I would investigate that theory:)

      • Why are you questioning your own Graph, don’t you believe you could have cocked it up so badly or something?

      • “But the U.S raw data that needed so much adjustment was of better quality than the data from the rest of the planet, with large parts having no data at all. Wasn’t it? ”

        I don’t think it’s settled science that the US has the best network.
        It has a lot of stations.
        But many of them were run by volunteers in the past.
        That’s why we have TOBS.

      • “with large parts having no data at all.” I should have said most of the globe having no data at all. No need TOBS on no data.

        “I am guessing that data from the U.S. and a few other civilized countries, using some fancy kriging from coastal and island stations, is a better representation of world temperature than the so-called global data.”

        I am suggesting that for pre-1940 global temp estimation. You can easily do that:)

        You are swatting a lot of mosquitoes today. It’s almost cocktail hour. You can take a break. They will still be here when you get back.

  45. As a matter of curiosity, have there been any attempts to characterize the ‘noise’ in temperature data? I believe the historic record consists mostly of daily high and low integers and that their average is used to construct series for subsequent regression analysis. From the git-go, half the available information is discarded.

  46. Pingback: Climate Denial Empire Strikes Back with Bogus Temperature Story | Climate Denial Crock of the Week

  47. Urban heat effect Zeke 15/11/2012
    21% rise since 1985 and 9% since 1960
    Homogenisation used to reduce this difference from 1930 USHCN onwards.
    NOT TO REMOVE IT.
    So current temps from city records incorporate a known 21% urban heat effect but this is OK because you gradually reduce it back to insignificance by 1930.
    Hence warming the present by “real” measurements and cooling the past, twice in fact.
    Congratulations Mosher, on your Hand on heart statement. ” I checked the figures and could not find a problem with the UHE.”
    Guess it depends on what you were checking?

    Only half removed prior to 1930 so a second homogenisation done by NASA GISS to remove the other 50% prior to 1930.
    Finally the homogenised Rural subsets compared to the homogenised Urban heat islands is “sufficient” to limit the effect of using these homogenised heat islands.
    Problems
    Zeke, this could be interpreted as you leaving the heat island effect in all the urban data you have used and continue to use.
    Homogenising wrong data is not the same as removing it. Alleviating it with rural data strongly suggests you left the heat in instead of removing it,
    Doing a second ” remove ” or adjustment pre 1930 suggests that you dropped those pre 1930 temperatures, possibly only the rural ones, but probably both way down.
    Finally, it makes your comment that the raw and adjusted data are comparable in the past totally laughable. How can there possibly be any similarity between raw and adjusted USHCN figures with such massive, self-admitted adjustments?

  48. Cool, thanks. I do like the idea of comparing variances between models and datasets. People get really hung up on point values alone, but variances and higher order statistics have an important story to tell as well.

  49. One question for the authors, please.

    I understand that BEST was designed to deal with global metrics but I wonder about the value of your data on smaller scales. I am specifically interested in the temperature trends in New York State and have argued that BEST data are appropriate to use. Any thoughts?

    • Hi rogercaiazza,

      At a state-level Berkeley Earth provides a high-resolution (quarter degree) dataset that might be of use. Series like Berkeley may not be as useful for an individual city, as non-climatic factors local to the city (e.g. a very real local heat island effect) may have been removed as they are not representative of the larger region.

      Here is the quarter-degree US dataset: http://berkeleyearth.lbl.gov/auto/Global/Gridded/CONUS_TAVG_LatLong0.25.nc
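
      A minimal sketch of how one might pull a rough state-level series out of that file, assuming Python with xarray installed. The variable and coordinate names ("temperature", "latitude", "longitude") and the bounding box are assumptions, so check `ds.data_vars` against the actual file; a proper state average would also need a polygon mask and area weighting:

      ```python
      # Rough sketch: open the Berkeley quarter-degree CONUS file and average a box.
      import xarray as xr

      # Download CONUS_TAVG_LatLong0.25.nc from the URL above first.
      ds = xr.open_dataset("CONUS_TAVG_LatLong0.25.nc")
      print(ds.data_vars)                    # confirm the actual variable names

      # Approximate New York State bounding box (not a true state mask);
      # flip the slice bounds if the coordinates are stored in descending order.
      box = ds["temperature"].sel(latitude=slice(40.5, 45.0),
                                  longitude=slice(-79.8, -71.8))
      ny_series = box.mean(dim=["latitude", "longitude"])   # unweighted box average
      ```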

      • Thank you!

      • Don’t listen to that without testing it for yourself. BEST can’t even model (and that is what it is doing) the UK, as I proved to Mosher on another post. It can’t handle coasts and Islands with prevailing winds and Ocean Currents.
        Also Mosher made the most outrageous statement that, given the Latitude & Elevation, he could give the correct Temperature of any location to within 1.5 degrees C with 93% confidence.
        When I asked him to prove it he ignored the question.
        He also said and I quote
        “Steven Mosher | July 2, 2014 at 11:59 am |

        “However, after adjustments done by BEST Amundsen shows a rising trend of 0.1C/decade.

        Amundsen is a smoking gun as far as I’m concerned. Follow the satellite data and eschew the non-satellite instrument record before 1979.”

        BEST does no ADJUSTMENT to the data.

        All the data is used to create an ESTIMATE, a PREDICTION

        “At the end of the analysis process,
        % the “adjusted” data is created as an estimate of what the weather at
        % this location might have looked like after removing apparent biases.
        % This “adjusted” data will generally to be free from quality control
        % issues and be regionally homogeneous. Some users may find this
        % “adjusted” data that attempts to remove apparent biases more
        % suitable for their needs, while other users may prefer to work
        % with raw values.”

        With Amundsen if your interest is looking at the exact conditions recorded, USE THE RAW DATA.
        If your interest is creating the best PREDICTION for that site given ALL the data and the given model of climate, then use “adjusted” data.

        See the scare quotes?

        The approach is fundamentally different from adjusting series and then calculating an average of the adjusted series.

        Instead we use all raw data. And then we build a model to predict
        the temperature.

        At the local level this PREDICTION will deviate from the local raw values.
        It has to.”

      • Mosher, I don’t care about your papers. BEST doesn’t work for the UK; how many other places with similar climatic conditions doesn’t it work for?
        And you still haven’t answered my question: when 3 places on the same Lat and elevation vary by over 10 degrees C, how can you justify your outrageous statement?

      • “And you still haven’t answered my question: when 3 places on the same Lat and elevation vary by over 10 degrees C, how can you justify your outrageous statement?”

        Simple. You forgot to remove seasonality. You forget that I’m talking about the average error of prediction. While 3 places may differ by 10 degrees, 100,000 places will differ by tiny amounts.
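
        A small sketch of what “remove seasonality” means here, using made-up data: two nearby sites whose absolute temperatures differ by several degrees still track each other closely once each is expressed as a departure from its own monthly climatology (the series below are synthetic, not real stations):

        ```python
        # Two sites sharing weather but with different means and seasonal amplitudes.
        import numpy as np

        rng = np.random.default_rng(2)
        months = np.arange(240)                              # 20 years of months
        seasonal = 10 * np.sin(2 * np.pi * months / 12)      # shared seasonal cycle
        weather = rng.normal(0, 1.0, months.size)            # shared regional weather

        site_a = 8.0 + seasonal + weather + rng.normal(0, 0.3, months.size)
        site_b = 14.0 + 0.8 * seasonal + weather + rng.normal(0, 0.3, months.size)

        def monthly_anomaly(series):
            """Subtract each calendar month's own long-term mean (the climatology)."""
            out = series.copy()
            for m in range(12):
                idx = (months % 12) == m
                out[idx] -= series[idx].mean()
            return out

        print("max absolute difference:", round(np.abs(site_a - site_b).max(), 1))
        print("anomaly correlation:    ",
              round(np.corrcoef(monthly_anomaly(site_a), monthly_anomaly(site_b))[0, 1], 2))
        ```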

      • It’s Mulligan Stew. What you can fish out of it may not look at all like what you put into it, in fact, it may not be the same thing at all. But, bon appetit!
        ==================

      • Steven Mosher | February 12, 2015 at 12:19 pm |
        Simple. you forget to remove seasonality. You forget that I’m talking about the average error of prediction. while 3 places may differ by 10 degree, 100000 places will differ by tiny amounts.

        JC SNIP, you do not have 100,000 places with Weather stations in the whole world, let alone on the same Lat and Elevation.
        You do talk some absolute crap at times.

        2 locations just in the UK, 40 miles apart either side of the Pennines, had a 10 Degree C temperature differential on one day.
        Your Algorithm could not possibly handle it and neither can your original statement.

      • AC, you still don’t get it.

        You have an infinite number of places you can check.

      • Mosher, no, it is you who “doesn’t get it”.
        There is nothing wrong with the original data, but your algorithm will change it.

    • rogercaiazza, ” I am specifically interested in the temperature trends in New York State and have argued that BEST data are appropriate to use.”

      Climate Explorer has a mask that includes states that you can use to compare different interpolation/adjustment versions.

      http://climexp.knmi.nl/selectfield_obs.cgi?someone@somewhere

      This is one for Georgia.

    • rogercaiazza,

      There appears to be a consensus for New York State

  50. All data sets are problematic and uncertain.
    Land/Ocean indices are in the same ballpark considering MSU, Ocean, Raob.
    Homogenization can’t fix UHI.
    All temperature trends since 1979 are less than model and IPCC4 predictions.

  51. As has been mentioned, the premise of this article from Berkeley Earth (“Christopher Booker’s recent piece along with a few others have once again raised the issue of adjustments to various temperature series, including those made by Berkeley Earth”) is completely wrong.

    The Booker article in the weekend UK Daily Telegraph (still at http://www.telegraph.co.uk; last I saw, comments were an unprecedented 20,000-plus) has no reference at all to BEST. It is entirely based on Paul Homewood’s (www.notalotofpeopleknowthat.wordpress.com) analyses and observations of NASA, NOAA and GISS data.

    It’s interesting but not very useful for BEST to provide their own explanations. A full and detailed response needs to come from NASA/NOAA/GISS.

    • Note that I linked to one of the others who made an issue of BEST.

      Booker raised the issue WRT NOAA; OTHERS (see the link) made the issue about BEST as well.

      Read harder; comment less.

      • shub niggurath

        BEST enters the picture only because Ed Hawkins and others used it to defend adjustments made to a station in Paraguay. I showed their moves were not ‘moves’. Delingpole mentioned my article.

        Berkeley et al are welcome to defend the practice of adjustments but no one directly involved them.

      • Data Adjustments Save Models !

        Temperature adjustments are minor compared to adjustments made by space “scientists” to hide the fact that the Sun selectively moves lightweight elements and lightweight isotopes of each element to the photosphere and solar wind.

        The curved line defined by data points from analysis of Apollo lunar dirt was converted into a flat, horizontal line to confirm the Standard Solar Model’s prediction: The interior of the Sun must be composed of H and He just like the Sun’s outer photosphere:

        http://www.omatumr.com/Data/1983Data.htm

      • No shub, you and Anthony piled on,
        and then Morano and Drudge pulled up an old quote by Anthony that this was criminal.

        In other words we were getting smeared. I get mail from users.

        Can’t just stay quiet when people are playing a guilt-by-association game.

        You wouldn’t.

      • Here, shub:

        http://wattsupwiththat.com/2015/01/31/saturday-silliness-the-best-adjustments-of-temperature/

        We literally enter the picture.

        Now, I had the common sense to call out bad behavior on the part of Jones and Mann in Climategate.

        Yet you remain silent. Thanks for helping with the smear job.

      • I did not pile on and you cannot pin such things on my head.

        Ask whoever’s getting worked up about cartoons to take it easy. Not a good time to be getting worked up about caricatures.

  52. Zeke,
    A comparison of adjusted and raw series is of no interest if you do not know the average length of the series. How long is it?

    • Hi Phi,

      It’s common to start land temperature series in 1850, as that is a point where there is reasonable global coverage. You can go earlier, but the error bars increase pretty rapidly. Both the raw and adjusted series are calculated over the same period.

      In terms of average length, I don’t know offhand, but probably on the order of 20 years or so? We are dealing with 40,000 stations, and while some have nice long records, many are much shorter. However, even if you only use long stations (and ignore the short ones) as folks like NCDC or Hadley do, you get similar results.

      • Thank you for your reply.
        With an average of 20 years, it is quite logical that adjustments are weak or nonexistent. The real adjustments are made between series but are invisible.
        Do you know the average length of the series for GHCN (I think it must be of the same order of magnitude)?
        CRUTEM uses, in principle, the homogenized data from national offices. These data are generally based on long series, depending on the country probably from 60 to 150 years of average length. Adjustments are fairly steady in these cases, at about 0.5 °C per century.

      • Hi Phi,

        GHCN uses only stations with a complete record during their common anomaly period (1961-1990). The average record they use is 75 years long.

      • Erm, I meant NCDC, not GHCN per se (though NCDC uses GHCN).

    • Assumes that continuous long records are better.
      bad skeptic

  53. I don’t see many fish in this barrel.

  54. All that any of this demonstrates is just how desperate the warmists are.

    Some facts:

    1. There is no statistically significant difference between the warming rates of the late 1800’s, early 1900’s and between 1978 and 1998 (Professor Phil Jones, CRU East Anglia). Thus there is no “signature” for CO2 “forcing” in the temperature record. It has not and cannot be measured.

    2. The planet cooled between 1940 and 1975, all the while that anthropogenic CO2 levels were escalating wildly. Why did it not warm according to the AGW/CO2 hypothesis?

    3. 1/4 of the total of anthropogenic CO2 emissions since the beginning of The Industrial Revolution have occurred since 1998, yet the temperature trend since 1998 has remained flat. Why has it not continued to warm?

    These three points alone completely falsify the AGW/CO2 hypothesis, let alone other well known factors crucial to the hypothesis.

    We constantly hear cries of “Hottest year evah!!!”, yet we are talking in thousandths of a degree and factors which are not statistically significant.

    Grow up guys, AGW/CO2 theory is a crock!

    • Well said Matti Ressler. The 3 points you make clearly falsify the CAGW/CO2 meme and show it to be a hoax. Emphasis on the C in CAGW.

      As I understand it, the AGW/CO2 hypothesis is (as repeated many times by Richard Lindzen) that “all other things being equal” a doubling of CO2 in the atmosphere will lead to a 1 deg temperature rise. This is a “so what” hypothesis that may or may not be contradicted by your 3 points.


    • AGW/CO2 theory is a crock!

      ehh….. I’m not there.

      Observed trends are all less than modeled, but they’re all positive, also.
      The AGW result may be exaggerated ( perhaps negative feedbacks ).
      AGW impacts may be insignificant to human and ecosystem well being.

      But it’s probably real.

      • No Lucifer. Unless you wish to contend that the warming of the late 1800’s and early 1900’s was not natural (you are very alone there), then you cannot in any way show CO2 “forcing” in the temperature record.

      • you cannot in any way show CO2 “forcing” in the temperature record.
        Sure you can – temperature trends are all positive since 1979, less than modeled but greater than zero, consistent with theorized increase in radiative forcing. That’s not the end of the story, of course, the atmosphere continues to churn. But zero? No.

      • “Sure you can – temperature trends are all positive since 1979, less than modeled but greater than zero, consistent with theorized increase in radiative forcing. ”

        This is a complete crock Lucifer. You must now contend that the warming of say the late 1800’s was induced by CO2 forcing. You are totally lost with that argument.

        To show “forcing” the temperature trend must be greater than natural warming. It’s not.

      • “temperature trends are all positive since 1979”

        Absolute piffle! Temperature trends were positive from 1979 to 1998. We had a WHOPPING warming of 0.3 degrees in that time, for which we are all supposed to run around squawking “The sky is falling… the sky is falling”. Since 1998 the trend has remained FLAT. You do understand statistical significance, do you not?… like 3 thousandths of a degree is NOT.

      • You don’t understand statistical significance.

        The trends since 1998 do not exclude the “about 0.2 C per decade trend”

        So you fail in your falsification attempt.

        So try harder

      • “about 0.2 C per decade trend”

        trend since WHEN? You only have since 1950 and there is no way on Earth you can show a 0.2C warming trend since 1950. No way.

        That is 64 years, sonny, with less than 1/3 of that showing any warming trend at all. That was 20 years with a warming trend of 0.15 degrees. From where do you conjure your 0.2 degrees, Mandrake?

        There is nothing to falsify, it is piffle.

  55. gallopingcamel

    Matti Ressler,
    Thanks for seeing the big picture (the dog) while these learned gents study the fleas.

  56. Here is a question for Mr Zeke and Mr Steve from a layperson who graduated many moons ago with NO knowledge of science. All that I have learned I owe to Steve McIntyre and more recently to Climate etc.
    I have always felt that BEST had science’s best interests foremost. (I never doubted its accuracy the way I did with GISS/NCDC/NOAA/HADCRU.)
    What is the value of a global temperature? I am not saying that there isn’t one, just that I cannot see it. I see perhaps that after having several thousands of years of world data we could start to prove whether or not the world has an equilibrium system, or detect a sudden rise of several degrees in temperature… (but we would see and feel that long before there was statistical proof). But global temp x is made up of tens of thousands of local temps. A world where the entire Australian continent, huge chunks of Canada and Africa, and the Middle East were in extreme drought with very high temps could have the same GT as a world where Australia et al had cool temps and plenty of rain but a whole lot of other places had slightly elevated temps. Averaged out, the GT could be the same. Until we have regional/sub-continental temps to work with and can make accurate predictions, I cannot see that GT is useful for climate policy.
    I am ready to learn….

    • This is a good point. I’ve never understood the total emphasis on a “global temperature”. The main concern of the world’s population, if there is one, would be how and whether the local or national climate where they are is changing, or is expected to change, and why.

      It must be possible to produce comprehensive temperature records and trends from the actual reported national temperature data. In the case of geographically large countries, e.g. Canada, USA, Russia, Australia, regional records would be needed as the national climate varies so much.

      This approach would not need adjustments, gridding, homogenization, but should as logically and accurately as possible take account of UHI effects of the applicable stations, or perhaps only “rural stations” should be included.

      If few national records and trends demonstrate obvious warming, then I don’t see how a “global temperature” could be claimed to show warming.

      • John Smith (it's my real name)

        GT is a political construct
        and like politics…all climate is local

        “If few national records and trends demonstrate obvious warming, then I don’t see how a “global temperature” could be claimed to show warming.”

        It can’t

      • Well gee. Did we come out of The Little Ice Age or did we not? Was that a natural occurrence? Was it a good thing?

    • It’s a single, limited metric of a very complex system.
      People think it’s more important than it is… my view.

  57. Zeke – many thanks for addressing the questions on homogenisation.

    Have “Steve Goddard’s” contentions been addressed, that it is the removal of rural temperature records (globally) and the use of “estimated” temperatures that is driving the reports of record global temperatures?

    If someone has already addressed his questions please point me to the link.

    • Hi rogue,

      As far as Steven Goddard goes, he has a tendency to make things up. In the Berkeley dataset, at least, we have more stations than ever in recent years: http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Figures/global-land-TAVG-Counts.pdf

      (The dip at the end is because not all stations report instantly; some countries still use paper records!). No rural temperature records have been “removed”.

      The old GHCN-Monthly dataset does have fewer stations in recent years, but not because rural stations are “dropped”. Rather it’s because they did a big data collection effort in 1992 to get prior records from all over the world, and set up an automated reporting system post-1992 that used a smaller network of well-distributed stations. However, the old GHCN-Monthly product is going away shortly, to be replaced with GHCN-Daily, which, like the Berkeley Earth compilation, has no drop-off in stations in recent years. In either case, the resulting temperature reconstructions are largely identical.

      • “More stations than ever” you say Zeke, as if that’s somehow a good thing. But we know at least one, Rutherglen in Oz, is a truncated nonsense station only starting in 1965. Isn’t it possible BEST contains “more stations than ever” like Rutherglen? The only answer is yes. It’s possible. I’d say probable.

  58. Alexej Buergin

    Since I do not read articles that are so long, I need to ask this embarrassing question:
    What did our authors say about Puerto Casado, which was so important to Booker?

      • You can of course check the station record.

        http://berkeleyearth.lbl.gov/stations/157455

        What do you know. Station moves that coincide perfectly with breakpoints. Of course it should be adjusted.

        Homewood did of course not mention that.

      • richardcfromnz

        Rooter
        >”Homewood did of course not mention that.”

        Homewood did of course not mention that because he was referring to GISS/GHCN, not BEST.

        The BEST site move adjustments are 1971 (about 0.8) and 2006 (about 0.6) ish, total cumulative adjustment say 1.4. Only 2 adjustments made.

        http://berkeleyearth.lbl.gov/stations/157455

        GISS/GHCN raw and final:

        http://data.giss.nasa.gov/tmp/gistemp/STATIONS/tmp_308860860004_1_0/station.txt

        http://data.giss.nasa.gov/tmp/gistemp/STATIONS/tmp_308860860000_14_0/station.txt

        GISS/GHCN 1951 total cumulative adjustment -1.7, about 0.3 greater than BEST

        1967 cum adj -1.78, 1972 cum adj is 1.0 (missing data at 1971). 1971 site move step is therefore 0.78, same as BEST.

        Except if you look at the raw data in the 5 yrs immediately prior to 1971, the trajectory of the data was on the way down and actually matches the data on the other side of 1971 at 1971. So why was a -0.8 adjustment required by both GISS/GHCN and BEST when the data was little different immediately either side of 1971?

        There was a bigger break in the opposite direction at about 1987 that GISS/GHCN adjust for but BEST does not (see # below). And another at 2004 that neither GISS/GHCN nor BEST adjust for.

        1980 cum adj is -0.3, about 0.3 less than BEST
        1985 cum adj is +0.11, opposite to BEST (# see above)
        1990 cum adj is -1.15, about 0.5 more than BEST
        1995 cum adj is -0.65, similar to BEST
        2004 cum adj is -0.01, about 0.6 less than BEST
        2006 cum adj is -0.01, about 0.6 less than BEST

        Missing data precludes nominal 5 yr intervals.

        GISS/GHCN do not make the -0.6 2006 site move adj that BEST makes. But they obviously make several more adjustments than BEST do.

        Now do you see the problems here Rooter?

      • After a longer introduction richardcfromnz says:

        “GISS/GHCN do not make the -0.6 2006 site move adj that BEST makes. But they obviously make several more adjustments than BEST do.

        Now do you see the problems here Rooter?”

        The two adjustment algorithms do not end up with the same result.

        Of course they don’t. How could they? Different algorithms are in use, and there are also different numbers of stations the algorithms can utilize. Of course that will lead to differences at station level. It is a problem that you don’t understand that.

      • richardcfromnz

        Rooter
        >”The two adjustment algorithms does not end up with the same result. Of course the don’t. How could they? Different algorithms in use and there are also different number of stations the algorithms can utilize. Of course that will lead to differences on station level. It is a problem that you don’t understand that.”

        Oops, “differences at station level” from different algorithms?

        Speaks volumes Rooter.

        I notice you side-step this question:

        “So why was a -0.8 adjustment required by both GISS/GHCN and BEST when the data was little different immediately either side of 1971? ”

        And you don’t address the fact that there was a bigger break in the opposite direction at 1987 that BEST doesn’t adjust for but GISS does.

        Was this a bit too tricky for you Rooter?

  59. Steven Mosher | February 10, 2015 at 11:58 am | Reply
    Note that I linked to one of the others who made an issue of BEST.
    Booker raised the issue WRT NOAA, OTHERS ( see the link ) made the issue BEST as well.
    read harder; comment less

Booker’s article does NOT refer to “NOAA, OTHERS”. It is based on Paul Homewood’s website, which has never referred to BEST or carried any analysis of BEST data.

    Maybe read more accurately…………..

• If BEST is OK and the other surface indexes match BEST, what is the problem for you?

Booker’s claim, based on Paul Homewood’s blog, is just crap. They see that there are different values in the GISTEMP 1987 met-stations index compared to the GISTEMP land-ocean index in 2015. Even excluding the oceans, the number of met stations quadrupled from 1987 to 2015. The better question would be: why were those indexes not more different?

• You just don’t get it, do you, or you haven’t actually read all of Paul Homewood’s latest posts.
Historic data in Iceland proves the original temperatures were correct; they have been massively adjusted because a simple computer algorithm cannot distinguish reality from bad values.
Of course Mosher & co say it is only Iceland, but “only Iceland” categorically proves that the algorithm does not work properly because of the assumptions built into it.
Just like GCMs don’t work because of the assumptions built into them as well.

      • Osborn: Is that all you have? The scandal is wrong adjustments in Iceland? Well, take comfort. There are probably more wrong adjustments in the Arctic in GHCN (not necessarily BEST though). Taken together these adjustments in the Arctic reduce the Arctic temperature trend in this century.

        A scandal? But not a warming adjustment like Homewood tried to convey.

• In Delingpole, see the links.
See the posts on Watts; that link got dropped.
I can add it if you demand.

  60. The compilation and analysis of temperature data from worldwide stations as a measure of global surface temperature is a natural thing to do. What the results mean is a completely different story. While it’s a temperature record of some kind, it’s not at all clear to me that it is fit for purpose as currently used. Just my opinion.

The fact that Robert, Zeke and Steve have accepted the chore of QC’ing the data and its manipulation, and are working to convince the rest of us that this process is transparent and unbiased, is laudable. Especially given that they seem to be doing this on a voluntary basis. Kudos, guys, and thanks for taking the abuse. Been there, done that.

    However at the end of the day, and accepting the BE results at face value, we are talking about a change of less than 1 deg. C in a hundred years, with interannual fluctuations of tenths or hundredths of a degree. This is interesting but not particularly meaningful from a scientific or climate perspective.

    We know that atmospheric CO2 concentration has also increased during the past hundred years and based on “green house science” we presume that some part of the temperature increase is resulting from increased CO2. Personally I don’t think the CO2 contribution is all that much.

    None of this is very alarming to me nor should it be to reasonable people. The alarm arises only after the introduction of the GCM’s and their systematic misuse to project future high levels of warming that may or may not be cause for alarm. The fact that model projections are not consistent with measured warming during the past 15-20 years is a pretty good reason to question the advisability of making extraordinary policy decisions on the basis of the models.

I think the terms fraud and hoax are more applicable to the misuse of the temperature data in warmest-year diatribes, misrepresentations of the veracity of the models, and misleading assertions that extreme weather events are a consequence of CO2-driven climate change.

  61. Pingback: Rising sea level, melting ice or...? - Page 5 - Fuel Economy, Hypermiling, EcoModding News and Forum - EcoModder.com

  62. So, wait, BEST still includes stations like these?

    • Of course it does. A great many Australian BOM temperature stations have the Stevenson screen sitting on top of a heavy STEEL frame…. nice heat sink there, you cannot touch them on a hot day! Just look at what the BOM thinks is “good exposure”…. steel frame, next to a steel fence: http://www.bom.gov.au/climate/cdo/about/sites.shtml

      • Lucifer: You can of course leave it out. But by what criteria? How to decide?

        The main problem for you here is that you do not like that the adjustments are spot on in this case. The adjusted data represent the temperature history far better than the raw unadjusted data.

      • rooter,

        You wrote –

        “Mike Flynn really got problems when discovered that the BEST algorithm adjusted the station baddy in Tucson. Everything was suddenly chaotic. Looks like a breakdown of some sort.”

        I assume that English is not your native language, based on your written expression. This might explain your apparent inability to comprehend what I wrote. Might I tactfully suggest you reread what I wrote. Let me know if you don’t understand, and I will put those parts more simply.

        I admit I cannot understand what you wrote. What is a “station baddy”? Why would it suddenly become chaotic? Why would it break down?

        Is this an example of some new Warmist language, or just more bizarre Warmist obfuscation?

        I realise that expecting Warmists to provide rational answers to simple questions is probably an expectation too far, but one can always hope!

        Live well and prosper,

        Mike Flynn.

      • Mike Flynn does not understand why a station like this:

        is a station baddy. Does not understand why its temperature record is biased. And why it is a good idea to adjust the series.

    • I believe it is this:

      http://berkeleyearth.lbl.gov/stations/27670

There is certainly much to adjust there. Lots of upward breakpoints. That means homogenizing results in a clearly reduced trend.

      But as I understand, that is not allowed. No adjustments!

Anyhow, the file from BEST contains both unadjusted and adjusted series. Red is unadjusted; blue is adjusted.

      A scandal!

      • You’re focused on the trends.
        I’m focused on the fact that the station is worthless.

      • rooter,

        If the components of the Earth system behave chaotically, pointing at one station, a group of stations, or all stations, and endeavouring to draw a useful conclusion might be completely pointless.

        Have you any reason for believing that following this trend is not merely placing you closer to an inevitable inflection point?

        Can you produce any evidence that the three dimensional movements of the atmosphere, aquasphere, and lithosphere are not unpredictable in any meaningful sense?

        Is there a point to this endless examination of the estimated past?

        Do you really believe that the past extrapolates to the future? If you do, most casinos will provide you with data showing individual results of roulette wheel turns. They love people who believe that the past predicts the future.

        It’s all good for a bit of light amusement and relaxation – albeit at great expense to someone – not much good for anything else, what?

        Live well and prosper,

        Mike Flynn.

      • Mike Flynn really got problems when discovered that the BEST algorithm adjusted the station baddy in Tucson. Everything was suddenly chaotic. Looks like a breakdown of some sort.

    • Global temperature: a towering and intricate mathematical structure wobbling on its base of old tripe.

  63. Really, all that is left to do is to compare the “raw data” used in BEST and other analyses to the actual paper records generated at the time (the true raw data). If someone like Mosher assures me they are exactly the same, I am quite willing to take his word for it as I have no reason to doubt him, and plenty of reason to trust him.

    • I have a nice Bridge in London for sale, would you like to buy it?

    • 1.6 billion records.
      I’m on it.

      But you realize that the brandons of the world would remain skeptical. You know there is uncertainty over exactly how many died in the holocaust.

      • Steven Mosher,

        Am I correct in thinking that the supposedly raw data you analysed is really data you made up, because there were too many real records to examine? Or did you just look at a handful of original records, and get bored?

        You wrote –

        “1.6 billion records.
        I’m on it.”

        No you’re not. You’re making that up too.

        Guessing, estimating and modelling, pretending that statistical sampling is equivalent to rigorous scientific examination of recorded results, can produce nonsense. In this case it has. Warmist PC nonsense, to be sure, but nonsense nevertheless.

        Get cracking. Examine 1.6 billion records, if such there are. It looks like a lifetime job, at the very least. Good luck to you! I can’t see any point at all, so I won’t volunteer to help. Maybe you could recruit some of the millions of Warmists to assist in such an important task. I’m sure they’d come rushing to your assistance.

        Or maybe not.

        Live well and prosper,

        Mike Flynn.

      • Really? you write “You know there is uncertainty over exactly how many died in the holocaust.” Repulsive.

      • Mosher used a lower case ‘h’ and had some numbers in his stuff. Try that, Brandon.

64. The data in Bolivia and Paraguay has definitely been corrupted by GISS etc. My father was responsible from 1964 to 1977 for fixing all the Stevenson boxes here for the UN WMO. The RAW data from these areas is correct and it has been fraudulently corrupted beyond belief. He would be turning in his grave if he knew how corrupted the WMO etc. has become. He is one of the persons mentioned here, just to justify. LOL
    http://www.rti.org/pubs/bk-0003-1109-chapter04.pdf

    • Why the need for fixing the Stevenson shelters if they were ok?

Perhaps you inadvertently gave the reason for the breakpoints in Paraguay in 1960-1980, and the reason for the adjustments in Paraguay.

      • Stevenson shelters need regular maintenance. The whitewash wears off and has to be redone for example. The fact that someone was employed to maintain the instruments is an assurance that the data is of good quality, not a sign that it should be distrusted.

        If data from instruments I was responsible for was adjusted in this cavalier fashion I know I would be annoyed. Such adjustments are a slight on the scientific reputations of those who gathered the data. It is tantamount to a suggestion that they were so incompetent they couldn’t use a thermometer properly.

      • Let me add to my reply because it is worse than that.

Suppose you are correct and maintaining the Stevenson screens did lead to the breakpoints you allude to. A Stevenson screen which is bare of paint or lacking in ventilation or otherwise in bad condition usually records a spuriously high temperature. Such issues develop slowly as the instrument deteriorates. Fixing the problem on the other hand causes a sudden correction, a temperature drop which will be detected as a breakpoint. So what should we do about such a record?

        What breakpoint correction procedures do is shift the two parts of the temperature record to align them and eliminate the ‘break’. But remember that the break in this case is actually the correction caused by fixing the instrument. The bias caused by the slow deterioration of the instrument is not detected as a breakpoint and the warming that this causes is preserved.

        Breakpoint correction in this situation is a systematic procedure for undoing all the good caused by actually maintaining the instruments.
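Ian H’s scenario can be made concrete with a toy simulation. This is only an illustration of the argument, with invented numbers; it is not a description of how the BEST or NOAA breakpoint algorithms actually behave:

```python
import numpy as np

years = np.arange(1900, 2000)
drift = 0.02      # deg C per year of spurious warming as the screen degrades (invented)
service = 10      # screen restored every 10 years, which resets the drift (invented)

age = (years - years[0]) % service
raw = drift * age                 # sawtooth around a flat "true" climate of 0

# Naive break adjustment: detect the sharp drops (the maintenance events) and
# shift everything after each detected break so the series is continuous.
adjusted = raw.copy()
for i in range(1, len(raw)):
    step = raw[i] - raw[i - 1]
    if step < -0.1:               # flagged as a breakpoint
        adjusted[i:] -= step

raw_trend = np.polyfit(years, raw, 1)[0] * 100
adj_trend = np.polyfit(years, adjusted, 1)[0] * 100
print(f"raw trend ~ {raw_trend:+.2f} C/century, adjusted trend ~ {adj_trend:+.2f} C/century")
# The adjusted series now warms at close to the instrument's drift rate,
# even though the underlying climate was flat by construction.
```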

      • Ian H:

        Again: Why the need to fix the screens if they were ok in the first place?

Your logic that microsite changes that result in a cooling bias should not be taken into account is very strange. You could have answered your own question about what to do with such records. No adjustments for changes?

      • Do you have any reason to suspect that the distortion to the record caused by a pattern of instrument deterioration and periodic maintenance causes a spurious warming or cooling trend? I certainly can’t see the justification for believing this. If not do nothing.

        So yes – no adjustments at all for changes.

      • Ian H says he will not adjust for shelter changes.

        Nice to know.

65. I am still concerned about the 2012 presentation by E. Steirou and D. Koutsoyiannis to the European Geosciences Union. It is only available as a PowerPoint. They analyzed the effects of homogeneity adjustments on 181 GHCN stations and found that half of the warming trends were a result of the adjustments, a far cry from what we see from Berkeley Earth.

    Now their sample was not geographically very representative, but that number of stations usually can reproduce global (if evenly distributed) or regional changes quite well. Further, I don’t have a problem that correction for time of day issues–which is somewhat implied in their methodology–could induce a scientifically defensible warming.

    The second author is quite highly regarded in the field of statistical hydrology, and yet the authors have been in radio silence about this result; it hasn’t turned up anywhere in the literature (with props to the fact that this is a very contentious field, and editors are in fact intimidated by some of my more goon-like friends) and you would think if it were robust and convincing it would have appeared somewhere. An additional reproducibility problem is that it is the dissertation of the first author and that is in Greek.

    Any thoughts on this one with regard to Berkeley Earth?

    • Here are the slides that Pat Michaels refers to:
      http://88.167.97.19/temp/Investigation%20of%20methods%20for%20hydroclimatic%20data%20homogenization_2012EGU_homogenization_1.pdf

      There was a lot of discussion when this first came out, some of which is listed and linked here: http://www.itia.ntua.gr/en/docinfo/1212/ including another link to the presentation. All before BEST, so a good question how BEST relates?

      • “From the global database GHCN-Monthly Version 2, we examine all stations containing both raw and adjusted data that satisfy certain criteria of continuity and distribution over the globe. In the United States of America, because of the large number of available stations, stations were chosen after a suitable sampling. In total we analyzed 181 stations globally. For these stations we calculated the differences between the adjusted and non-adjusted linear 100-year trends. It was found that in the two thirds of the cases, the homogenization procedure increased the positive or decreased the negative temperature trends.

        Problems.

1. GHCN v2: this is DEPRECATED. BAD Skeptic.
2. Continuous records: you have to be careful with GHCN-M because it can combine records from multiple sources even when those stations are different.
3. No code.
4. No data.
A waste of my time.
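For what it’s worth, the comparison the quoted methodology describes amounts to differencing per-station 100-year trends computed from adjusted and raw series, then counting how often the adjustment made the trend more positive. A rough sketch, where the station list and its field names are hypothetical placeholders for whatever parsed GHCN data one has:

```python
import numpy as np

def century_trend(years, temps):
    """Linear trend in degrees per 100 years, ignoring missing values."""
    years, temps = np.asarray(years, float), np.asarray(temps, float)
    ok = ~np.isnan(temps)
    return np.polyfit(years[ok], temps[ok], 1)[0] * 100.0

def adjustment_effects(stations):
    """For each station record {'years': ..., 'raw': ..., 'adj': ...}, the change in
    100-year trend introduced by adjustment (adjusted trend minus raw trend)."""
    return np.array([century_trend(s['years'], s['adj']) -
                     century_trend(s['years'], s['raw']) for s in stations])

# effects = adjustment_effects(stations)
# np.mean(effects > 0)   # fraction of stations where adjustment made the trend more positive
```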

• A large part of those stations were in the continental US, and we know the TOBS issues in the US. No wonder they got a large proportion of adjustments. No wonder that presentation did not result in a paper either.

• Rooter, incorrect. Europe is well represented. Asia is well represented. Oceania is well represented. Only South America and Africa are underrepresented. Page 9 of 18.

        Mosher, the sample is 181 GHCN stations, essentially all that had a reasonably continuous 100 year record. Warming bias in NCDC homogenization.

• Rud Istvan does not quite see what he writes:

“Rooter, incorrect. Europe is well represented. Asia is well represented. Oceania is well represented. Only South America and Africa are underrepresented. Page 9 of 18.

        Mosher, the sample is 181 GHCN stations, essentially all that had a reasonably continuous 100 year record. Warming bias in NCDC homogenization.”

54 of 181 from North America. That is not a large proportion.

Well, well.

        And because they are using only long series, what does that tell us about adjustments after for example 1970? Not much.

    • Too few stations

• Er, I thought you could knock out the majority of stations and still get the same line shape. I recall you stating that the CUS could be done rather well with only 100 stations.

      • For evaluation of homogeneity approaches.

      • Yet another lazy response from SM, a pompous grunt as it were. There are no minimums in sampling theory, so your grunt is meaningless.

    • It is possible with 182 well-sited stations to get a reasonable mean. But it’s not much use for sub-sampling. And papers like Lawrimore et al 2011 have analysed the whole collection, to see the actual effect on a global index. So have BEST, as shown here. Even I have. Why would a journal be interested to publish such a limited set? Well, I suppose it beats Booker’s.

• One cannot help but note that all of the temperature reconstructions have essentially the same shape and apparent delta T for all pairwise comparisons; thus they are all either correct or all fatally flawed.
I also seem to recall that the smallest magic number for jackknifing is a true n=24, which gives triplicates of n=8, the magic number.
Delta T is important as it gives us a way to arrive at a TCS, which on the bottom line is what we are all interested in.

    • Hi Pat,

Regarding Steirou and Koutsoyiannis, the lack of any station list prevents replication. It’s fairly trivial to just do a global temperature reconstruction with and without homogenization (as we’ve done here, and other folks have done elsewhere) and see that the global effect is relatively small. There are certainly regions and individual stations where homogenization has a much larger impact, and it may well be that Steirou and Koutsoyiannis happened to select a subset of stations where that was the case. This does not mean that the adjustments they found are unjustified, as there are real systematic biases (TOBs changes, instrument changes, etc.) that introduce non-random errors into the raw data.

I think the most compelling evidence that adjustments are not creating additional bias comes from tests using synthetic data, e.g. the Williams et al 2012 paper: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/williams-etal2012.pdf

      We also presented a poster at the AGU a few years back expanding this approach to the Berkeley data: http://static.berkeleyearth.org/posters/agu-2012-poster.png

• Well, no, Zeke. The presentation is a summary of a PhD thesis in Greek. That apparently contains what you would seek. Hint: both authors are also fluent in English, a global requirement these days. Have you asked?

  66. Zeke,

Thank you for your helpful response. At the risk of trying your patience, can you provide any insight into the claimed divergence between BEST and satellite records that Steve Goddard claims in this chart?

    regards

    Rogue

• There are simpler plots. WoodForTrees can do a monthly comparison plot for BEST vs. RSS. Growing anomaly divergence since 2000, now on the order of BEST being 0.25 C warmer. It would be interesting to get an explanation.

      • There is a pretty substantial disparity between land and satellite records over Alaska and Northern Canada (and I would guess, also Siberia, etc….). I noticed it when working over some Alaska data, and what came to mind is that the satellites can’t see down to the level of the winter inversions. Given that cold air should preferentially warm (because the wavelengths where there is water/carbon dioxide overlap are more saturated in a wetter (warmer) atmosphere, so the effect of co2 is not very logarithmically damped), that would mean that the satellite would not be picking up the very low-level warming and hence could be a source–not the only one–of the disparity. I ran this by Christy when I was looking at this in 2011 and he agreed.

      • I agree with this also.

      • But aren’t cases where warm air overrides cold more common?

      • TY Pat and Judy. Makes sense. Testing current measurement systems against each other for consistency and ‘modern’ quality is something I have been exploring recently for the period since 1998, and especially since ~2005 and ARGO. Just consistency, not trends.

• WFT, last I looked, had old data. Sent him mail; no response.

        RSS has a problem with NOAA15 according to Roy.

        Now I suppose you guys want me to defend RSS adjustments.

        do I get a cookie at some fricking point or just more allegations of being part of a fraud?

• Climate Explorer has the RSS lower-troposphere product in kelvin, which is kind of nice. That indicates the average altitude they are measuring is about 2.5 km, at around 271 K.

      • Cookie! No cookie! Just ‘thanks from a grateful nation…’

      • Well, I prefer moshe raw to moshe adjusted.
        ================

      • Pat Michaels:
        ” Given that cold air should preferentially warm (because the wavelengths where there is water/carbon dioxide overlap are more saturated in a wetter (warmer) atmosphere, so the effect of co2 is not very logarithmically damped)”

        I’ve seen this before but I’m not convinced it’s correct.

Take the extreme of cold air at T = 0 K: by definition no energy is emitted, and so there is zero radiative forcing from increasing CO2.

        Now, radiative forcing depends on the profile of temperature, so things vary with height. And the net flux from 2xCO2 going into the surface is greater at the poles and lesser (even negative!) at the equator, suggestive of the overlap effect that you raise.

        But the radiative forcing aloft is lower at the poles than the equator which we see in the upper left plots here:

So, rather than cold air preferentially warming, I’m thinking that the surface beneath inversions (which, yes, are cold) should preferentially warm, but that it is actually the warm air (tropics) that should also preferentially warm due to increased CO2.

      • One other aspect of polar radiative forcing comes from Dr. Curry:

        curry.eas.gatech.edu/currydoc/Curry_JAS40.pdf

        Polar air masses are very dry, but also loaded with the ubiquitous ice crystals which are ( I’m guessing ) pretty good IR emitters and also ( I’m guessing ) probably poorly included in models.

• RSS and UAH both adjust their “data” (it’s actually not temperature). Read their documents; find out how and why they adjust the data for:

1. Changing sensors (different satellites)
2. Changing time of observation (overflight paths migrate, I believe)
3. Changing location (orbital decay)

Yup, the same stuff we have to correct for, they have to correct for.

      Ask for their code and data. Then give me a cookie and I’ll look at it.

Then ask how I can compare a Tmax/Tmin estimate with a satellite estimate,

      especially if AGW expresses itself in a change to Tmin?

      • Steven, ignoring your immediate previous tantrum, I have read everything available from both UAH and RSS on how they interpret the MSU results at the various frequency channels. And also BEST, GISS v3, NCDC PHA, and even NOAA nClimDiv. Piecing together a temp measurement puzzle.
As for the satellites, bird shifts are easy. Step changes, easy to calibrate from overlapping time periods. They do. No different than sat altimetry for SLR (essay Pseudo Precision). TOBS equivalent is equatorial drift time in polar orbits. Known, easy to compensate, see essay in new book. Excuse was incorrectly used for a massive ‘FAIL’ paper on UTrH constancy called out in the climate chapter of my book (has importantly to do with WV feedback). Orbital decay (altitude) not so important. The hard one is instrument degradation over time in space. Unshielded radiation damages electronics over time. The differences between UAH and RSS mainly apparently involve this last on NOAA15. Or so they both seem to think.
        No cookies for you, sorry. I do my own work on all this.

      • “especially if AGW expresses itself in a change to Tmin”

        LMAO! That is just pure guff. We can’t show warming, so we say that no, it’s the rising Tmins. You guys are so funny!

      • Well, you done the BEST that you can.

        And it remains consistent with warming at the low end:

        MODEL: IPCC5 (RCP8.5): 4.2C/century
        MODEL: IPCC4 Warming High: 4.0C/century
        MODEL: Hansen A: 3.2C/century ( since 1979 )
        MODEL: Hansen B: 2.8C/century ( since 1979 )
        MODEL: IPCC4 next few decades: 2.0C/century
        MODEL: IPCC5 (RCP4.5): ~2.0C/century
        MODEL: Hansen C: 1.9C/century ( since 1979 )
        MODEL: IPCC4 Warming Low: 1.8C/century
        ———————————————————————
        Observed: NASA GISS: ~1.6C/century ( since 1979 )
        Observed: NCDC: ~1.5C/century ( since 1979 )
        Observed: UAH MSU LT: ~1.4C/century (since 1979 )
        Observed: RSS MSU LT: ~1.3C/century (since 1979 )
        MODEL: IPCC5 (RCP2.6): 1.0C/century
        Observed: NCDC SST: ~1.0C/century ( since 1979 )
        Observed: RSS MSU MT: ~0.8C/century (since 1979 )
        Observed: UAH MSU MT: ~0.5C/century (since 1979 )
        ———————————————————————
        No Change: 0.0C/century

      • Data from mercury in glass, bimetal, or thermocouple instruments isn’t temperature either. It’s a measure of volume expansion of mercury, the differential expansion of two different metals, or the voltage differential between two different metals. No temp is measured in any of the three cases. So the microwave method is no different in that respect. You are just trying to muddy the conceptual waters.

  67. Matthew R Marler

    Robert Rohde, Zeke Hausfather, Steve Mosher:

    Thank you for another good presentation.

  68. Robert Rohde, Zeke Hausfather, Steve Mosher,

There seems to be an infestation of duplicated data in your analysis. I found the same with GHCN-M and notified them about it. Whether they will fix it at some point is still to be determined. They have more than 450 cases of excessive data duplication between two stations in a single year; 184 of those are all 12 months. Some of the station pairs duplicate data for a decade or more. By duplicate I mean stations within a single country having 7 or more months of identical data within a single year. This should be rarer than a massive asteroid strike. Most of these occur during the overlapping GHCN-GISS anomaly periods 1951-1990, with no excessive duplication this century. It seems people had a strange penchant for tidying records containing ugly missing-data months, especially during the anomaly period.

For comparison I checked the USHCN (4250*) subset of GHCN, using ±0.05 C to eliminate the hundredths digit, since most other stations don’t report that figure and the rounding methods used may not be consistent. That subset has zero occurrences of more than 7 months duplicated, and only 4 of those out of more than 78 million comparisons.

Took a quick look at your data files and immediately found these two station pairs, which demonstrate what I am saying. I don’t intend to do a comprehensive analysis of your dataset. What I can also say is that the ISTI database, which is about the same size as yours, is roughly 10% duplicate data.
Berkeley ID#: 151423, Berkeley ID#: 151425 1951-1960
Berkeley ID#: 153938, Berkeley ID#: 158939 1961-1971

    I don’t know how much difference it makes globally, but I suspect it must have an effect on calculating breakpoints and result in erroneous weighting being given to those monthly values regionally.
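The check Bob describes (pairs of stations in the same country with 7 or more matching monthly values in a single year, using a ±0.05 C tolerance) is straightforward to express in code. A sketch, with an assumed in-memory layout rather than an actual GHCN-M or Berkeley file parser:

```python
import numpy as np
from itertools import combinations

# Assumed layout: data[station_id] = {year: array of 12 monthly means, np.nan if missing}.
# Real GHCN-M or Berkeley files would need their own parsers.

def duplicate_years(a, b, tol=0.05, min_matches=7):
    """Years in which two stations report at least min_matches matching months."""
    hits = []
    for year in sorted(set(a) & set(b)):
        x, y = a[year], b[year]
        both = ~np.isnan(x) & ~np.isnan(y)
        if np.sum(both & (np.abs(x - y) <= tol)) >= min_matches:
            hits.append(year)
    return hits

def scan_country(data):
    """Station pairs within one country flagged for suspicious duplication."""
    flagged = {}
    for (id1, s1), (id2, s2) in combinations(data.items(), 2):
        years = duplicate_years(s1, s2)
        if years:
            flagged[(id1, id2)] = years
    return flagged
```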

    • see my comments above.

Unless the data from stations (their locations, names, and data) is duplicated for the entire record, they are kept as separate stations. That means if the location, name and data don’t match exactly, they are kept as separate stations.

The other option is to merge stations that are slightly different and “assert” they are the same.

If you check the status updates for GHCN-Daily (for example), one of our primary sources, you’ll see that there is a pretty good stream of changes where “false duplicates” get removed.

False duplicates make your estimate more certain than it should be.

      • Steven Mosher,

I notice you didn’t mention anything about the station pairs I listed. You also didn’t dispute that it could have an effect on breakpoint calculations, which I believe you calculate by looking at surrounding stations. Each station in those pairs above would lead you to assume no breakpoint in the other during the duplicated portion of the time-frame. You also didn’t dispute the idea of those monthly values getting more weight in a regional analysis.

        I don’t know about your dataset other than the cases I mention above. I must say it sure looks odd to find most of the duplication in GHCN occurring during the anomaly periods. Is it that way in your dataset?

        If GHCN-D is concerned about duplication, don’t you think checking your own dataset to uncover and remove such data would be wise?

        Maybe you should run a check by deleting all such duplicate data prior to doing an analysis. At least then you would have an idea how much of a difference it makes. I can’t comprehend a reason for leaving erroneous data in the database. It’s not like you have a shortage of stations.

      • I looked at the first pair.
        different name
        different identifier
        different location
        different data

But your science is settled. They are the same?

Now imagine I had joined them; some guy would argue the opposite case.

Here is what I also know: pick 1000 stations. The answer doesn’t change.

• Are you asserting that http://berkeleyearth.lbl.gov/stations/151423 is the same as http://berkeleyearth.lbl.gov/stations/151425?

We have two stations because the location is different, the name is different, and the data is different.

      • Steven,

You seem to think it is OK to have dozens of records from a single location as long as those records have been assigned slightly different locations or names, and some years of data have different values, thereby making them all independent sources for the entire length of their record. This gives each of them equal weight with truly independent records, which would only have a single input into the analysis. Sounds like consensus building.

        The idea that most excessive duplications are found in the anomaly periods rather than in the other portions of the record strongly suggests finagling has been done. All that duplicated data should be removed. It’s not like their use is required. Plenty of other stations available.

More than 93% of the within-country two-station comparisons in the entire GHCN-M dataset had zero matching months of data in the same year. That applies to the subset of stations strictly within the USHCN and also to the remaining portion of the world’s stations. The rest of the world is where most of the duplication has happened. Do the calculation for the likelihood of finding any 12 months matching in a single year between two stations in the same country. You are looking at trillions to one against it happening.

• Bob, the WMO numbers are different as well.
There appears to be a partial overlap, but we don’t assert that stations are identical unless they are substantially the same in position, names, identifiers, and data.

      • Sounds like you are OK with leaving obviously erroneous data in the database and then using it.

      • Look at the plots.

        1. the location is different.
        2. the name is different
        3. the identifier is different.
        4. the data is different.

You want to argue that both of those records came from the same place? Go ahead.

Here is another hint: sometimes stations have two sensors, and the records go to different agencies.

      • Mosher, The Australian has a front-page story on the issue today. It concludes:

        “However, a report on US climate scientist Judith Curry’s website yesterday rejected any claims of fraud in the homogenisation process. The report by Robert Rohde, Zeke Hausfather and Steve Mosher said it was possible to find stations that homogenisation had warmed and others that had cooled. It was also possible to find select entire continents that had warmed and others where the opposite was the case. “Globally, however, the effect of adjustments is minor. It’s minor because on average the biases that require adjustments mostly cancel each other out,” the report said.

“In a statement to The Australian, NOAA said it was understandable there was a lot of interest in the homogenisation changes. “Numerous peer-reviewed studies continue to find that NOAA’s temperature record is reliable,” NOAA spokesman Brady Phillips said. “To ensure accuracy of the record, scientists use peer-reviewed methods called homogenisation to adjust temperature readings to account for a variety of non-climate related effects such as changes in station location, changes in observation methods, changes in instrumentation such as thermometers, and the growth of urban heat islands that occur through time,” he said.

“Mr Phillips said such changes in observing systems cause false shifts in temperature readings. “Paraguay is one example of where these false shifts artificially lower the true station temperature trend,” he said. Mr Phillips said the largest adjustment in the global surface temperature record occurs over the oceans. “Adjustments to account for the transition in sea surface temperature observing methods actually lowers global temperature trends,” he said.”

        http://www.theaustralian.com.au/national-affairs/climate/arctic-monkeys-climate-agencies-revise-weather-records/story-e6frg6xf-1227215290929 (paywall)

  69. Pingback: The Climate Change Debate Thread - Page 4646

  70. Pingback: Global Warming - The "settled" science unwinds. - Page 19

  71. “And I used to be very suspicious… I think that Booker and Delingpole suddenly jumping all over this will eventually reflect badly on them…”

No longer skeptical? The entire land record is fatally flawed by collecting raw data in the midst of urban heat islands. “Up to 80 per cent or more of the Earth’s surface,” reports Christopher Booker (…STILL being tricked with flawed data on global warming), “is not reliably covered.” Fewer than 6,000 official thermometers (with their individual readings being extrapolated to cover up to 1.6 million square miles) are used to arrive at the average temperature of the world; and, since about 1900, most of the official thermometers “are concentrated in urban areas or places where studies have shown that, thanks to the urban heat island effect, readings can be up to 2 degrees higher than in those rural areas.”

I think any good scientist ought to be a skeptic, as Freeman Dyson says.

    • Waggy,

      All I said was that BEST has done pretty much as well as anyone can with the available data, and that Delingpole and Booker are mistaken if they think they can stop alarmism in its tracks by asserting fraud in adjusted data.

      Oddly, Mosher jumps down my throat every time I mention how I became a sceptic but lets it pass when I say that I used to be suspicious of the temperature adjustments, but he and Zeke (OK, mainly Zeke) have convinced me that the BEST approach is more or less sound.

      Anyway, I agree completely that the coverage and quality of the data is totally inadequate to the task of measuring any sort of average global temperature (absolute or anomaly) to within 0.1°C. And in any case, as noted elsewhere, it’s moist enthalpy we should be measuring anyway, not just temperature.

      Am I still sceptical? Sure. I’d call myself a microwarmer (as in AGW is real but too small to worry about and probably too small to measure with the network we have now). But shouting about temperature record adjustments is the wrong way to go to fight the politicised alarmism of Greenpeace et al.

• In the first link Christopher Booker reports, “One of the more provocative points arising from the debate over those claims that 2014 was ‘the hottest year evah’ came from the Canadian academic Dr. Timothy Ball when, in a recent post on WUWT, he used the evidence of ice-core data to argue that the Earth’s recent temperatures rank in the lowest 3 per cent of all those recorded since the end of the last ice age, 10,000 years ago.” If true, it invalidates the whole approach of gathering data that can be massaged to sell a story that temperatures are hotter now than ever.

      • Wag, the ice core data Ball referred to is mostly GISP2 from Greenland. He was probably incorrect to stretch that location to the world. However, if one uses the Marcott thesis–not the abominable science paper— then global paleoproxies indicate that perhaps half of the past 2000 years was globally warmer than now.

      • Good points and we know the paleoproxies have been abused to make a case for warming that just does not exist, a more recent example being Mann’s ‘hockey stick’ science. “Overall,” says Wegman when testifying before congress, “our committee believes that Mann’s assessments that the decade of the 1990s was the hottest decade of the millennium and that 1998 was the hottest year of the millennium cannot be supported by his analysis.”

    • My take is that none of it matters because sea levels and overall temps have done nothing but swing for millennia and it’s hard to find anything odd in the utterly commonplace.

      But, regarding UHI, it’s interesting to reflect that most of Sydney’s (Obs) hottest years are indeed clustered after 2000, while two long-record coastal+rural stations I just happened to glance at this morning for comparison have their top temps clustered well back in the past.

      Yamba Pilot Station’s 95th percentile of hottest years were between 1884 and 1896. Kempsey, opened later and further from the coast, had its entire 95th percentile in a straight run from 1910 to 1919 (and that’s within BoM’s safer screening era). Similar story for the 90th percentile at these stations. Other and more mixed stories from elsewhere, of course.

      A history of max temps tends also to be an history of cloud behaviour, land clearing, re-vegetation, screens, UHI etc so it’s hard to care much. (Contrary to what has been cleverly half-implied, much of Australia was drier for the half-century before 1950.) If people want to turn this vague stuff into graphs and if those graphs indicate some global cooling or pausing or warming in various lines or cycles…well, it’s good to know the world is still the world! We still haven’t flatlined after all these millennia! Still got it!

      Of course, a degree of heat in Sydney is far more distinguished than a degree of heat here in the boonies. If a degree falls in the forest…

      • Mosomoso

I suspect that the rural stations you quoted did not have Stevenson screens, and therefore their data will be discounted by BOM, who are removing records set pre-Stevenson.

        Tonyb

      • tonyb, not always the case in regard to Stevenson screens. Australians were early adopters and BoM often seems not to know (or won’t say) if screens were in place.

        It’s also interesting that these heat clusters in the past were preceded and immediately followed by more average runs of temps.

        Certainly, with screens in place and everything good to go, that cluster of av max heat post 2000 is not apparent up the coast here as it is in Sydney.

        Greetings from the mid coast, where the almost-El Nino has forgotten the rules (again!) and brought us a coolish and damp late summer after a hot start.

      • Mosomoso

These clusters recur continually throughout the record. We see clusters of heavy rain, then drought, then heat, then cold, with, as you say, ‘normal’ conditions in between. Which raises the question of whether our usual weather state is one of extremes or of normality. In the UK I would go for extremes.

        Tonyb

      • Sydney’s a great call, mosomoso. It’s had a continually published local newspaper since 1860 or so and climate scientists can’t disappear what actually happened, hot days, cold days, drought etc since that time. BOM is left just arbitrarily ignoring history and stating that they only care about temps from 1910, as if the country didn’t exist beforehand. Absurd. Late 19th was real hot on the eastern seaboard in Oz. It just was. It was all over the newspapers. Trust the contemporaneous records. It took the Oz climate scientists Karoly and Gergis too (I think) until 2014 to bother to look at news records to map actual temps from back then and lo, they found it was hot, probably hotter even than today. No great press release for that finding though.

  72. John Vonderlin

    Robert, Steve, Zeke and Judith,
    Thank you for this posting. Between this and the ongoing dissection of the F & M paper at Climate Audit I feel like I’m witnessing some of the best manifestations of the new paradigm of Internet science discussions. Even the angry ego clashes, the nattering nabobs of negativism and nitpickers are helpful in comprehending the sausage-making aspects of the evolution of knowledge. Kudos to all. I feel a little smarter today.

    • Both these discussions seem more like angels on pinheads.

      The surface temperature is not remotely the energy content of the atmosphere. The energy content at the surface has both latent and sensible components. These change with surface water availability. Temperature at the surface changes with rainfall and drought and not just ‘global warming’.

      The L&M paper purports to analyse for divergence of models from observations. For each of these models there are thousands of feasible solutions. Divergence is a matter of choosing a warm solution from amongst the many. It has no deeper significance. Climate models at best define a probability space – there are no unique, deterministic solutions.

  73. Rohde, Hausfather and Mosher

    Your first paragraph states, “— and Anthony Watts previously insinuated that adjustments are somehow criminal.”

    I went to your link and did not see where Watts insinuated anything about your BEST efforts. The earlier discussion at that link was all about NOAA adjustments.

    • a link to the latest cartoon got dropped.

Essentially you had Booker raising an issue.
Delingpole piled on with links to Zeke and Shub, who had a swipe at us.
Then WUWT, with BS and Josh taking a swipe at us.
Then Morano and Drudge reposting an old article where the claim of criminal activity was made.

So the minute all of those guys want to say that we are not part of a hoax or fraud or criminal activity, I’ll be satisfied.

      • Stop showing your thin skin. This is politics, Chicago style. Obama style.
        Josh’s cartoon had nothing to do with BEST. Already posted before upthread. The MSM columns you cite have nothing to do with BEST.

Your paranoid (for evidence see the reply upthread ending in ‘show them piles of ashes’), increasingly vehement, and increasingly irrational responses do, however, suggest something might be rotten in BEST’s Denmark.
        To paraphrase the Bard.
        So, to rehash previous jousts. BEST station 166900 is just a temperature expectation field, not a temperature. BEST station 151882 is just a data/metadata ingestion glitch… But it all evens out. Trust you. Well, learned long ago in Chicago style politics, never trust, only verify.

        How about BEST doing that on the central South America and partial Arctic regions Homewood specifically and irrefutably criticized concerning GISS?

• Booker and Delingpole do not constitute a ‘pile’. No one took a swipe at BEST; it was used to fact-check GISS’s adjustments. Too bad Berkeley makes the same type of changes to data the other agencies make.

  74. All this sturm und drang over whether the made up, computer generated data was massaged in the optimum way.

    People are missing the forest for the twig.

    1. Nothing you do with statistics will generate a number that can with any seriousness be called a “global average temperature” with the precision and accuracy claimed by the reporters, let alone that necessary to implement massive global intrusion by the government into the energy economy.

2. Notwithstanding point 1, the CAGW movement desperately needs to claim that they do know the GAT to within tenths of a degree per decade so they can push their decarbonization/socialization policies.

    3. Actual accuracy and precision are irrelevant for political purposes, what matters is the headlines.

“WARMEST YEAR EVER!” (by .002 degrees, with 35% probability)

    4. The reported temp trends were full of methodological statistical errors, which undermined the headlines.

    5. BEST was created, not to correct the reported trends (because ‘correctness’ is irrelevant and unattainable), but to ‘fix’ the statistical problems and so improve the ‘suitability’ of the reported GAT for headline purposes.

It is hilarious in this context that one argument in support of the supposed objectivity of the BEST adjustments is that in some instances their statistical legerdemain results in ‘cooling’. It is hilarious because the minuscule adjustments BEST makes to the reported record, you know, the proof that BEST is objective ‘science’, were the intended goal of the BESTers.

“See? We can prove we are objective and did not set out to provide PR support for the CAGW movement because our results are exactly what we predicted them to be. Within hundredths of a degree!”

    (And don’t get me started on ‘predicting’ past temperatures.)

    BEST may be the best ‘we’, meaning they, can do. But that does not make their work sufficient for the purpose for which it is being sold.

  75. John Smith (it's my real name)

    just want to say
    Robert Rohde, Zeke Hausfather, Steve Mosher
    thanks for this post
    read with great interest
    trying to remain skeptical of my own skepticism
    thanks also to Judith Curry

  76. Then why do we see that WUWT graph of temp adjustments in the GISS data that shows only a warming trend to account for >50% of the current anomaly?

77. A good check on the accuracy of global warming observations is the bulge in temperature between 1910 and 1940, when the temperature rose by 0.45 C and then just as surprisingly fell again. Berkeley shows this well. It is explained on my website (underlined above).

  78. So if I were to use a randomly crap set of algorithms, and attempted to justify them by saying that ON AVERAGE there were just as many spurious ups as downs, and claimed to be doing science, would people fall at my feet and worship my magnificence?

  79. Boston has 62 inches of snow since winter started…?

• No… 62 inches in about the last two weeks, and counting, and 76.5 inches this winter. The only predictions that are coming true are those about global cooling, and the Old Farmer’s Almanac made them back in 2009, but not by looking at homogenized data.

  80. When dealing with data in such massive amounts random errors will disappear into the statistical wash and are unimportant. The only errors that should matter are the systematic ones. And in my opinion those are best addressed not by tweaking the data – a fundamentally dishonest procedure – but by analysing the data to estimate the size of each systematic effect.

    The more data is adjusted the less information it contains. It is neither necessary nor wise to try to detect and correct every breakpoint in the data. That represents a massive overadjustment. The adjustments become so significant that the adjustment procedures themselves risk becoming sources of systematic error. These procedures turn unsystematic random breakpoint errors that should have no effect on the overall temperature record into sources of systematic error.

Furthermore the focus on breakpoints means that real sources of systematic error are ignored. Often it is implicitly asserted that correcting breakpoints will somehow correct these errors too, but that assertion is false. Consider UHI for example – a real and significant source of systematic error. Breakpoint adjustment and geographic averaging will not fix a systematic error like UHI. Indeed it is far more likely to amplify it by preserving slow spurious warming trends recorded at instruments in urbanising regions and eliminating the jump corrections caused by station moves to more pristine environments. Geographic averaging will then smear these adjustments to adjacent rural stations, further contaminating the data. If your supposed UHI correction procedure leads you to adjust downwards by 2 degrees the temperature recorded 50 years ago at a rural station adjacent to where a city will later develop, then you are quite simply doing it wrong.

    • Actually Ian, homogenization seems to do a reasonably good job of dealing with UHI, at least in the U.S.

      I wrote a paper with the NCDC folks that looked at how well homogenization does at removing UHI in the U.S. record. We looked at various definitions of urbanity based on things like lights visible at night from space, population density, population growth, and impermeable surfaces. We also ran the homogenization algorithm both with all stations and with only rural stations used to detect breakpoints, to avoid any risk of falsely adjusting rural stations upwards. We found that there is a sizable UHI signal in the raw data, and that most of it is effectively removed by homogenization even when only rural stations are used to homogenize, across all the different urbanity definitions examined. ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/hausfather-etal2013.pdf

      • My fundamental criticism is that things like breakpoint adjustment and geographic averaging, while they are attempts to address real issues, are excessively complicated and therefore likely to cause more harm than good. They destroy information and hide the data in a way that makes it difficult to see what is going on. The fact that you needed to spend such a significant amount of effort to try to understand the effect of these adjustment procedures is illustrative of the problem.

        I argued that breakpoint adjustment cannot address UHI and indeed is likely to make it worse. Homogenisation however might be able to address UHI if high quality rural stations are properly identified and pinned. Some kind of geographical adjustment is needed in any case to compare the measurements from differently distributed instruments at different dates.

        What we normally see however in most of these records, is a combination of several adjustment procedures – for example breakpoint adjustment and homogenisation and possibly also TOBS. This introduces further complexity. Order of adjustments starts to matter for example. You end up in a place where it is very hard to understand what is going on.

        My real beef is with breakpoint adjustment which I believe is excessive and unnecessary. For example many breakpoints are caused by maintaining the instrument or shifting it a short distance to a more desirable location. By eliminating these breakpoints you are effectively undoing these changes which were usually made for good reason. Why do that? Is it even desirable to try to do that?

Bear in mind that the problems that those who maintained or moved the instrument were trying to address often involve gradual deterioration which will not be caught by a breakpoint detection algorithm. While there are countervailing examples of breakpoints caused by sudden changes to the environment – laying concrete and so on – in the absence of any information as to what caused breaks, and in particular in the absence of any reason to suspect that the effect of these breaks is to introduce a SYSTEMATIC bias into the temperature record, I would suggest that we would be better off not making any breakpoint adjustments at all.

• UHI is not gradual.

1. It is seasonal.
2. It is tied to specific synoptic conditions.

So if you look at a long-term rural versus urban plot, what you’ll see is certain months or seasons where it’s pronounced, and the rest of the year… nothing.

• The onset of UHI clearly is gradual, as it is determined by the rate of urban growth. Cities don’t spring up overnight.

Yes, UHI is seasonal. It affects minima more significantly than maxima, we get more of it in winter than in summer, and it is more significant at night than during the day. So …

• You have not looked at long-term studies of UHI.

      • Steven – I regard your claim of a sudden onset for UHI with considerable skepticism.

        For this to be true UHI would need to have a sharp boundary (so that adding a single building would shift the UHI boundary enough to suddenly put an instrument inside it). It would need to have a constant magnitude in all urban areas subject to it no matter how built up so that continued development around an instrument does not increase the size of the effect. Those ideas are at odds with the physics of what causes the effect. I’m just not buying it.

        You seem to be claiming to have found this empirically from looking at records? Telling the difference between a step change and a gradual change in a record subject to any kind of noise is very difficult. The noise will make you see steps where there are none. You would need to apply some kind of statistical significance test – eyeballing a graph isn’t good enough to settle this claim. I am extremely skeptical that you have evidence that UHI is a step change (as opposed to a gradual one) capable of passing a statistical test of significance.
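One simple way to act on Ian H’s point is a formal model comparison rather than eyeballing: fit a gradual (linear) model and a single-step model to the same series and compare them with an information criterion. The sketch below uses synthetic data and invented parameters; it illustrates the shape of such a test, not any group’s actual changepoint procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
t = np.arange(n)
series = 0.01 * t + rng.normal(0.0, 0.3, n)   # a gradual trend buried in noise (invented)

def bic(rss, n_params):
    """Bayesian information criterion for a least-squares fit."""
    return n * np.log(rss / n) + n_params * np.log(n)

# Model A: gradual change (straight line), two parameters
coef = np.polyfit(t, series, 1)
rss_line = float(np.sum((series - np.polyval(coef, t)) ** 2))

# Model B: single step, with the changepoint location fitted by search
rss_step = min(
    float(np.sum((series[:c] - series[:c].mean()) ** 2) +
          np.sum((series[c:] - series[c:].mean()) ** 2))
    for c in range(5, n - 5)
)

# Count three parameters for the step model (two means plus the break location).
print("BIC line:", round(bic(rss_line, 2), 1), "  BIC step:", round(bic(rss_step, 3), 1))
```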

• Wrong, Ian. I am not arguing that the onset is sudden.

• Finding gradual change is easy. See my answer to Pielke’s challenge on this thread.
You have not worked with the data. That shows.
Process data more; comment less.

81. After reading the post and comments, I am fairly satisfied that the people who are working on the BEST data set are sincere and that their statements of why they did what they did have traceable logic behind them.

The urban heat island effect seems to me to be problematic: with the increase in global population there has been a movement from rural to urban areas that changes the UHI effect over time. The adjustment for Mumbai may not be the appropriate adjustment for Rio de Janeiro, because all things are not equal.

    This leads me to Rob Ellison’s comment:

    ” Climate models at best define a probability space – there are no unique, deterministic solutions.”

• RiHoo8, since you say you’ve read the comments here and are satisfied with what BEST has said, may I ask you about this comment I made and the responses I received? Specifically, after reading it, do you have any reason to believe BEST’s estimation of “empirical breakpoints” improves its results?

      It is trivially easy to see that this step introduces a great deal of spatial smearing. It is also trivially easy to see that, because of it, BEST has far less spatial resolution than the other groups. Can you offer any reason that should be desirable?

      Whether or not BEST is sincere, and whether or not there is a reason for what BEST does, it seems reasonable to me to question BEST’s “empirical breakpoint” estimations. As far as I know, BEST has never demonstrated these estimations have skill, and they cause BEST’s results to be very dissimilar from previous ones.

      And for the record, this isn’t a minor thing. If BEST discarded its “empirical breakpoint” estimations, they would find ~20% less warming and would have far greater spatial resolution.

      • Steven Mosher, “Why include data you know to be unsuitable.
        You’re still wrong. Officially in my ignore bucket.”

        It is a reference to compare BEST and GISS to. BEST and GISS agree well with that crap in FL but don’t in GA; call it third-party noise if you like. BTW, GHCN/CAMS on Climate Explorer only goes back to 1948 or I would use that.

    • Thanks, Ri.

      I’m continuing some work on the UHI issue and on improving the fidelity of local predictions. The chief difficulty is getting anyone with funding to believe that is actually an interesting science issue.

      • If they thought it would help the cause, they would fund it.

      • Steven Mosher, “I’m continuing some work on the UHI issue and on improving the fidelity of local predictions.”

        Then reduce interpolation range with instrumentation density. If you don’t need to interpolate 1000 miles, don’t do it.

      • Georgia with LRI

        Georgia without

        Optimal range interpolation, what a novel concept.

      • Captain, CRU TS is unsuitable. Read their docs.
        Want to know what HARRY_READ_ME worked on?

      • Captain. Interpolation isn’t the issue.
        The correlation decides the distance for kriging the weather.

      • CRU TS is unsuitable for some things. That chart compares GISS 1200 long range, GISS 250 short range, NCDC and CRU TS. You focus on just the one you find “unsuitable”. Typical.

      • Mosher, “Captain. Interpolation isn’t the issue.
        The correlation decides the distance for kriging the weather.”

        Interpolation isn’t an issue for “global”; it does smear regional results. People are picking at you with “regional” issues which aren’t what your product is about. On a “regional” basis, interpolation “adjusts” as much or more than the standard adjustments and in just about the same way, some warm and some cool. That is unavoidable for a “global” product.

        As for correlation determining the distance for “weather”: some areas are naturally out of phase with fairly close neighbors. Paraguay is at 25C, which is right on the shifting westerlies line. Amundsen-Scott is a corner case, located in the eye of a nearly perpetual polar cyclone. Even kriging has its limits.

      • That should be 25 south not C

      • captdallas2 0.8 +/- 0.2:

        Interpolation isn’t an issue for “global”; it does smear regional results. People are picking at you with “regional” issues which aren’t what your product is about.

        I don’t think it’s reasonable to say BEST is only about global temperatures when it goes out of its way to encourage people to look at temperatures on a much finer scale. Regardless, you’re wrong to say interpolation isn’t an issue at the global scale. Interpolation can influence and bias results at any scale depending on the distribution of sampling. An obvious example is how the presence/absence of coastal stations can influence temperature trends.

        Of course, that doesn’t mean interpolation introduces problems with BEST’s global results. It just means we can’t automatically rule out the possibility BEST’s results, even on a global level, have problems caused by interpolation.

        Even kriging has its limits.

        This isn’t even just about kriging. BEST’s homogenization process compares stations to their 300 nearest neighbors. It will look for such neighbors as far out as 2500km. That means BEST’s “empirical breakpoints” can get estimated using data as far apart as 5000km (2500 km in each direction). I suspect that affects their results more than the interpolation distance for the kriging step.

        To be fair though, the amount of weight stations are given in these calculations is dependent upon how far away they are. That means close stations will “matter” more than far ones. The more densely sampled an area (for any given time), the less this issue should matter.
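
        For readers who want to picture the weighting point, here is a rough sketch of how a distance-weighted “regional expectation” for one station might be built from its neighbours (purely illustrative: the haversine distance, the 2500 km cutoff and the exponential weight scale are stand-ins, not BEST’s actual kernel or code):

        ```python
        import numpy as np

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance in kilometres between two points."""
            r = 6371.0
            p1, p2 = np.radians(lat1), np.radians(lat2)
            dp, dl = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
            a = np.sin(dp / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2)**2
            return 2 * r * np.arcsin(np.sqrt(a))

        def regional_expectation(target, neighbours, max_km=2500.0, scale_km=500.0):
            """Distance-weighted mean of neighbour anomaly series on a common time axis.

            'target' and each neighbour are dicts with 'lat', 'lon' and an 'anom' array.
            """
            weights, series = [], []
            for nb in neighbours:
                d = haversine_km(target["lat"], target["lon"], nb["lat"], nb["lon"])
                if d > max_km:
                    continue                           # too far away to inform this station
                weights.append(np.exp(-d / scale_km))  # nearby stations count for more
                series.append(nb["anom"])
            if not weights:
                return None
            w = np.array(weights)[:, None]
            return (w * np.vstack(series)).sum(axis=0) / w.sum()

        # The difference series target["anom"] - regional_expectation(...) is the kind
        # of thing a breakpoint search would then scan for abrupt shifts.
        ```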

      • Brandon, Kriging is superior to just long range interpolation, but as stations drop out it ends up being about the same. My biggest issue is where the nearest neighbors are coastal or on the other side of a “weather” boundary. Georgia is a good example because it has both coastal and mountain neighbors so it seems to be adjusted more by interpolation than standard station adjustments.

        Georgia btw had a huge land use change with tree farming as did most of the southeast. Coastal and mountain stations would have less of that impact so they seem to negate what should be obvious land use influence.

        Since BEST’s adjustment method is different and tends to verify the others, I doubt there is much more that can be done there. The anomaly baseline is an issue, but mainly for the “warmest month ever” stuff.

      • Brandon & Mosher,

        On the “global” versus “local” subject. You have climate regions. Georgia and the Carolinas have coastal, Piedmont and mountain climate regions so like to like interpolation or kriging would produce a better local climate reconstruction. The plains should be weighted to plains climate etc. That isn’t all that important for “global”. So yes, BEST is actually “globally” oriented and should not be expected to reproduce state climate. That is what state climatologists are for.

      • captdallas2 0.8 +/- 0.2:

        Brandon, Kriging is superior to just long range interpolation, but as stations drop out it ends up being about the same. My biggest issue is where the nearest neighbors are coastal or on the other side of a “weather” boundary.

        Agreed. I just don’t think that’s as much of an issue right now due to the figures showing the amount of spatial smearing caused by BEST’s homogenization. The difference in spatial resolution between its results with and without their “empirical breakpoint” calculations is enormous even though the same kriging process is used in both cases.

        I think BEST’s “empirical breakpoint” calculations have so much influence they drown out most of the concerns about the kriging process. I’d definitely be interested in looking more closely at the kriging process and seeing if it could be improved, but I think any gains we could get in improving it would pale in comparison to those we could get in improving (or even just removing) the “empirical breakpoint” step.

        Georgia btw had a huge land use change with tree farming as did most of the southeast. Coastal and mountain stations would have less of that impact so they seem to negate what should be obvious land use influence.

        It would be interesting if there were two competing data issues in that area which mostly cancelled out. In such a situation, what would a breakpoint algorithm do?

        Since BESTs adjustment method is different and tends to verify the others, I doubt there is much more that can be done there.

        I think BEST’s adjustment method could be greatly improved, and I think that would greatly improve the BEST results for sub-global scales. On a global scale, I don’t think it’ll make that large of a difference. I still think it matters though.

        I don’t think problems with BEST’s work are going to completely overturn anything, but BEST portrays its results as being accurate to less than a tenth of a degree. BEST released a report for the media on 2014 temperatures, paying attention to variance of less than 1%. If variance that small is enough to go to the media, I’d say it is certainly worth examining the adjustments they perform which increase their results by ~20%.

      • captdallas2 0.8 +/- 0.2:

        On the “global” versus “local” subject. You have climate regions. Georgia and the Carolinas have coastal, Piedmont and mountain climate regions so like to like interpolation or kriging would produce a better local climate reconstruction. The plains should be weighted to plains climate etc. That isn’t all that important for “global”. So yes, BEST is actually “globally” oriented and should not be expected to reproduce state climate. That is what state climatologists are for.

        My problem with this argument is that’s not how BEST portrays their work. They release their results in data files with ~100km x ~100km grid cells. They’ve talked about wanting to get the resolution of those files down to ~25km grid cells. I don’t see how to square that with the idea BEST is globally oriented. If BEST is globally oriented, why is it giving out results for such local scales?

        Similarly, BEST’s website has a feature which lets you look up the trend for individual regions, or even individual cities. You can even pull up the results for any part of the planet by clicking on a map. Why would that be true if BEST is globally oriented?

        If BEST wants to be globally oriented, I’m okay with that. What I’m not okay with is encouraging people to look at BEST’s results on a fine, local scale then turning around and saying people shouldn’t use BEST’s results on anything other than a global scale.

        Though to be honest, I can’t even see why we’d care about BEST if it were only supposed to be used at a global scale.

      • Brandon, it isn’t what BEST wants to be or thinks they are; it is that they are limited to what they have.

        That is something close to the “raw” data for the four southeastern Atlantic states. Georgia and SC changed from huge cotton, tobacco and rice states to tree farms, peaches with more limited cotton and tobacco, still a little bit of rice thankfully. Florida rerouted rivers and installed drainage and moved to cattle, cane and potatoes along with planted pines and oranges. The group that should resolve local climate history issues would be the state climatologist and department of agriculture.

        btw, you don’t know what hot is until you are chopping cotton or pulling tobacco in the summer.

      • Brandon, I have another comment in moderation, but here is a quick comparison. Using CRUTS as a “raw” approximation.

                    SC      GA      NC      FL
        Trends
          CRU TS    0.14   -0.28    0.42    0.68
          BEST      0.73    0.65    0.82    0.76
          GISS 250  0.42    0.06    0.55    0.71

        Adjustments from “raw”
          BEST      0.59    0.93    0.40    0.08
          GISS 250  0.28    0.34    0.13    0.03

        If GISS250 and BEST station adjustments are about the same, it’s the interpolation causing most of the change in trends. The most “normal” climate for long range interpolation would be coastal, which also has greater station density, especially long term. Note how Florida has almost no correction, and NC, with less coastal exposure, is less than SC and GA.

        “Globally” it means nothing, just like Paraguay, Alice Springs and New Zealand. Local is a different matter.
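
        For anyone who wants to check the arithmetic behind tables like the one above, the “adjustments from raw” block is just each product’s trend minus the trend of whatever series is being treated as raw (here CRU TS); a quick sketch with the Georgia column as the example (the numbers are copied from the table, nothing else is implied):

        ```python
        # Georgia trends from the table above (units as in the source series).
        trend = {"CRUTS": -0.28, "BEST": 0.65, "GISS250": 0.06}

        # "Adjustment from raw" = product trend minus the CRU TS ("raw") trend.
        for product in ("BEST", "GISS250"):
            print(product, round(trend[product] - trend["CRUTS"], 2))
        # BEST 0.93, GISS250 0.34 -- matching the second block of the table.
        ```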

      • Why include data you know to be unsuitable.
        You’re still wrong. Officially in my ignore bucket.

      • Steven Mosher, “Why include data you know to be unsuitable.
        You’re still wrong. Officially in my ignore bucket.”

        Whatever. CRUTS is looking for things like first frost and extremes which would be important for a local climate record and Best tends to over smooth those kinds of events. While best warms Georgia, it cools the Bahamas and Cuba, while it leaves Florida and New York about the same. that appears to make CRUTS3.22 better in most cases for a “local” climate and BEST better for a “Global” climate.

        As far as “global” absolute temperature goes, BEST is cooler than CRUTS3.22 mainly because of the Antarctic which is at an average elevation of 2000+ meters. If you use potential temperature they are about the same.

  82. Robert Rohde, Zeke Hausfather, Steve Mosher – I am pleased you are presenting this in-depth analysis and discussion. However, when will you (and BEST) perform an equivalent analysis of maximum and minimum temperature trends?

    Regards Roger Sr.

    • The data is there. The code is there. And there is a joint paper probably coming out on a related issue. That’s a Rohde effort with other people.

      We have to choose between doing science and debunking lies.
      It’s not always an easy choice.

      • “Debunking lies” – do you mean the lie of the Rutherglen record in BEST that only exists from 1965? Oh no, not that lie. You mean the “lies” of a couple of opinion columnists who didn’t even mention BEST. Research harder, comment less, mate.

      • Well, when you do not address Homewood’s measured, factual discussion of GISS, but rather defend BEST globally rather than in the two regions he specifically analyzed in detail using only GISS’ own publicly available info, my opinion is that we would be better off with you doing neither, just calming down. Want to be useful? Show the BEST results for those two regions compared to GISS. I would really like to know that, rather than the general global stuff.

        As for Rutherglen, see upthread. BEST data starts in 1965, claims two recent station moves, and imputes a 1.98C century rise. BOM Acorn at least starts in 1913. It claims a move in the 1970s. Rutherglen is a major rural agricultural research station, same location, Stevenson Screen from the beginning, always well maintained, no moves–per eye witness researchers and the BOM metadata. BEST does not even get the raw data and metadata right. Now, one example cannot be generalized to conclusions about temperature trends. But it sure can point out potential quality issues in the BEST automatic ingestion and analysis algorithms.

      • Shifting the burden to me.
        Go get the PHA code.

        Our approach makes a similar adjustment.

      • Hide. Delingpole linked to Shub, who discussed us.
        Watts pictured us and linked to the whole discussion.
        Drudge Report and Morano ran quotes that this was criminal, pulling up a years-old quote.
        My users ask me what the hell.

      • Yes, shifting the burden to you. We point out that BEST has Rutherglen ingestion and metadata both wrong. It is not our responsibility to fix BEST or explain the goof. It is yours. Problem is, BEST goofs like 151882 or previously discussed 166900 do not inspire confidence overall despite your impassioned arguments. All your goofs just wash out? Show me two coolings equal to those two warmings, please. In roughly the same regions since you have what, 44000 something stations? I have not been able to find them. And looked. Africa offsets US? Surely you jest!
        As for NCDC PHA, I read it. Published 2007, IIRC. But looking at what it has done, something isn’t right. And in undocumented ways PHA has gotten worse since. Graphical evidence is in the book essay. I think there are two underlying logical presumption flaws. 1. Scope of regional expectation, core to BEST also. Warming bias from UHI. 2. Menne stitching. Warming bias from the presumption that last is best. Please respond with new counters, not, for example, with a reference to Menne’s 2009 paper, because there is the 2014 observational refutation paper to that cited in my book footnote on all this.
        I make neither fraud nor criminal allegations – although at this point, with the table stakes, those cannot be completely excluded. It is most likely just confirmation bias blinding people to subtle logical flaws. Although with Gavin and the GISS 2014 PR, I am inclined to be much less forgiving.

      • Wrong, Rud.
        Read Zeke’s paper.

        And lastly people want explanations of adjustments.

        There is no human making decisions to justify or explain. The algorithm is the explanation.
        When you show that the algorithm moved the mean answer away from the truth in a systematic way,
        then you have science.

      • “There is no human making decisions to justify or explain. The algorithm is the explanation.”

        Who decided to use the algorithm to adjust the data?

      • Rud
        “Yes, shifting the burden to you. We point out that BEST has Rutherglen ingestion and metadata both wrong”

        Actually, you haven’t.

  83. For all their self-aggrandizing, the data manipulators of the global warming movement have become the Brian Williams of science.

  84. If they were serious, they’d clean up the glaring quality problems.

  85. Pingback: If You’re Not Perfect, You Don’t Matter | Izuru

  86. Robert Rohde, Zeke Hausfather, Steve Mosher

    I also have this question based on the long term record at the Blue Hill Meteorological Observatory. They have an informative set of analyses of long term trends – http://bluehill.org/observatory/2014/02/graphs-of-annual-blue-hill-observatory-climate-data/

    Among their findings is a warming trend at this relatively pristine site. However, they show a remarkable long-term trend in a reduction of wind speeds – http://www.bluehill.org/climate/annwind.gif

    This implies, perhaps, growing trees? If so, that by itself will alter the temperature trend. How does BEST (and others) account for this type of effect? It certainly is not likely to be ferreted out in the homogenization.

    Roger Sr.

    • Roger

      Unless the full and evolving circumstances of each station are examined, the resultant data is little more than anecdotal.

      Camuffo, Jones and others were paid 7 million dollars to review seven historic European temperature records in the EU-funded ‘IMPROVE’ project.

      http://link.springer.com/book/10.1007%2F978-94-010-0371-1

      The book goes into minute detail and basically revises the original data. Circumstances change over the years with regards to buildings, observers, instrumentation, location etc and to derive the correct figures over many decades or centuries requires a certain amount of investigation.

      Tonyb

      • Tony, so long as the Euro records acknowledge it was way hotter in the Roman period when Hannibal was taking his elephants over the Alps into Italy, I’ll be okay with that. Cos goodness knows no one could get one midget elephant over the Alps today, much less a bunch of them like Hannibal and his Carthaginian crew. What I cannot stomach is the ignorance of history of most of climate science and, as demonstrated by Mosher ably in this comment thread, the utter contempt for locality and history when it is presented to them but disagrees with their preconceptions. It’s taken til 2014 for PAGES2K, the sort of BEST of paleoclimate, to quietly recognise the medieval warm period existed, back from Mann’s patently absurd hockey stick of flat past temps. Looking forward to your next opus on temps.

      • Cos goodness knows no one could get one midget elephant over the Alps today much less a bunch of them like hannibal and his Carthaginian crew.

        An interesting change from Fahrenheit and Centigrade units of temperature: n Hannibals is the Alpine temperature at which a Carthaginian general can get n elephants over the Alps. :)

        You can read about what geologists have to say about all this at

        http://www.earthmagazine.org/article/hannibals-trail-clues-are-geology

        The following passage is particularly relevant.

        “Polybius described how snow and ice made the climb down the mountain pass into Italy a dangerous, slippery descent. Soldiers’ feet sunk in a fresh layer of snow and then slid in layers of icy compacted snow from the previous winter, called firn. Climate studies show that the climate in the Alps at the time of Hannibal’s march was similar to the climate today, Mahaney says, so one way to test mountain passes along the proposed routes is to look for firn. It’s only present at one of the potential passes — Col de la Traversette — he says. At nearly 3,000 meters high, the Traversette has a microclimate that keeps snow from completely melting during the summer. If the Carthaginians were indeed sliding down the slopes of an alpine pass, then it must have been this one, Mahaney says.”

        The crossing was in fall of 218 BC. Although close to the onset of winter, Hannibal preferred this to the following spring so as to give the Romans less time to prepare.

      • Vaughan

        The route proposed in your link was dismissed as unlikely half a century ago. Professor Hunt has spent many decades authenticating a route that does not follow the one you describe. It is mentioned in your link; I am surprised you did not mention it.

        As for the climate of the times, current research believes the glaciers then were smaller than today, which would fit in with Hannibal’s ability to cross what was already very hostile territory and would have been impossible with copious amounts of snow and ice.

        http://www.spiegel.de/international/spiegel/the-coming-and-going-of-glaciers-a-new-alpine-melt-theory-a-357366.html

        tonyb

      • Elephante, gentille elephante…

        @tonyb: Professor Hunt has spent many decades authenticating a route that does not follow the one you describe.

        Two decades, to be precise (“since 1994”).

        [That route] is mentioned in your link, I am surprised you did not mention it.

        My link mentioned two routes, the one favoured by my Stanford colleague Patrick Hunt, and that preferred by York University’s Bill Mahaney. I am surprised you did not mention the latter.

        (I should mention in passing that two routes hardly exhausts the possibilities. Altogether six passes have been proposed at different times for Hannibal’s route: Great St. Bernard, Little St. Bernard, Mont Cenis, Mont Genevre, col de la Seigne, and Col de la Traversette.)

        If two distinguished professors, both of whom have traversed these routes in person, cannot agree on which one Hannibal is more likely to have followed, what chance do you or I have to resolve this ongoing puzzle?

        I’m also surprised you did not take note of the following passage from my link:

        Modern historians know the timeline and general trail of this march based on the writings of two ancient historians: Polybius, a Greek born in roughly 200 B.C., and Livy, a Roman born in 59 B.C. Hannibal researchers must place their complete faith in the work of these men because no other ancient texts regarding Hannibal’s march have survived. On details where Polybius and Livy differ, historians tend to defer to Polybius because he actually traveled through the alpine terrain that Hannibal covered, Hunt says.

        This surely should make Hunt fans Polybius fans.

        Now your source says “Hannibal probably never saw a single big chunk of ice when he was crossing the Alps with his army.” But no actual probability. 99%? 1%?

        The most reliable sources, the Greeks Sosilos (Hannibal’s teacher and biographer), Silenos (a war correspondent of the time), and Polybius (who retraced Hannibal’s route 60 years later), contradict this. Polybius gives perhaps the fullest account of Hannibal’s 15 days on the Alps prior to the descent into Italy: “It was already October and snow was falling on the summit of the pass, making the descent even more treacherous. Upon the hardened ice of the previous year’s fall the soldiers and animals alike slid and foundered in the fresh snow.”

        “Never saw a single big chunk of ice”? Give me a break, Tony.

        Unfortunately Polybius gave too little detail for modern readers to reconstruct Hannibal’s route. Today there are ongoing disagreements about the route, and a huge interest in general in the crossing.

        Incidentally I have great respect for California professors, not excepting Bruce Franklin (whom I watched walk right outside my office window in February 1971 accompanying students bent on destroying the computer center). But to your “Hunt spent many decades authenticating” I have to point out that Linus Pauling spent even more decades authenticating Vitamin C as a cure for the common cold. His cure also worked for me, but had I not eventually realized it was the water Pauling was using to down his ever-increasing dosages that was doing the trick, I’d still be as sold on Pauling as you seem to be on Hunt.

        Give Mahaney a chance.

      • Vaughan

        Here is Professor Hunt’s impressive CV

        http://www.patrickhunt.net/arch/arch.html

        Of course writing a lot doesn’t automatically make him right, but he has a long and distinguished record. A few years ago I did some research, together with good old Max Anacker, and as far as practical, without it being a proper expedition, loosely traced a couple of the routes. Unless he stuffed the elephants in the cable cars, whichever way Hannibal went would have been extremely difficult.

        Research appears to indicate the glaciers in the Roman era were rather limited, but notwithstanding that, you would want to avoid some of the highest routes as snow can occur at any time. What might have been possible one year may not have been possible 60 years later as the climate changed.

        We can only hope that as more research comes to the fore the likely route becomes more apparent.

        tonyb

      • @me: Give Mahaney a chance.

        @Tonyb: Here is Professor Hunt’s impressive CV

        Give Mahaney a chance.

        he has a long and distinguished record.

        Not sure why length counts, but if it does then Mahaney’s is twice as long.

        Research appears to indicate Roman glaciers were rather limited at the time, but notwithstanding that you would want to avoid some of the highest routes as snow can occur at any time.

        Yes, that’s consistent with Hunt’s choice of a low-altitude pass. But not even Napoleon found that reason convincing.

        Take a look at this book.

    • It certainly is not likely to be ferreted out in the homogenization.

      well we DID

      RAW trend 1.58
      Adjusted 1.19

      read it and weep

      http://berkeleyearth.lbl.gov/stations/174146

    • Surely the first assumption should be that they have measured environmental wind speed properly. It looks like a decrease of about 0.02 m/s/year. That is not so out of line with these numbers.

    • Roger,

      Trees were a reason to move the measurement site at KNMI in De Bilt (DB) to a slightly different location. There has been a very detailed three-year field experiment to characterize the effect of a possible move, which led to the following report.

      http://www.knmi.nl/publications/fulltexts/hisklim7.pdf

      “The results show that a large tree barrier in the vicinity of the DB260 has a significant effect on the operationally observed temperatures. Compared to more open locations at the KNMI terrain, DB260 shows higher maximum temperatures and lower minimum temperatures. In the summer half year the daily maximum temperatures are on average 0.28°C higher than those for the most open site and the daily minimum temperatures are on average 0.48°C lower. Individual daily differences may, however, be much larger.”

      And this is one of the best maintained sites in the world for which there is detailed information on what has happened and changed in the past.

      Surface temperature measurement sites were never meant to monitor climate, and as such were never designed and maintained for that purpose. Hence, there will always be a need for homogenization, but the question is how to evaluate the quality of the homogenization. There are different homogenization methods and I’m sure people will come up with new methods; they will be evaluated and they will be compared. But I’m pretty sure we’ll never really solve the issue.

      Cheers,

  87. Judith.. No fair counting all my comments against my total…

    please adjust the counting algorithm

    • Not sure why some of yours are landing in moderation; the counting algorithm doesn’t seem to work. I manually put people in moderation if needed (I have obviously not put you in moderation).

  88. Why is it that these global temperature posts devolve into so much BS?

    • Because if the temperature records are untrustworthy, then the CAGW conclusions drawn from them are also.
      Frankly, this post is a tempest in a teapot. Weather stations were never intended to provide accurate long range climate data. The argument above is only about how fit for purpose they can be made. Warmunists think very (they have to, in order to banish the uncertainty monster and maintain that their science is settled to unbelievable precision). Skeptics, not so much.
      The bigger picture is simpler. The pause has falsified CMIP5 by the modelers’ own pre-established criteria. Warmunist efforts to deny this continue to go down in flames, most recently Marotzke. Observational sensitivity is about half what the IPCC has asserted, having never deviated from the initial Charney estimate of 1988. SLR is not accelerating. The golden toad of Costa Rica was done in by chytridiomycosis, not CAGW. Polar bears are thriving… The whole CAGW thing is being busted just in time for COP21. Warmunists will attack each and every such observation until then out of desperation, since defunding and job losses loom thereafter.

      • As we used to say Q f–cking ED!

      • One problem that continues to persist is the reporting by the MSM and amplification by Hollywood, teachers, the green mob/blob, etc. Reporting of things like the “hottest year ever” is pronounced with little to no context, like revealing a record-shattering difference of .02 degrees, making it indistinguishable from several other years. Had Delingpole et al reported a finding that supported the cause, it would have been reported as more evidence that urgent drastic measures are required, and politicians would use it as one more weapon in their arsenal against the fossil fuel industry and our well being. Hunger games anyone?

  89. Not one piece of this response deals with the issue.

    Why is the entire Paraguay temperature record adjusted (or should I say reversed)? Every surrounding temperature reading recorded cooling until after adjustments.

    • Wrong.

    • It’s simple. No human being adjusted the temperature.
      It’s done by an algorithm.
      The algorithm is the explanation.
      You test the algorithm by looking at its total behavior.
      Does it move answers towards the truth?

      In double blind tests these algorithms do.
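
      The structure of that kind of test can be sketched in a few lines (a toy version only – the planted biases, the crude pairwise adjustment and the network size are all made up, and this is nothing like the published benchmarking code): plant known inhomogeneities in synthetic stations, run an adjustment, and ask whether the network mean ends up closer to the known truth.

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      n_stations, n_years = 40, 100
      t = np.arange(n_years)
      truth = 0.008 * t                              # the "true" regional signal

      # Synthetic raw records: truth + noise + a planted step in half the stations.
      raw = truth + rng.normal(0, 0.2, (n_stations, n_years))
      for i in range(0, n_stations, 2):
          raw[i, rng.integers(20, 80):] += rng.normal(0, 0.5)   # known inhomogeneity

      def crude_adjust(x, ref):
          """Align a station to the network reference across its largest apparent break."""
          d = x - ref
          smooth = np.convolve(d, np.ones(5) / 5, "same")
          k = int(np.argmax(np.abs(np.diff(smooth)))) + 1        # crude break location
          out = x.copy()
          out[k:] -= d[k:].mean() - d[:k].mean()
          return out

      ref = raw.mean(axis=0)
      adj = np.array([crude_adjust(x, ref) for x in raw])

      err_raw = np.abs(raw.mean(axis=0) - truth).mean()
      err_adj = np.abs(adj.mean(axis=0) - truth).mean()
      print("mean abs error of network mean -- raw: %.3f  adjusted: %.3f" % (err_raw, err_adj))
      # The blind tests ask whether the second number is systematically smaller than
      # the first when the planted errors are ones the algorithm was not told about.
      ```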

      • “It’s simple. No human being adjusted the” egg shell. “It’s done by an algorithm.”

        The integrity of the data (internal structure of the egg) is protected?

      • Curious George

        Long live the Saint Algorithm.

        Can we get an interview with it?

      • Mr. Mosher, this is a real thing of beauty:

        ” No human being adjusted the temperature.
        It’s done by an algorithm.”

        No human ever shot anyone.
        It’s done by a gun.

  90. A few things from the above (and beyond.)

    # Regarding the imprecision, the unknowns of the variable,
    regional data, B,H and M appear ter have done as much as
    anyone could with what’s available.

    # As Tony Brown observes, ‘unless the full and evolving
    circumstances of each station is examined, the resultant
    data is little more than anecdotal.’

    I think it was Doc Martyn who suggested a testing procedure.

    # And then there’s OZ BOM suss ACORN temperature
    adjustments.

    # Not so much ter go on fer cli-sci doomsday predictions
    from the data and costly, costly policy initiatives in the
    billions if not the trillions, that emasculate economies
    and make them less adaptable to black swan events.

  91. To Zeke and Mosher. Take a look at the two Stillwater, OK twin sites in the new USCRN. One is very close to town, one is a few kilometers further out in the rural area. The station nearest town consistently reads 1 degree F warmer for an extended period. Could this be the positive evidence of UHI?

    • Hi Dale,

      From a climatic viewpoint, we don’t really care. If they both rose 0.8C over a century, that’s what counts and that’s what BEST (and others) try to measure.

      UHI is real. Anthony Watts did science a favor by getting his volunteers to catalogue the state of affairs with US temp stations. What BEST did was try to address many skeptic complaints.

      UHI is real, but not significant at a global scale. Sadly, so is the 0.8C rise in global surface temperatures in the past century. What caused it, what the effects will be… that’s for different areas of science.
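
      To make the “that’s what counts” point concrete (toy numbers only, nothing to do with the actual Stillwater records): a constant urban offset changes the level of a series but not its trend, so it only biases trend estimates if the urban influence itself grows over time.

      ```python
      import numpy as np

      years = np.arange(1915, 2015)
      rural = 10.0 + 0.008 * (years - years[0])   # 0.8 C rise over a century
      town = rural + 0.6                          # a constant ~1 F warm offset near town

      print(np.polyfit(years, rural, 1)[0])       # 0.008
      print(np.polyfit(years, town, 1)[0])        # 0.008 -- identical trend despite the offset
      ```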

      • thomaswfuller2, I think UHI is an interesting problem. I’ve stated my views on it a number of times, but something which stands out to me is a lot of the tests used to try to find it are bad. Oddly enough, I wrote a quick post providing an example a few hours ago. I’ll quote the conclusion:

        BEST homogenized rural and non-rural stations then found little difference between the two. Rather than saying, “Oh, that’s what homogenization does,” it said, “Clearly, UHI isn’t a problem for our results.”

        I don’t have any particular views on how large an effect UHI has on the global record (other than it’s not the main cause of the warming we see), but I can’t accept tests like that as being dispositive. Finding little difference between rural and non-rural stations after they’ve been homogenized with one another does not show the UHI effect has been removed from the BEST data set. It could just mean BEST homogenizes its data so much the UHI signal cannot be extracted.

        There’s been a lot of good work done on the UHI effect. I just don’t think that translates into knowing what effect UHI has on our modern temperature records. I’m confident we know UHI isn’t the cause of all the warming we’ve observed or anything like that, but I see little reason to believe UHI’s effect is non-meaningful.

      • Your UHI post is interesting, Brandon. It would be interesting to see unadjusted rural data compared with unadjusted non-rural data. Or maybe not. But I am not convinced that UHI is a negligible factor.

      • nottawa rafter

        Don
        And even if a team of trusted skeptics determined an appropriate adjustment for UHI, in the final analysis it will remain an adjustment, not what really is happening with the temperature. I don’t distrust anyone in this massive attempt to have a reliable record of historical temperatures. I just don’t think the task is possible given the complexity and limitations on our knowledge.

      • Why “sadly”? Is a .8c rise a real problem or is it in fact beneficial?

      • Brandon,

        I’d argue that my paper took a much better approach at examining UHI and the effect of homogenization on it than the original Berkeley paper, though it was limited to the U.S.
        ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/hausfather-etal2013.pdf

      • Zeke Hausfather, I agree. I’m not convinced BEST’s paper on UHI tells me anything particularly useful, but I think the paper you link to is much more insightful. I’d like to see much of the same work done for the BEST data set.

      • Who’s thomas w fuller the 2nd? ;) :)

  92. I directly asked Zeke and Steven to provide this information at least a dozen times over the years.

    They never responded.

    Yet, when the adjustment issue started heating up in the last few weeks, suddenly they can produce the data of how much the adjustments are.

    I don’t think that represents a genuine response. It is just a reactionary response designed to deflect the very real criticisms of the adjustment algorithms. It certainly appears that there is a “thumb on the scale”.

    • Bill Illis, one of the points I’ve been making in my criticisms of BEST is simply that it doesn’t disclose things. Consider, for instance, BEST’s homogenization process. BEST homogenizes its data by breaking station records up when it believes there is a data issue. There is obviously uncertainty involved in where their algorithm finds these supposed breakpoints. Did BEST ever warn people it ignores this uncertainty? Not that I can find. People could figure it out by carefully reading some of what BEST has written, or by examining BEST’s code, but how many people will do that? Not many. Most people will assume the published uncertainty levels reflect the uncertainty in the results, not just the uncertainty in one part of the calculations which produce them.

      BEST has now admitted it ignores the uncertainty in the timings of its breakpoints:

      Hence the effect of uncertainties in the magnitude, but not the timing, of homogeneity adjustments is included in the overall statistical uncertainties.

      But BEST only did that because I baited them into it. As you point out, trying to just talk to BEST about concerns with their work doesn’t work. Trying to just ask simple questions, like, “What effects do your adjustments have?” doesn’t work. BEST feels comfortable not telling people relevant information. It’s only when put under pressure and forced to respond that they’ll disclose things people ought to have known all along.

      For instance, a week ago, how many people knew BEST’s adjustments increase the amount of warming it finds by ~20%? I’d wager not that many.
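
      One way to picture what propagating that timing uncertainty might look like (a rough sketch under the assumption that a break’s position is only known to within some window; none of this is BEST’s actual procedure): jitter the break date, redo the segment alignment each time, and look at the spread of the resulting trends.

      ```python
      import numpy as np

      rng = np.random.default_rng(3)
      t = np.arange(120)
      y = 0.01 * t + rng.normal(0, 0.2, t.size)
      y[70:] += 0.4                                    # one inhomogeneity, nominally at index 70

      def adjusted_trend(y, k):
          """Align the post-break segment to the pre-break mean, then fit a trend."""
          z = y.copy()
          z[k:] -= z[k:].mean() - z[:k].mean()
          return np.polyfit(t, z, 1)[0]

      nominal = 70
      trends = [adjusted_trend(y, nominal + rng.integers(-10, 11)) for _ in range(1000)]

      print("trend with the nominal break date: %.4f" % adjusted_trend(y, nominal))
      print("extra spread from timing alone:    %.4f (std dev)" % np.std(trends))
      # That spread is the component left out if only the magnitude of the
      # adjustment, and not its timing, is carried into the error bars.
      ```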

      • Matthew R Marler

        Brandon Shollenberger: But BEST only did that because I baited them into it. As you point out, trying to just talk to BEST about concerns with their work doesn’t work. Trying to just ask simple questions, like, “What effects do your adjustments have?” doesn’t work.

        My experience has been that the uncertainties in estimating the breakpoints do not contribute very much to the uncertainty in the estimates of the trends. However, paraphrasing what I wrote at WUWT, they really will not put the issue to rest until they publish the results that include the estimates of the uncertainties that include the uncertainties in the breakpoints.

      • Matthew R Marler:

        My experience has been that the uncertainties in estimating the breakpoints do not contribute very much to the uncertainty in the estimates of the trends.

        I think this would likely be true on a global scale. I’m not so sure about that for sub-global scales, given how much we now know BEST’s homogenization smears data. This post shows there would be a significant difference in regional trends if not for BEST’s empirical breakpoint estimations. It wouldn’t be a stretch to think including homogenization uncertainties would result in BEST’s regional uncertainties being significantly higher.

        However, paraphrasing what I wrote at WUWT, they really will not put the issue to rest until they publish the results that include the estimates of the uncertainties that include the uncertainties in the breakpoints.

        I agree. I don’t think the issue of uncertainty in BEST’s results will be put to rest so long as BEST ignores sources of uncertainty. This is especially true if BEST is going to publish reports for the media where they put a great deal of focus on the size of their uncertainty levels. If you tell people, “Look at how small our uncertainties are,” you have to expect some people might respond, “But we know they’re not actually that small.”

        There’s another relevant issue I find remarkable. I commented on it upthread. When BEST sought to examine the UHI effect, it compared non-rural and rural stations to see if it could find a difference. However, it did this comparison only after it homogenized those rural and non-rural stations so they’d be more alike. As I said in my post about this:

        BEST homogenized rural and non-rural stations then found little difference between the two. Rather than saying, “Oh, that’s what homogenization does,” it said, “Clearly, UHI isn’t a problem for our results.”

        The result is BEST’s conclusions about UHI depend entirely upon the assumption BEST’s homogenization removes the UHI signal rather than just smearing it around a lot. That assumption may be true. It may be false. We don’t know. BEST has never done anything to establish it.

        You can’t hope to put concerns about your methodology to rest with tests which ignore a significant portion of your methodology. It’s that simple.
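
        A toy illustration of why that test can be circular (entirely synthetic, and the “homogenization” here is just a stand-in nudge toward the network mean, not BEST’s method): give urban stations an extra trend, pull every station toward the network average, and the urban–rural difference shrinks even though the network mean still carries part of the urban trend.

        ```python
        import numpy as np

        rng = np.random.default_rng(2)
        n_each, n_years = 30, 60
        t = np.arange(n_years)

        base = 0.01 * t                                                   # shared background trend
        rural = base + rng.normal(0, 0.15, (n_each, n_years))
        urban = base + 0.01 * t + rng.normal(0, 0.15, (n_each, n_years))  # extra "UHI" trend

        network = np.vstack([rural, urban]).mean(axis=0)

        def nudge(stations, ref, alpha=0.8):
            """Stand-in 'homogenization': pull each station toward the network mean."""
            return (1 - alpha) * stations + alpha * ref

        rural_h, urban_h = nudge(rural, network), nudge(urban, network)

        def trend(x):
            return np.polyfit(t, x.mean(axis=0), 1)[0]

        print("urban minus rural trend, before: %.4f" % (trend(urban) - trend(rural)))
        print("urban minus rural trend, after:  %.4f" % (trend(urban_h) - trend(rural_h)))
        print("network trend (still half urban): %.4f" % trend(np.vstack([rural_h, urban_h])))
        # Finding "little difference" after homogenization is guaranteed by the nudge
        # itself; it does not show that the urban signal was removed from the average.
        ```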

      • Brandon,

        I followed your link to where you say Mosher took the bait at wuwt jan 29 2015 / 8:24pm
        His first sentence reads:
        “Zeke has posted the graph above that illustrates Brandon’s misdiagnosis of the change in uncertainty.”
        He then jokingly refers to the need for an editorial correction.

        In part of your reply you state:
        “I can accept I may have misdiagnosed the cause of this step change, but we’ve now had Zeke, Mosher and Rhode all make false claims in explaining it.”

        If what you say is true, and you intentionally deceived them, and you are in fact clever like a fox, are you going to now correct the record on your guest post there at WUWT? I also wonder what Watts thinks of being used like that?

      • ordvic:

        If what you say is true and you intentional deceived them and you are in fact cleaver like a fox are you going to now correct the record on your guest post there at wuwt?

        If you look at that post at WUWT, you’ll see it is not labeled as a guest post. That’s actually remarkable as WUWT has labeled a number of posts as being guest posts by me even though they weren’t. Regardless, I have no editorial control over that post. That post quotes an e-mail I sent, but that doesn’t mean I can somehow change it.

        Moreover, I publicly talked to a moderator on the site to ask for the post to be changed. They didn’t make the change (even though they said they would before I asked), but I have no control over that. I added a note to my post to address the issue, wrote a new post to explain it and asked the person who quoted me to add an update. That’s pretty much all I can do.

        I also wonder what Watts thinks of being used like that?

        No clue. I e-mailed him about the issue, but I didn’t get a response. I don’t know if he was busy or what.

      • Brandon,

        Well, that’s good you notified the moderator. The whole discussion is somewhat confusing to an outsider like me. Not so much the technical issues (although that adds to it) but simply that there are three things to keep track of and it’s easy to get mixed up. Regardless, I’m glad you managed to elicit a response from BEST as it brings out a more revealing picture of how they operate.

      • ordvic:

        Well that’s good you notified the moderator. The whole discussion is somewhat confusing to an outsider like me. Not so much the technical issues (although that adds to it) but simply that there is three things to keep track of and it’s easy to get mixed up.

        I certainly understand that. I don’t think it’d be so bad if people would try to respond directly to the points people raise. Unfortunately, a lot of commenters don’t. In fact, a lot of commenters seem to prefer going out of their way to avoid straightforward communication. I don’t get it myself.

        Regardless I’m glad you managed to elicit a response from Best as it brings out a more revealing picture of how they operate.

        Likewise. I don’t like how it happened. I don’t think BEST should have to be forced into disclosing things. I think this current post should have been written two years ago, without BEST being pushed into it by media pressure.

        But regardless of how or why any of this happened, it is definitely good more information is available now.

      • Actually, I requested the post of Mosher and Zeke. The request was triggered by an email from Bjorn Lomborg, wondering where Part II was from Zeke. I’m very pleased that the Berkeley group promptly replied with this post

      • Judith, I don’t doubt that, but the origin of this post does not disprove anything I said. It can’t. The information I refer to was disclosed before this post was published. I wrote about it before this post went up.

        What matters isn’t why we have a post on this site. What matters is why BEST has decided to discuss this issue at this point. I made my judgment on that based upon the public remarks by Steven Mosher where he explained BEST was considering writing a post about these issues in response to the accusations being flung around. Those remarks were made about a week ago.

        Given Mosher said BEST was considering writing a post about this issue in response to what was being said, I think it is perfectly reasonable to say BEST was pushed into it by media pressure.

        (One could suspect the reason BEST “promptly replied” is BEST had been planning to discuss this before you made your request.)

  93. Those who have put this post up must think we are idiots. They say that you are just as likely to find evidence of adjustments cooling the record as warming the record. The problem is that virtually all of the adjustments shown warm the record after ~1950 and cool the record prior to ~1900. This has a direct impact upon the rate of warming.

    In Australia they have gone one better. They have completely got rid of the temperature record prior to 1910 because it was embarrassingly warmer than they would have liked.

    • Ian Wilson,

      The reason that the Australian BOM got rid of the pre 1910 records is because the BOM employs none but the world’s finest meteorological minds.

      Anyone who uses any pre 1910 official records from anywhere in the world is a second rater, as it is well known that Australian meteorological records surpassed all others. This is still the case.

      It is likely that the BOM will consider abolishing all records more than one week old. This will both save record storage money, and allow the BOM to more frequently announce the hottest, coldest, wettest, windiest days on record.

      It will also avoid the necessity for endlessly adjusting, kriging, averaging, and interpolating in vain attempts to ascertain historical fact.

      What happened, happened. What will happen, will happen.

      If anyone believes they can divine the future, I wish them every success with their new enterprise. Many psychics and fortune tellers make quite a good living predicting the future. Climatology appears to have been created to make fortune tellers look respectable.

      I have noticed that the ground etc., appears to heat up during the day, and cools at night, setting a twenty four hour record high temperature, low temperature, or both, or neither. Wow, just wow! Where’s my Nobel Prize, eh?

      Live well and prosper,

      Mike Flynn.

  94. This is what Zeke etc. are doing: https://stevengoddard.wordpress.com/2015/02/11/basic-math-for-academics/#comments
    It’s really easy to understand.

  95. Thomas W Fuller
    There is a distortion because of UHI as the majority of stations are located in urban areas. The heat absorbed due to asphalt, concrete, vehicles, people, etc. causes the minimum temperature to remain high. Over the years, as the urban areas have grown, the minimum temps have increased because of the heat-retaining facilities.

    A properly sited rural station will not show such a large anomaly increase as the urban stations show. I believe there have been some studies that reveal this factor. The Central England Temperature has been affected by the urban sprawl. Likewise, similar effects have been noted in Australia.

    Do you know if a temperature anomaly of clean rural sites exist that might show this effect?

  96. NOAA admits they adjusted their raw temperature data to make the past cooler and the present warmer. http://www.ncdc.noaa.gov/monitoring-references/faq/temperature-monitoring.php “Adjustments largely account for the impact of instrument and siting changes but appear to leave a small overall residual negative (“cool”) bias in the adjusted USHCN version 2 CONUS average maximum temperature.”

    NOAA’s explanation for this does not hold water in my opinion. If the data is bogus why not just discard it? It’s pretty fishy when all the so-called bogus data is adjusted to cool the past and warm the present. Other examples include in-filling of missing data with warm data rather than cooler data from nearby stations. So much adjusted and missing temperature data says the past temperature data are so rotten that the records should all be tossed out. The state of Denmark comes to mind.

  97. Pingback: I am a Dirty Denier? | Izuru

  98. To the main players. I thank you for the fortitude to post and attempt to provide edification and transparency all while riding a “no win” scenario. Regards. I’ve learned something today also.

    To Faustino: “In a statement to The Australian, NOAA said it was understandable there was a lot of interest in the homogenisation changes. “Numerous peer-reviewed studies continue to find that NOAA’s temperature record is reliable,” NOAA spokesman Brady Phillips said. “To ensure accuracy of the record, scientists use peer-reviewed methods called homogenisation to adjust temperature readings to account for a variety of non-climate related effects such as changes in station location, changes in observation methods, changes in instrumentation such as thermometers, and the growth of urban heat islands that occur through time,” he said.

    All the while NOAA chose to ignore their own confidence level regarding 2014 being the “warmest year ever”.

    As FOMD might say………The world wonders………….why skeptics exist.

    Did we move “forward” today?

    Signed,
    A not so disinterested newcomer.

    • Danny

      The temperature issue has been made hugely complex and, like paleo proxy reconstructions such as the Hockey Stick, often relies on obscure, difficult-to-understand statistical analysis.

      As someone who does not see hoaxes and conspiracies round every corner I am seeking clear reassurance that the headline story is completely false-that is to say past temperatures have not been routinely cooled to fit an agenda. As of last night I did not find that reassurance. Mosh et al did not do themselves any favours by being unable to rebut the proposition in a clear manner

      This morning I will scroll from the bottom up and hope the numerous comments overnight will have made it clearer that the past has not been deliberately biased.

      tonyb

      • TonyB,

        This is why I described this thread as a “no-win”, and asked if we moved forward today. I see it as you do and as KenW does. The air is a bit clearer, but not many seem that happy with the results, although some do. The BEST analogy is they’ve put on the table what they’ve done and why. It’s a response to those who were unhappy before, and many are yet unhappy today.

        Did we move “forward” today? It’s not clear. But the methodology is here for all to see, criticize, and offer feedback. And it took guts to put it forth.

        As you, I see no conspiracy. Just hard working folk putting their work on the table. There is no way to declare a victor. Such as it is in the world of climate discussion. This is so important, yet still so unresolved.

        To those, who chose to put forth the effort, I can only offer my thanks. And I do, to you, also. As well as to all those who chose to respond. Maybe the key lies in the open discussion.

        Regards,

      • tonyb, I do not think for a moment people set out to cool past temperatures in a nefarious plot to exaggerate global warming. I still wouldn’t be surprised if BEST’s methodology cools past temperatures by an inappropriate amount. We don’t need hoaxes or conspiracies for people to make mistakes in their work which happen to coincide with their expectations. All we need is confirmation bias.

        Danny Thomas, I think we’ve definitely moved forward. A week ago, I had no idea BEST’s homogenization systematically cools the (pre-1900) past by a couple tenths of a degree. I think that’s something people ought to know. I’m glad BEST has acknowledged it (even if they keep insisting it doesn’t matter and the adjustment is “very little”). I wish they would have done so sooner.

        Similarly, we can now see BEST’s homogenization results in their results having far less spatial resolution. That is progress. I’ve pointed out the problem of limited spatial resolution in BEST’s results before, but I don’t know how many people saw it. More importantly, I wasn’t able to diagnose just what caused it. Now we know. Now we know it is caused not just by BEST’s homogenization, but by the portion of BEST’s homogenization which calculates “empirical breakpoints.”

        I think BEST is taking the right path in being more open about these things. I hope it keeps it up. I think doing so will help move things forward. In fact, I think it could have helped head off some of the current discussions if they would have done it two years ago.

      • BEST had already revealed all about their methodology, Danny. From day 1. I think you might be over-egging it to say it took guts for them to prepare a post for this site now when it’s nothing more than a weak rehash of material cobbled together for no other reason than to respond to the popular works of a couple of opinion journos in the UK who wrote about different data sets and Paul Homewood’s various revelations about the inadequacies of those datasets, which got picked up by Drudge and then Fox News in the US. The post doesn’t undermine Booker, Delingpole or Homewood.

        What I have learned from this long comment thread is that even the BEST crew aren’t above behaving like a climate rapid response team a la Schmidt and Mann. And I’ve learned that BEST uses flawed data and they don’t have any answers about that.

      • Hidethedecline,

        Re: Took guts. They had to know it would be “their turn in the barrel” to put this forth and come here and stand by it. I admire them doing so.

        Presuming TonyB is correct and a hand-written number from 1898 has been changed today is baffling. Surely a stop can be placed within the “algorithm” at the point at which the methodology changed for a site from a “historic record” to a “modern record”. Steve Mosher says one can’t go back and double check historic records and I agree. And it doesn’t matter if that historic record has been changed in any way, as it is then no longer historic nor a record. (In my personal model I’d change that my momma went from raising stupid children to not stupid ones.)

        I can appreciate your comments about what you’ve learned here. But what additionally interests me is what did they learn? BEST indicates they put together a “response” as a result of past feedback. They have now received additional feedback. It ends not here, but evolves (improves?), and we go forward.

        Schmidt and Mann do “climate” BEST does temps.

        Should the post “undermine Booker”? Would it matter, or would the next arrow come out of Booker’s quiver? BEST, it seems to me should do what they do, say what they do, and do this; over and over.

        Wondering out loud how this conversation might have gone had it been NCDC, NASA, MET, whomever?

      • “Schmidt and Mann do ‘climate’, BEST does temps.” Overbroad, too early, no-coffee effects, and just plain (there’s my momma raising stupid kids again). Please ignore. I won’t even try to reword what I was intending, as the over-egging I missed is what I’m wiping off my face.

      • I do not think it is those doing the work at BEST deliberately doing anything to push an agenda, but their results and other temp record publications get used in a very one-way method to promote the cause. It would be helpful if BEST were to place a disclaimer that their data and analysis is not fit for policy decisions on global energy initiatives, but they won’t do that.

      • Danny you and I aren’t going to see eye to eye about being able to check the historical record. We absolutely can check the record. BEST only goes back to 1850. Just check the newspapers, parliamentary records, sports results etc etc. Easy and getting easier by the day with online archives.

        More to the point, we actually are checking records, and that’s why this post was even prepared by the BEST guys. People with local historical knowledge are noticing that local history is being ignored or changed by climate scientists and that when challenged to explain themselves no actual foundational reason to ignore or change recorded local history is given by climate scientists.

        I saw a piece recently, can’t recall where, but it was a Norwegian fellow (I think) writing about how Arctic weather “data” in one of the data sets (GISS or whatever) completely adjusted away what actually happened locally. What had happened were some extreme events at two towns at the same time, very different, yet very close by.

        Climate scientists demonstrating their wilful ignorance of history, saw those two contemporaneous nearby very diverse events and concluded they could not have happened and adjusted them out of existence in their calculations. BEST and the rest can build all the pretty graphics they like but we know there’s at least some real garbage data in that sandwich.

    • Did we move “forward” today?
      By climate science standards, it cleared the air a bit.

      • John Smith (it's my real name)

        while this post brings some belated clarification about the manipulation of temp data for the creatures of the climate blogosphere
        remember, those in politics and media have no incentive to
        “move forward”
        their stock in trade is stark division
        an example
        Obama’s recent incendiary, and historically ignorant, remarks about the Crusades were no accident and intended to be divisive
        same with this issue
        the unconvinced will still be called “flat earthers”
        the President never mentioned climate for six years
        now he says “climate change” is as an immediate threat as terrorism
        this post clearly illustrates why climate skepticism is growing,
        and is in fact, a defensive response to increasing public doubt
        very little forward movement will take place until Gaia makes her position clear

  99. Willis Eschenbach

    Pretty pictures, but no numbers. How much did the homogenization change the trend of the global data? Of the US data? And where are the error bars?

    Sorry, but this is much more like a presentation to a high school class than anything to do with science.

    w.

  100. Pingback: The yellow press and climate change | Doug Craig

  101. Dear Professor Curry, I’m very curious your take on this. I do not understand the arguments in this thread well enough to form an opinion. As a result, who I trust and who I distrust becomes more important to me. I distrust BEST because I believe their funding would likely evaporate if they didn’t “toe the line.” Also, Steve Mosher’s anger makes me wonder if he has an ulterior motive (is he a Greenpeace earth-religion type?, is he anti-capitalist?, is he protecting his livelihood?,…). Of course, BEST could truly be doing solid scientific work regardless of who funds them. And perhaps Mosher is just an emotional type but still an even-handed scientist (I certainly see that Mosher’s opponents attack him, and if he is an emotional person he would likely strike back). I do trust you, so your opinion is important to me. Sadly, for much of the general public like me, our dinner debates will often end up in appeals to authority since the science is too complex. So can you add your authority here? It would be greatly appreciated.

    • Do I trust the CRU temperature record? No, I don’t think they’re competent.

      Do I trust the GISS temperature record? No, I don’t think they’re neutral.

      Do I trust the BEST temperature record? Yes, I looked at their papers and their code and they are competent statisticians. Yes, they call out nonsense regardless of its political color.

      • Thanks. This type of simple verdict is helpful to me.

      • Richard Tol, If I may ask your verdict on the follow up question,

        What do you say to ACO2 attribution? Does BEST prove it?

        With how much certainty do we know how much warming that 50% represents?

        With how much certainty do we know that ACO2 caused that 50%?

        How certain are you that you’re right?

      • Richard S.J. Tol

        @KenW
        BEST does not do much on attribution.

        There are two strands of attribution literature, fingerprinting and time-series. Both literatures find, in a large number of papers by authors who have not colluded, that the impact of human greenhouse gas emissions is highly significant and responsible for well over 50% of the observed warming.

        Although you can quibble with each individual paper, I don’t think it is reasonable to argue that greenhouse gas emissions did not have a substantial effect.

        At the same time, there is still a lot of debate and uncertainty about the size of the sensitivity of the climate system to greenhouse gas emissions. There is agreement that it is greater than zero, but no agreement about its size.

      • Stay tuned. There is much circular reasoning in both techniques used for attribution. I’m planning a post on this, but I don’t know when I will get to it.

      • A. “Although you can quibble with each individual paper…”

        B. “…I don’t think it is reasonable to argue that greenhouse gas emissions did not have a substantial effect.”

        Not sure how you necessarily get B from A. Doesn’t seem scientific.

        Andrew

      • Richard Tol,

        “BEST does not do much on attribution”.

        thanks for the explanation, I had understood something Steve Mosher wrote on a previous thread as indicating that it did. I went back and read harder, now I see that’s not what he meant.

        Thanks,
        Ken

    • TomJorgensen, I’m one of the most vocal critics of BEST, and I don’t think things like funding have any bearing on their work. I think reasons for their behavior are much more related to things like personal biases and preconceptions. For instance, Steven Mosher has issues with me. I think that causes him to not even try to understand the arguments I make. I suspect he’d behave quite differently if certain other people said what I said instead.

      The biggest issue I believe exists, however, is I think BEST wants to look good. You can see in this post BEST claims its adjustments are “minor” and have “very little” effect. Those adjustments increase the amount of warming they find by ~20%. It’s not clear why BEST thinks a change of ~20% is “very little.” What is clear is a change of that magnitude is something people ought to know about, yet BEST never bothered to tell them about it. BEST never bothered to explain to people, “This is what we do to the data, and this is what effects it has on our results.”

      A change of 20% doesn’t disprove global warming. It doesn’t mean we know global warming to be unimportant. It doesn’t even mean we need to be less concerned about global warming. I expect that’s why BEST didn’t bother to tell people about it. I expect that’s the same sort of reasoning behind them not disclosing other things about their work. They figure if something doesn’t change certain big picture conclusions, they shouldn’t tell people about it because it makes them look worse. And if something can make them look a little worse, it will be used as ammunition by people to make global warming concerns seem exaggerated.

      At least, that’s the best explanation I’ve come up with. I happen to know there are many other issues one can raise with BEST. I also happen to know doing so won’t lead to productive dialogue with the BEST team. Even when I know they know about a problem I want to discuss, I know it is unlikely they will discuss it if they can avoid it. I don’t think it’s because they want to cover things up. I think they’re just worried about giving people any reason to doubt anything.

      But in the end, it doesn’t really matter why people do what they do. There are issues with BEST’s work, BEST is not open or upfront about these issues, but global warming is still real and the planet’s temperatures have still gone up. Motives don’t change any of that.

      • A week ago, I had no idea BEST’s homogenization systematically cools the (pre-1900) past by a couple tenths of a degree.

        Given that the supposed greenhouse effect of CO2 depends on concentration changes well after pre-1900, at best (heh!) the effects of “BEST’s homogenization” is to offer a tiny bit more credence to the LIA, and interpretations of recent warming as part of the rebound from it.
        http://wwws3.eea.europa.eu/data-and-maps/figures/atmospheric-concentration-of-co2-ppm-1/image_xlarge

        You can see in this post BEST claims its adjustments are “minor” and have “very little” effect. Those adjustments increase the amount of warming they find by ~20%. It’s not clear why BEST thinks a change of ~20% is “very little.”

        Probably because there’s already good evidence for the LIA, and that’s all the difference in pre-1900 temp estimates matters to. Also consider that the confidence levels of any estimates of global temperature pre-1900 are very low. IIRC they’ve given confidence ranges for their work in various publications, although I don’t see any here.

        At least, that’s the best explanation I’ve come up with.

        Or perhaps they (esp. Mosher) think if you’re really interested you can grab the data (and code if you want) and duplicate the work yourself with the changes you think appropriate. I do know I’ve seen several mentions recently about “skeptics” not believing in the LIA.

      • richardcfromnz

        AK
        >”….the effects of “BEST’s homogenization” is to offer a tiny bit more credence to the LIA, and interpretations of recent warming as part of the rebound from it.”

        Interesting perspective AK. That hadn’t crossed my mind until reading it.

        If we consider the present as “fixed” as per the homogenization process due to better quality control, better sites, AWS, etc i.e. the reference level is the present, then adjustments that make the past cooler are in effect indicating that the LIA was cooler than we thought as you point out. That’s if we subscribe to the process being valid.

        This perspective reverses the warmist argument. In effect they’re asserting that the LIA was real, and was much cooler than the present.

      • AK:

        Probably because there’s already good evidence for the LIA, and that’s all the difference in pre-1900 temp estimates matters to.

        I don’t think one gets to argue something is small just because it happened in the past. If BEST wants to claim they don’t care that much about changes of this magnitude because they happened in the past, it can. I just don’t see how you can say something that increases your results by ~20% is small.

        Also, BEST calculates its baseline (climatology) over the 1900-2000 period. It is plausible this choice of period is at least partially responsible for these effects showing up as they do before 1900. We see a similar thing with how other temperature groups change their past temperatures to reflect changes in their data in recent times. You don’t see those groups arguing those changes are unimportant because they happened in the past. You see them acknowledge that the fact the changes manifest in the past, rather than in the present, is just an artifact of their methodology.

        Also consider that the confidence levels of any estimates of global temperature pre-1900 are very low. IIRC they’ve given confidence ranges for their work in various publications, although I don’t see any here.

        Interestingly, BEST’s uncertainty levels do not include uncertainty introduced by the timing of their breakpoints. That means BEST ignores the uncertainty introduced in the same step as these adjustments.

        Or perhaps they (esp. Mosher) think if you’re really interested you can grab the data (and code if you want) and duplicate the work yourself with the changes you think appropriate.

        That seems unlikely. Consider, for instance, I pointed out BEST doesn’t rerun its homogenization process when it calculates its uncertainty by using subsamples. I suggested this causes their uncertainties to be lower than they ought to be because it means uncertainty in their homogenization process is ignored.

        It turns out this was an issue BEST had already worked on. When BEST examined it, they found exactly what I said. They found not redoing their homogenization during the jackknife calculations causes their uncertainty levels to be biased low. They didn’t tell anyone this. They didn’t publish it anywhere. They didn’t include a warning on their web page, in their data files or in any of their papers.
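
        To see the mechanics of that point, here is a toy MATLAB sketch. It is not BEST’s code; the eight-way split, the made-up “homogenize” step and every number in it are stand-ins of my own. It only illustrates that when a data-dependent adjustment is computed once on the full dataset and then held fixed, the spread across jackknife subsamples no longer samples the variability that adjustment step introduces.

        % Toy illustration (not BEST code): jackknife spread with and without
        % re-running a data-dependent "homogenization" step on each subsample.
        rng(0);
        nStations = 400;
        truth  = 1.0;                                 % true regional trend (arbitrary units)
        trends = truth + 0.3*randn(nStations,1);      % raw per-station trend estimates

        % Stand-in "homogenization": shrink values toward a reference level.
        homogenize = @(x, ref) ref + 0.5*(x - ref);

        groups = mod((1:nStations)', 8) + 1;          % split stations into 8 groups

        refFull = median(trends);                     % reference from the FULL sample
        adjFull = homogenize(trends, refFull);        % adjustment done once, then held fixed
        estA = zeros(8,1);                            % jackknife estimates, fixed adjustment
        estB = zeros(8,1);                            % jackknife estimates, adjustment re-run
        for k = 1:8
            keep    = groups ~= k;                    % drop one-eighth of the stations
            estA(k) = mean(adjFull(keep));
            refSub  = median(trends(keep));           % reference recomputed on the subsample
            estB(k) = mean(homogenize(trends(keep), refSub));
        end

        fprintf('Jackknife spread, fixed adjustment : %.4f\n', std(estA));
        fprintf('Jackknife spread, re-run adjustment: %.4f\n', std(estB));
        % The fixed-adjustment variant typically shows the smaller spread, because
        % the uncertainty contributed by the adjustment step itself is never sampled.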

        Could other people work out how much of an effect this issue has? Sure. If they happened to realize BEST doesn’t redo its homogenization, something BEST never made clear, they could download the code, familiarize themselves with it and make changes. Mosher clocked the time it’d take to run the tests to find out what effect this has at several weeks of processor time.

        Maybe BEST thinks people who are really interested should spend weeks running code to test the effects of issues BEST has simply chosen not to talk about. That seems an incredibly weird belief though.

        And what if somebody wanted to check multiple issues? Are you suggesting BEST thinks people should have to wait months after discovering a set of potential issues to be able to discuss what effect they have? That’d be really weird.

      • richardcfromnz:

        If we consider the present as “fixed” as per the homogenization process due to better quality control, better sites, AWS, etc i.e. the reference level is the present, then adjustments that make the past cooler are in effect indicating that the LIA was cooler than we thought as you point out. That’s if we subscribe to the process being valid.

        One problem with this perspective is BEST hasn’t really shown what its adjustments do in the portion of the LIA it covers. The figures in this post only go back to 1850, approximately when the LIA is said to have ended.

        It would be interesting to see how the nearly 100 years of the BEST record which overlap with the LIA are affected by BEST’s adjustments. Maybe BEST could be convinced to show that.

      • “Given that the supposed greenhouse effect of CO2 depends on concentration changes well after pre-1900, at best (heh!) the effects of “BEST’s homogenization” is to offer a tiny bit more credence to the LIA, and interpretations of recent warming as part of the rebound from it.”

        AK,

        Let’s see if I can help you.

        The global temperature series is important ( impacts published science) in the following ways.

        1. It can be used in sensitivity studies. Take Lewis and Curry as an example. For their study they look at two periods to calculate delta T: the late 1800s and the present day. Adjustments might change delta T by a small amount. Sensitivity is related to dT/dF, and the uncertainty in dF swamps the calculation; dT is not a sensitive parameter. (A worked sketch follows below.)
        2. It can be used to test GCMs. Here the latest period matters most.
        3. It can be used to calibrate and validate reconstructions. I only know of one reconstruction that got different results by using raw data for a grid. Basically, changing temperatures by a couple of tenths here or there won’t make the MWP disappear or get warmer.
        4. Spectral studies: adjustments do nothing.

        In short, I don’t find any paper that would have to be retracted, or have its conclusions changed, by fiddling with the adjustments one more time.
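
        To put some illustrative numbers on point 1, here is a back-of-envelope MATLAB sketch. The values are round numbers I have invented, not Lewis and Curry’s published figures, and ocean heat uptake is ignored purely to keep the arithmetic short; the only point is that an adjustment-sized shift in delta T moves an energy-budget sensitivity estimate far less than the quoted spread in delta F does.

        % Back-of-envelope energy-budget sensitivity, S = F2x * dT / dF.
        % Round illustrative numbers only; ocean heat uptake ignored for brevity.
        F2x   = 3.7;                  % forcing from doubled CO2, W/m^2
        dT    = 0.80;                 % warming, late 1800s to present, K (illustrative)
        dF    = 2.0;                  % anthropogenic forcing change, W/m^2 (illustrative)
        dF_lo = 1.5;  dF_hi = 2.5;    % an illustrative forcing uncertainty range

        S       = F2x * dT / dF;                   % central estimate
        S_adjT  = F2x * (dT + 0.03) / dF;          % shift dT by an adjustment-sized 0.03 K
        S_range = F2x * dT ./ [dF_lo dF_hi];       % sweep the forcing range instead

        fprintf('S with dT = %.2f K          : %.2f K per doubling\n', dT, S);
        fprintf('S with dT shifted by 0.03 K : %.2f K per doubling\n', S_adjT);
        fprintf('S across the dF range       : %.2f to %.2f K per doubling\n', ...
                min(S_range), max(S_range));
        % The dT shift moves S by roughly 0.06 K; the forcing range moves it by
        % roughly 0.8 K, which is the sense in which dF dominates the calculation.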

        What did we set out to do?

        1. Skeptics complained about the station drop-out. We used all the data.
        2. Skeptics complained about the non-standard methods. We used kriging, which they suggested.
        3. Skeptics complained about combining stations (the NASA RSM method). We did what they suggested and what they had actually published (see Christy).
        4. They complained about humans applying adjustments in an unfair manner. We built and tested an algorithm.
        5. They wanted all the data. We gave it.
        6. They wanted the code. We gave them SVN.
        7. They suggested hiring critics. Did that.
        8. They suggested having professional statisticians. Did that.

        Now of course we get round two of objections.
        1. Prove the digital records match the paper records.
        2. Get the local field perfect.
        3. Explain GISS again.
        4. Provide all code changes you have ever made.
        5. Help me with MATLAB.
        6. Talk to us even though we really are not users of your data.
        7. Go place new thermometers all over the world to test your approach.
        8. Look for sawtooth patterns in stations.
        9. Explain why the algorithm does what it does in these 10,000 cases.

      • richardcfromnz

        AK, I see the CO2 graph begins 1750. Do you have one beginning earlier that corresponds to MWP, LIA, and present?

        That way we can see if CO2 is the driver of the MWP => LIA cooling and the LIA => present warming.

        Thnx.

      • AK, I see the CO2 graph begins 1750. Do you have one beginning earlier that corresponds to MWP, LIA, and present?

        That way we can see if CO2 is the driver of the MWP => LIA cooling and the LIA => present warming.


        From here. I can’t vouch for it, though.

        The CO2 records presented here are derived from three ice cores obtained at Law Dome, East Antarctica from 1987 to 1993. The Law Dome site satisfies many of the desirable characteristics of an ideal ice core site for atmospheric CO2 reconstructions including negligible melting of the ice sheet surface, low concentrations of impurities, regular stratigraphic layering undisturbed at the surface by wind or at depth by ice flow, and high snow accumulation rate. Further details on the site, drilling, and cores are provided in Etheridge et al. (1996), Etheridge and Wookey (1989), and Morgan et al (1997).

        Air bubbles were extracted using the “cheese grater” technique. Ice core samples weighing 500-1500 g were prepared by selecting crack-free ice and trimming away the outer 5-20 mm. Each sample was sealed in a polyethylene bag and cooled to -80°C before being placed in the extraction flask where it was evacuated and then ground to fine chips. The released air was dried cryogenically at -100°C and collected cryogenically in electropolished stainless steel “traps”, cooled to about -255°C. Further details on the extraction technique can be found in Etheridge et al. (1988 and 1992) and additional information on the ice and air sample handling are provided in Etheridge et al. (1996).

        The ice core air samples, ranging from about 50 to 150 ml standard temperature and pressure (STP), were measured for CO2 mixing ratio with a Carle 400 Series analytical gas chromatograph (GC). After separation on the GC columns, the CO2 was catalytically converted to methane before flame ionization detection. As many as three separate analysis were made on each ice core sample. Each sample injection to the GC was bracketed by calibration gas injections. CO2 mixing ratios were then found for each aliquot by multiplying the ratio of the sample peak area to calibration gas peak area (interpolated to the time of sample analysis) by the CO2 mixing ratio assigned to the calibration gas. The precision of analysis of the Law Dome ice core air samples was 0.2 ppm. For greater details on the experimental techniques used on the DE08, DE08-2, and DSS ice cores, please refer to Etheridge et al. (1996).

        The ice cores were dated by counting the annual layers in oxygen isotope ratio (δ18O in H2O), ice electroconductivity measurements (ECM), and hydrogen peroxide (H2O2) concentrations. For these three parameters, each core displayed clear, well-preserved seasonal cycles allowing a dating accuracy of ±2 years at 1805 A.D. for the three cores and ±10 years at 1350 A.D. for DSS.

        The enclosed air at any depth in the ice has a mean age, (aa), that is younger than the age of the host ice layer (ai), from which the air is extracted. The difference (δa) equals the time (Ts) for the ice layer to reach a depth (ds), where air becomes sealed in the pore space, minus the mean time (Td) for air to mix down the depth. The mean air age is thus

        aa = ai + δa = ai + Ts – Td

        where ages are dates A.D.

        Mixing of air from the ice sheet surface to the sealing depth is primarily by molecular diffusion. The rate of air mixing by diffusion in the firn decreases as the density increases and the open porosity decreases with depth. Etheridge et al. (1996) determined the sealing depth at DE08 to be 72 m where the age of the ice is 40±1 years; at DE08-2 to be 72 m depth and 40 years; and at DSS to be 66 m depth and 68 years. For more details on dating the Law Dome ice cores and sealing densities, please refer to Etheridge et al. (1996).

        Atmospheric carbon dioxide levels appear to have been constant at around 280 PPM between 1000 AD and 1800. Then, during the Industrial Revolution, carbon dioxide levels began a rapid rise.

        Problem is, Murry Salby has raised serious questions about the assumptions around diffusion (among other questions), but has not actually published the details of his theses or clear demonstrations of the work he’s done to prove them. There may be bureaucratic hooliganism involved; apparently a non-refundable airline ticket was cancelled so as to prevent him from returning from a conference in time to defend himself against the administrative action that led to his separation from Macquarie University. The details are cloudy, as is whether his research materials needed for publication were confiscated, and if so whether they have been returned.

        For the moment, I carry it as an open question. It’s not impossible that the people responsible may have sabotaged his work out of fear he had something even if he didn’t.

      • richardcfromnz

        AK, same CO2 series plotted against Moberg (2006):

        From here:
        http://wattsupwiththat.com/2012/12/07/a-brief-history-of-atmospheric-carbon-dioxide-record-breaking/

        The idea of CO2-as-primary-climate-driver fails over the period 1000 – 1750 (MWP – LIA) by that CO2 series and that temperature series.

      • Thanks for your input. Makes sense to me that personal biases and preconceptions, the desire to look good, and the desire to not spread needless doubt (needless in their minds) have their effect at BEST. I should point out that when I said I distrust BEST due to their funding, it’s not that I think they would actually conspire to ensure their funding. It would be more a case where the survival instinct channels thinking and actions so that one’s livelihood is protected – just human instinct. That said, doesn’t sound like that is going on at BEST.

      • Oh wow, that was a reply to: Brandon Shollenberger | February 11, 2015 at 4:06 am | Reply “TomJorgensen, I’m one of the most vocal critics of BEST…”

        I’m just learning how the system works here….

      • @richardcfromnz…

        The idea of CO2-as-primary-climate-driver fails over the period 1000 – 1750 (MWP – LIA) by that CO2 series and that temperature series.

        I don’t see how the standard notions of CO2 (e.g. Law Dome cores) could be reconciled with “CO2-as-primary-climate-driver” prior to the 20th century. And natural variation (and variation from other “forcings”) should probably be held responsible for prior temperature changes (pending investigation into Salby’s claims).

        Part of the problem is that almost every “proxy” we have for the pre-19th century produces some level of “smoothing” of decade-scale variation, so there’s no way we can know that variations in “global average surface temperature” didn’t make excursions similar to the 20th-century one in the past.
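
        As a toy MATLAB illustration of that smoothing point (the series, the 50-year excursion and the 100-year moving average are all invented; this is not any particular proxy):

        % Invented example: a decade-scale excursion largely disappears under the
        % effective smoothing of a low-resolution proxy.
        rng(1);
        years = 1000:1900;
        T = 0.1*randn(size(years));                      % background variability, deg C
        bump = (years >= 1400 & years < 1450);           % a 50-year, 0.6 C excursion
        T(bump) = T(bump) + 0.6;

        win = 100;                                       % crude 100-year smoothing window
        Tsmooth = conv(T, ones(1,win)/win, 'same');

        fprintf('Excursion height in the raw series      : %.2f C\n', max(T) - median(T));
        fprintf('Excursion height after 100-yr smoothing : %.2f C\n', max(Tsmooth) - median(Tsmooth));
        % The smoothed record keeps only a fraction of the excursion, which is why
        % short excursions are hard to rule in or out from such records.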

        Thing is, interpretation of the evidence is very dependent on one’s “priors”, and while thermodynamic considerations would incent a “prior” assumption of warming from increased CO2, recent (late 20th-21st century) studies in chaos theory and the general behavior of hyper-complex non-linear systems would incent a “prior” assumption of substantial unforced variation on all time-scales ranging from sub-annual to millennial.

        So both explanations are perfectly good, and there’s nothing in the current evidence to disqualify either.

    • TomJorgensen, Judith seems to make a point of staying out of this debate and I am not sure why. I asked her the same question several months back and her response was that “the issue is still not settled”.

      • Thanks. I see…. But actually, saying “the issue is still not settled” says quite a lot. Would be nice at the end of this debate to see some sort of recap like that. If not, no biggie. I’m going to go through all of these posts again to see if I can form some firmer opinions of my own. I’d like to feel somewhat informed before I go skiing next week with my cousin. He’s a big user of the term “denier” and sees no shades of grey.

    • (is he a Greenpeace earth-religion type?, is he anti-capitalist?, is he protecting his livelihood?,…) …

      That is just too precious.

      I trust BEST, GISS, and NOAA. I do not trust UAH and RSS on surface temperature, and RSS is correct to admit that series like BEST, GISS, and NOAA are more accurate there. I do not like HadCrut4, and I called its predecessor HadCrappy3.

  102. Compare the spatial maps for warming trends for 1900-2014—the empirical homogenization method is wildly different from the others.

    This should be a completely obvious statement of fact.

    It is also wildly different than that obtained with the other temperature series, which resemble the BEST metadata homogenization.

    It is also an obvious statement of fact that BEST publishes higher resolution spatial temperature maps than the other major series. For example, see GISTEMP (1200-km smoothing):

    Simple question—if BEST wants us to only concentrate on the global temperature series, why provide any spatial temperature maps, let alone higher resolution ones?

    Now, suppose hypothetically that we are supposed to trust the BEST spatial temperature map reconstructions. I can’t see any reason that doesn’t involve hilarity for why we aren’t supposed to trust the spatial maps, since they are providing us with the images, and provide them at higher resolution than other products do.

    But which one should we use?

    Should we trust the map that matches available historical data (e.g., evidence for a gradual cooling of the US SE), or should we use their empirical homogenization method, which appears to be generating seemingly novel results like South America is warming and cooling as a single body?

    I just don’t see any reason why the BEST guys can’t provide some straightforward guidance for what they think is the better product, and why.

    • Pasted the image for GISTEMP in the wrong place. It is supposed have read:

      It is also wildly different than that obtained with the other temperature series, which resemble the BEST metadata homogenization. For example, see GISTEMP (1200-km smoothing):

      Sorry for any confusion.

    • To make it easier to compare the GISTEMP with, here’s the BEST middle panel (metadata corrections only):

      • Zeke:

        It depends on how smooth the underlying trend fields actually are. Unfortunately, we don’t have any easily available ground truth to determine for sure.

        We do have data on the US SE. For example, we have direct evidence it has cooled because of the southern shift in orange and peach tree production.

        We have other lines of evidence to look at here, too, besides ground truth. For example, we can look at how your empirical homogenization algorithm works when there is sparse data, e.g., South America and Africa.

        Your code uses the default parameters:

        options.ScalpelEmpiricalMaxDistance = 2500; %km
        options.ScalpelEmpiricalMaxPairs = 300;
        options.ScalpelEmpiricalBestPairs = 25;

        Now, I will have to assume these values because they are the ones that come with your code, and you’ve not documented that you use a different value.

        (I mention this because Brandon’s 1960-2000 number comes from your code as well. It also appears the shift to 1900-2000 is a very new change. Also people shouldn’t get yelled at by you guys when you make undocumented changes to your code or set of run parameters, and they’ve assumed the default values.)

        Anyway, here’s what 2500 km looks like, centered on Paraguay:

        Given the number of stations you are going to accept for that distance, the opportunity for over smoothing is very good.

        As it stands right now, your empirical homogenization algorithm does not contain logic to prevent over smoothing.

        (To prevent “over smoothing” you first have to define what “optimal smoothing” is and provide threshold tests to achieve that. But this seems to be missing in your code.)
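
        To make those three parameters concrete, here is a rough MATLAB sketch of a distance-limited neighbor selection in a sparse network. The station layout, the ranking rule and my reading of what the parameter names mean are assumptions on my part; nothing below is taken from the Berkeley code other than the three default values quoted above.

        % Rough sketch (my assumptions, not Berkeley code): how many candidate
        % neighbors a 2500 km radius sweeps up around a station in a sparse network.
        rng(2);
        maxDist   = 2500;      % km  (ScalpelEmpiricalMaxDistance)
        maxPairs  = 300;       %     (ScalpelEmpiricalMaxPairs)
        bestPairs = 25;        %     (ScalpelEmpiricalBestPairs)

        target = [-25.3, -57.6];               % roughly Asuncion, Paraguay
        nSta   = 200;                          % invented sparse regional network
        lat    = -55 + 45*rand(nSta,1);        % stations scattered over South America
        lon    = -80 + 45*rand(nSta,1);

        % Great-circle (haversine) distance to the target, in km
        d2r  = pi/180;
        R    = 6371;
        dlat = d2r*(lat - target(1));
        dlon = d2r*(lon - target(2));
        h    = sin(dlat/2).^2 + cos(d2r*lat).*cos(d2r*target(1)).*sin(dlon/2).^2;
        dist = 2*R*asin(sqrt(h));

        inRange  = find(dist <= maxDist);              % everything inside the radius
        [~, ord] = sort(dist(inRange));                % stand-in ranking: by distance
        nCand    = min(numel(inRange), maxPairs);      % cap on candidate pairs
        nUsed    = min(nCand, bestPairs);              % pairs actually used
        chosen   = inRange(ord(1:nUsed));

        fprintf('Stations within %d km: %d (cap %d); pairs used: %d\n', ...
                maxDist, numel(inRange), maxPairs, nUsed);
        fprintf('Farthest neighbor used is %.0f km away\n', max(dist(chosen)));
        % If the nearest usable neighbors sit many hundreds of km away, a breakpoint
        % is being judged against stations with quite different local climates,
        % which is the over-smoothing worry described above.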

      • Just to clarify:

        Now, I will have to assume these values because they are the ones that come with your code, and you’ve not documented that you use a different value.

        (I mention this because Brandon’s 1960-2000 number comes from your code as well. It also appears the shift to 1900-2000 is a very new change. Also people shouldn’t get yelled at by you guys when you make undocumented changes to your code or set of run parameters, and they’ve assumed the default values.)

        The 1960-2010 (not 2000) number is for a baseline used in uncertainty calculations. I’m told the 1900-2000 period was used for estimating the planet’s climatology field though I haven’t verified that in the code and I didn’t see that said in the paper I was linked to which supposedly listed it. What I did find is BEST publishes a Readme file with its results which says:

        % climatology: For each grid cell, an estimate of the true surface
        % temperature for each month during the period January 1951 to December
        % 1980 reported in degrees C. For “LatLong1”, the dimensions are
        % latitude x longitude x month, where month has length 12 and corresponds
        % to January through December respectively. Hence the first month is an
        % estimated average for all Januarys from 1951 to 1980, the second month
        % is for all Februarys, etc. For the “EqualArea” case, the dimensions
        % are grid cell number x month.

        Which clearly says the climatology field provided is given as an estimate of the temperature between 1951 and 1980. One could be forgiven for interpreting this as indicating BEST calculates its climatology field over the 1951-1980 period.

        I imagine I could track down just what period BEST uses when estimating its climatology field, but I haven’t gotten around to examining their code to find out. I would have, but I spent some time rereading a paper which doesn’t specify the period, because I was told it does, and then got annoyed at having wasted my time.
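
        For anyone trying to follow why the choice of climatology/baseline period matters at all, here is a generic MATLAB sketch of an anomaly calculation. It is not Berkeley’s code; the series, the +0.1 C adjustment and the windows are invented. It only shows one way an adjustment made inside the baseline window can end up displaying itself as an offset in the years outside that window.

        % Generic baseline sketch (synthetic data, not Berkeley's code).
        yr     = 1850:2010;
        raw    = 0.005*(yr - 1850) + 0.1*randn(size(yr));   % synthetic warming series
        adjust = raw;
        adjust(yr >= 1950) = adjust(yr >= 1950) + 0.1;      % invented post-1950 adjustment

        base    = (yr >= 1900 & yr <= 2000);                % baseline / climatology window
        rawAnom = raw    - mean(raw(base));                 % both versions re-referenced
        adjAnom = adjust - mean(adjust(base));              % to the same window

        fprintf('Mean (adjusted - raw) anomaly, pre-1900  : %+.3f C\n', ...
                mean(adjAnom(yr < 1900) - rawAnom(yr < 1900)));
        fprintf('Mean (adjusted - raw) anomaly, post-1950 : %+.3f C\n', ...
                mean(adjAnom(yr >= 1950) - rawAnom(yr >= 1950)));
        % Nothing before 1900 was touched, yet the adjusted anomalies come out about
        % 0.05 C cooler there, because the +0.1 C sits partly inside the baseline
        % window and re-referencing spreads its effect across every year.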

      • “(I mention this because Brandon’s 1960-2000 number comes from your code as well. It also appears the shift to 1900-2000 is a very new change. ”

        It’s an option. Duh.
        I believe we tried to explain on WUWT that we tested the sensitivity of the answers to this and found nothing of importance.
        Some folks want to make an issue about it… what’s new.

      • “(I mention this because Brandon’s 1960-2000 number comes from your code as well. It also appears the shift to 1900-2000 is a very new change. Also people shouldn’t get yelled at by you guys when you make undocumented changes to your code or set of run parameters, and they’ve assumed the default values.)”

        It’s on the chart.

    • Carrick,

      It depends on how smooth the underlying trend fields actually are. Unfortunately, we don’t have any easily available ground truth to determine for sure.

      Tests using synthetic data suggest that the Berkeley approach is not too far off, though I agree that it’s certainly possible that we over-smooth at a regional level. See this memo: http://static.berkeleyearth.org/memos/robert-rohde-memo.pdf

      Or our poster last year: http://static.berkeleyearth.org/posters/agu-2013-poster-1.pdf

      • Zeke, I am pretty sure that BEST over smooths in some regions but they seem to be “special cases”. Georgia is a good example of over smoothing while Florida is about right.

      • Zeke:

        Tests using synthetic data suggest that the Berkeley approach is not too far off, though I agree that its certainly possible that we over-smooth at a regional level. See this memo: http://static.berkeleyearth.org/memos/robert-rohde-memo.pdf

        Thank you for the links.

        The issue with synthetic data is that ideally it should be created by a third party, and should reflect variation that is found in the environment, without regard to the assumptions of the analysis software.
        This prevents assumptions that were written into the code from being enforced in the synthetic data that is meant to validate the code, or the selection of synthetic data that is unlikely to violate the assumptions of the software.

        I’ve discussed before some of the assumptions of your analysis code that I think need a closer look (the assumption of a uniform, time-invariant correlation field, for one), but briefly, the Monte Carlo field you are testing against should reflect the realistic noise sources present in the measurement system as well as realistic natural variability of the signal you are trying to measure.

        I wrote down a partial list of what I think should be present on Chad’s old blog.

        Finally, there’s the question of the metrics you use for testing. For example, you have to start with a definition of what is “optimal” if you are trying to see how close to “optimal” you are able to achieve.

        It’s your project so you guys get to define what that means, but it needs to be stated clearly as well as tested for.

        Thanks again for the comments and your group’s hard work in trying to shed more light than heat on this very interesting topic. ;-)

      • Carrick | February 11, 2015 at 3:40 pm |

        Good comment…concise summary of some key points that linger.

      • A carriage for Carrick, vehicle of sense.
        ===============

    • “I just don’t see any reason why the BEST guys can’t provide some straightforward guidance for what they think is the better product, and why.”

      When real users ask, they get my guidance. When trolls ask, I mainly ignore them, but I’ll make an exception.

      If you want a global series, use the global series. Duh.
      If you are interested in a state, say when the state of California wanted data, I talk to the analyst about his various choices and the various products and their pluses and minuses. He ended up using the quarter-degree fields.
      Some guys want just the raw data and they will do their own local work.
      If you are interested in comparing to a GCM, use the 1-degree product, although for USA RGCM work I’ve used the quarter degree.

      Sometimes people want the world at 1 km, so I direct them to my buddy’s work.
      At my day job I will probably use the daily experimental product, but first I need to test a couple of things. I may just build a new product from the raw data.

      It would be nice if it were like buying an adaptor for your vacuum cleaner.
      It’s not.
      Go figure.

      • FYI, carrick has a Ph.D. in physics and works for a major research university

      • Stephen Mosher:

        When real users ask, they get my guidance. when trolls ask, I mainly ignore them, but I’ll make an exception

        I’m a troll now? LOL.

        At least I’m not a holocaust denier for pointing out your various products contradict each other.

        Good grief.

        >.<

        If you want a global series, use the global series. duh
        If you are interested in a state, say when the state of california wanted data,

        I told you exactly what I was looking for, and you even quoted me before calling me a troll. Namely, straightforward guidance for what your group thinks is the better product. And the product I was specifically looking at was this:

        Three choices: Which in your groups estimation is best for studying what that product is displaying—the surface map of temperature trend—and why?

      • “Three choices: Which in your groups estimation is best for studying what that product is displaying—the surface map of temperature trend—and why?”

        http://simplikation.com/why-sealioning-is-bad/

        “When you ask a question in bad faith, you are essentially looking for a way to demean, degrade, or otherwise destroy your target. A good example of an obviously bad faith question is the perennial favorite “When did you stop beating your wife?” as it instantly casts doubt upon the person asked the question.

        However, it’s easy to ask a question in bad faith using reasoned, good faith practices. Neutral phrasing does not always guarantee a question is asked in good faith. This is extremely obvious in documented sealioning; the target responds, only for the questioner to immediately grill them for more information, misinterpret the answer, or dismiss it entirely.

        The purpose of sealioning is never to actually learn or become more informed. The purpose is to interrogate. Much like actual interrogators, sealioners bombard the target with question after question, digging and digging until the target either says something stupid or is so pissed off that they react in the extreme.”
        #############################################
        But to answer your question.

        Which is best to study “the trend”? Odd question.

        Well, typically people who want to study “the trend” are interested in the “global trend”, in which case they wouldn’t use a gridded product.

        If somebody wants to study “the trend” of a certain location, say Georgia, I would ask them specifically what they were looking to do with the trend data for Georgia, what time period they were interested in, and how sensitive their question was to differences in trend. If they didn’t know, then I’d suggest that they look at the two extremes, with and without corrections, and I’d probably compare multiple products for them.

        If that turned out to be significant, I’d suggest that they might want to take the raw data and put together their own map, especially if they had an expert. Or I might want to do it myself with them.

        So, let’s take a real example of a real user, with a real good-faith question, rather than some sealion.

        Robert Way. Wanted to know what he should use for labrador.

        So I looked at CRU gridded, NASA gridded, and Berkeley gridded.
        Seemed like there were some differences, primarily in the temporal domain.

        This real user with real questions and real work had some specialized knowledge. I also knew that we don’t ingest Env Canada and that this resource could add some fidelity…

        So, we decided to ditch the gridded products and go to sources.
        That’s right, my recommendation was to not use anyone’s product.
        This meant figuring out if Env Canada got us any more stations or any segments of records that went further back in time.

        So we spent a bunch of time doing hand-checking of stations. Sure, some was automated, but in the end we ended up doing some grunt work.

        In the end we ended up adding a few stations and some years beyond what anyone had.
        See the difference between a real user and a sealion.

        Good-faith questions lead to this:

        http://i61.tinypic.com/29nuoeb.jpg

      • Let’s say the user was interested in Slovenia.
        I would suggest our adjusted product.

        Here is a comparison between local experts doing a map for their country with various public datasets…

        First these guys did it the human way with some software assist,
        and then they compared it to ZAMG, GISS, JMA, NCDC, CRU and BE.

        Ensemble homogenization of Slovenian monthly air
        temperature series

        “ABSTRACT: This paper presents an attempt to obtain high-quality data series of monthly air temperature for Slovenian stations network in the period from 1961 to 2011. Intensive quality control procedure was applied to mean, maximum and minimum air temperature datasets from the Slovenian Environment Agency. Recently developed semi-automatic homogenization tool HOMER (HOMogenisation softwarE in R) was used to homogenize the selected high-quality datasets. To estimate the reliability of homogenized datasets, three to six experts independently homogenized the same datasets or their subsets. Different homogenization parameter settings were used by each of the experts, thus comprising ensemble homogenization experiment. Resulting datasets were compared by break statistics, root-mean-squared-difference (RMSD) of monthly and annual values, and RMSD of the long-term trend. This semi-automatic homogenization approach based on metadata gave more reliable homogenization results than a fully automatic approach without metadata. While the network-wide linear trend of the dataset did not change after semi-automatic homogenization was applied, the distribution of the trends of individual stations became spatially more uniform. The arithmetic mean of the homogenized datasets of three experts was assigned as a reference homogenized dataset and it was compared with some publicly available homogenized datasets. The calculated linear trend on an annual level for Slovenia is strongly positive in all datasets, though the trend values are significantly different between the datasets. We conclude that the warming trend of near-surface air temperature in Slovenia in 1961–2011 is significant and unequivocal in all seasons, except for autumn. Mean, maximum and minimum temperature series indicate linear trend of around 0.3–0.4 °C decade⁻¹ on an annual level.
        KEY WORDS air temperature; Slovenian time series; subjective homogenization; HOMER”

        ######################################

        Now, as I recall, you prefer the NCDC approach.

        How did NCDC do compared to CVS? Well, not so good. Like the other series, it differed significantly from the CVS series, which was created by the local climate experts. One gridded product that did not differ significantly (it was within a few hundredths) was BE.

        “The linear trend of the 48-year period varies considerably between the series: from 0.19 °C decade⁻¹ for the NCDC to 0.36 °C decade⁻¹ for the ZAMG series (Figure 8). The trend uncertainty at a 5% significance level is around 0.1 °C decade⁻¹ for all the series, making a warming trend highly statistically significant (p < 10⁻⁵). Although the trend difference may seem marginal, five of six comparison series differ significantly at a 5% level from the CVS series. The only exception is the Berkeley Earth series, which also exhibits the smallest RMSD to the CVS dataset.”
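
        For anyone who wants to reproduce the style of comparison the paper describes, here is a bare-bones MATLAB sketch of the two quantities involved, a decadal linear trend and the RMSD between two annual series. The two synthetic series below are invented stand-ins, not the Slovenian data.

        % Bare-bones version of the comparison metrics (synthetic data):
        % decadal linear trend of each series, and the RMSD between them.
        rng(3);
        yr = 1961:2011;
        a  = 0.03*(yr - 1961) + 0.4*randn(size(yr));   % stand-in national reference series
        b  = a + 0.1*randn(size(yr));                  % stand-in gridded product, close to it

        x  = yr - mean(yr);                            % center years to keep polyfit happy
        pa = polyfit(x, a, 1);                         % slope in deg C per year
        pb = polyfit(x, b, 1);

        fprintf('Trend, series A : %.2f C per decade\n', 10*pa(1));
        fprintf('Trend, series B : %.2f C per decade\n', 10*pb(1));
        fprintf('RMSD(A, B)      : %.2f C\n', sqrt(mean((a - b).^2)));
        % The paper applies the same two measures to ZAMG, GISS, JMA, NCDC, CRU and
        % Berkeley Earth against the experts' reference series.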

      • Above was Slovenia. Besides Slovenia, there are other countries that do specialized homogenized series. My sense after looking at the comparisons… let’s just say it’s probably not what you think. Dunno, you spend a lot more time doing these comparisons than I do.
        What did you think of that paper on Slovenia when it came out?

        There are also states that do their own series. Haven’t plowed into that one too much yet. Which state do you think has the best approach to doing its own series?

        I imagine there will be surprises along the way… hopefully some improvements.

  103. Mosh, Zeke and Robert

    Thank you for your efforts in putting together this post and answering questions.

    I do not believe for one moment that any of you are engaged in any sort of hoax, conspiracy or fraud.

    However, after reading through each comment in the cold light of morning I can’t see that anything has been resolved regarding the claims of noticeable cooling of the past record. The issue doesn’t seem to have been met head on, nor have the cases cited by Booker, Delingpole and Homewood been comprehensively refuted. These basically revolve round Paraguay, Iceland, New Zealand and some other stations.

    I was hoping for a clear cut explanation and resolution of the points made but have not found this, instead we seem to have gone off on some highly complex side road that leads nowhere.

    The basic premise of the temperature record, which over the years has become over-complicated, is that at some point at each location some person in the past looked at their thermometer, took a pencil and noted it down in a notebook. These were compiled into year books. I see many of them in the Met Office library when I visit.

    After a time this process became increasingly automated and the hand written notes may have been automatically recorded.

    Now, there are lots of arguments as to how accurate the thermometer was, whether the observer caught the lowest and highest temperature, complications with new equipment, new observers, station moves and urbanisation. Ideally, but not practically, all the individual data points should be collected in the same manner to ensure consistency.

    I think what people are asking is how a temperature recorded on say 15 June 1898 in Paraguay or 21st April 1927 In Sydney gets altered to a substantially different one.

    Circumstances of the station change over the years, so each temperature measurement (not an estimate) needs to be validated, otherwise it’s as anecdotal as you believe the written records are. How is this done? Why do the records observed at the time and written up with varying degrees of diligence not remain the same? A temperature of 19.65C is what it is. It shouldn’t become 17.24C or whatever without a very good explanation.

    Also, I live ‘local.’ I want to know what has been happening where I live, by town, county and Country, over the decades and centuries. I am not interested in some fantastically constructed and highly complex global temperature that has no bearing on anything and seems to sacrifice the ‘local’ in order to arrive at the ‘global’.

    Simple question Mosh, Zeke and Robert. Have the country/station temperatures noted in the headlines been cooled? If so by how much and why?

    tonyb

    • I agree there’s no evidence of BEST being engaged in a hoax, but then they’re the only ones suggesting that’s what people are commonly saying. I say that, whatever it is they do all day, one thing they’re definitely not doing is showing any respect for actual recorded history, with the inevitable consequence that those of us who know some history have no other rational choice but to consider BEST no more reliable than GISS (by which I mean not reliable).

      • hidethedecline

        As you will have seen over the years, written historical records are routinely derided and dismissed as ‘anecdotal’ by some, yet it seems that this same group believe numbers are so reliable they can be used as the basis for some fantastically over complex global construct that has astonishing accuracy.

        However, without verification, numbers are as anecdotal as text, perhaps more so, as the assumption is made that all the components of the historic temperature observation are correct and the resultant numbers have not been mis-transcribed at some point.

        tonyb

      • Very expensive lipstick applied with very great care on a very dead pig.

      • “The Zekes. Mosher are on the Defensive ALWAYS its a sure sign of defeat but they are not to blame they simply have been had and been told to use the algorthyms to change to trend according to what they have to do according to the WMO”

        Ya right.

        The logic is pretty simple.

        Someone accuses NASA of fraud and shows an adjustment.
        Another person says, well, BE has the same answer.
        The next person says that’s easy to explain.

        “The Zekes. Mosher are on the Defensive ALWAYS its a sure sign of defeat but they are not to blame they simply have been had and been told to use the algorthyms to change to trend according to what they have to do according to the WMO”

        So our approach, which gets the same answers as the “frauds”, leaves you with two choices:

        A) We somehow magically matched a fraud.
        B) Fill in the blank, as others do.

      • Steven Mosher commented

        So our approach which gets the same answers as the “frauds” leaves you with two choices
        A) we somehow magically matched a fraud
        B) fill in the blank as others do.

        While I have to guess you mean that because two such different methods come to such similar answers, they must be fundamentally correct, I, on the other hand, tend to think it’s due to the fact you both “fill in the blank” places on the globe with temps from somewhere else. :)

      • Mosh

        I believe in your good faith and that of your colleagues

        However, in the unlikely event that Delingpole or Booker were to phone me and ask about the cooling adjustments they believe occurred, what would I tell them? I feel my basic question hasn’t been answered. Can you answer it in a clear manner?

        https://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-673890

        Tonyb

    • You have cracked part of the code. +10

      • Rud,
        You often do these temperature plots with actual numbers.

        Can you look at the stations in Paraguay, the Arctic and Australia for a couple of sets to compare to BEST?

        Scott

    • Tony

      “However, after reading through each comment in the cold light of morning I can’t see that anything has been resolved regarding the claims of noticeable cooling of the past record. The issue doesn’t seem to have been met head on, nor have the cases cited by Booker, Delingpole and Homewood been comprehensively refuted. These basically revolve round Paraguay, Iceland, New Zealand and some other stations.

      I was hoping for a clear cut explanation and resolution of the points made but have not found this, instead we seem to have gone off on some highly complex side road that leads nowhere.”

      Nothing will be resolved because folks who are asking the questions are not interested in explanations.

      Let’s start with adjustments.

      There was a charge laid out long ago, a charge that continues to be brought up: NOAA cooked the data, and Hansen and GISS cooked the data. Adjustments were a fraud and possibly criminal.
      You know, I have friends in the business who actually had to stop working and spend a long time answering investigators’ questions.

      Words have consequences. Booker doesn’t get that, Shub doesn’t, Carrick doesn’t, and perhaps Judith doesn’t. These guys could not do their science; they had to convince investigators that they hadn’t been cooking the books.
      The news went from a blog to Fox News to a congressman to the GSA, and good people suffered.

      On to adjustments.

      We decided that a better approach would be an algorithm.

      1. You couldn’t accuse it of having political motives.
      2. You could test it on synthetic data.
      3. You could vary parameters and test it.
      4. You could repeat the same work and get the same answer.
      5. It could scale to 40,000 stations.

      In short, we decided to take a hands-off approach: test the approach independently, verify that it wasn’t biased, then apply it to the problem.

      The approach will warm some stations and cool others. Period.

      Along comes a Booker or a Delingpole or a Homewood, whoever, and they pick a station that warms.

      1. They claim that the adjustment is a scandal.
      2. People link to comments that adjustments are criminal.

      And then you ask me to explain the adjustment!! WTF?

      The explanation is simple: an algorithm, tested to be fair, adjusted the station. It had no “reasons”, no “motives”, no human bias. It looked at the neighbors and decided that the station was inconsistent with its neighbors.
      The explanation is the CODE.
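
      To give a feel for what “looks at the neighbors” means, here is a stripped-down MATLAB sketch of that general class of method: difference a target station against the average of its neighbors and look for a step in the difference series. It is a generic illustration, not the Berkeley scalpel code; the series, the 1975 break and the scan rule are all invented.

      % Generic neighbor-comparison breakpoint sketch (invented data, not the
      % Berkeley scalpel): a step in (target - neighbor average) flags a break.
      rng(4);
      yr = 1900:2000;
      n  = numel(yr);
      regional  = 0.01*(yr - 1900) + 0.2*randn(1, n);      % shared regional signal
      neighbors = repmat(regional, 5, 1) + 0.1*randn(5, n);
      target    = regional + 0.1*randn(1, n);
      target(yr >= 1975) = target(yr >= 1975) + 0.5;       % invented station move in 1975

      diffSeries = target - mean(neighbors, 1);            % regional signal mostly cancels

      % Scan candidate break years for the largest jump in the mean of diffSeries.
      score = zeros(1, n);
      for k = 10:n-10                                      % ignore the edges
          score(k) = abs(mean(diffSeries(k:end)) - mean(diffSeries(1:k-1)));
      end
      [jump, kBest] = max(score);

      fprintf('Largest step in target-minus-neighbors: %.2f C at %d\n', jump, yr(kBest));
      % No human touches this: the station gets flagged (and sliced) because its
      % difference from its neighbors jumps, whatever the reason for the jump was.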

      Now imagine I step through that one station and pull up a hundred of its neighbors and walk you through every bit of math. In the end, what will happen?

      Someone will ask for stations x1, x2, x3, x4, x5,

      and every time I walk them through it, they will say, what about station x6?
      Or they say, “I think your algorithm will fail on this kind of case.”

      See Pielke’s comment here? He asserted our algorithm could not handle a gradual case. It did.

      And while I spend this time on defense answering questions, the important stuff doesn’t get done. So, instead of actually working on improving the local fidelity, I get to explain to people that they need to read charts to figure out the base period. Granted, some of these people have PhDs. Go figure.

      Here is a question for you.

      You’ve spent a fair amount of time asking me questions.
      When will you spend an equal amount of time asking Booker or Delingpole why they avoid talking about cooling adjustments?
      Hell, when will you, or anyone here, spend 1/10 of that effort quizzing the people who make these allegations? Never.

      That means I have to play both offense and defense. I don’t mind.
      But that means less time to answer your questions.

      • Mosher it’s great to have your reply to Tonyb’s question.

        TonyB Q: Why do the records observed at the time and written up with varying degrees of diligence not remain the same? A temperature of 19.65C is what it is. It shouldn’t become 17.24C or whatever without a very good explanation

        Mosher A: Algorithm.

        It’s not an answer but it’s nice you replied.

        BTW that stuff about the free press writing about incompetent govt scientists and then the govt scientists getting asked questions about their incompetence, that’s democracy.

      • Mosh

        Thanks for your reply

        See hidethedecline’s answer at 10.36, which sums it up.

        You above all people know that I continually make the point, here and elsewhere, that scientists are not involved in a hoax, conspiracy or fraud. I get it in the neck sometimes for saying this, as there are many sceptics who believe they are.

        I try to build bridges with the scientific community in my own small way and get it in the neck for that. I point out that, whilst they have their faults, scientists such as Phil Jones and Michael Mann are perfectly competent, if, in my view, incorrect in the case of the latter, and actually quite a good researcher in the case of the former. I certainly get it in the neck for that.

        Certain people are gaining traction by making claims of wholesale unwarranted adjustments that cool the record. I had thought this post would confront that head on, so people like me can turn round and give a reasoned reply if someone asks.

        As hidethedecline notes, you haven’t done that. I sincerely believe in your good faith and that of Robert and Zeke. But unless you directly confront the issues raised by Homewood, Booker, Delingpole and Nova, amongst others, in a clear and comprehensible manner, the issue will continue to bubble away.

        Your temperature records are at source, just as anecdotal as my written historical accounts. Demonstrate they are reliable and then directly and clearly answer the issues raised about adjustments causing unwarranted and meaningful cooling in the various specific instances and locations cited.

        If you can do that it will give me, and others, the ammunition needed to refute the claims made. It is not in either of our interests to allow incorrect perceptions to take root.

        Thank you

        Tonyb

      • > Your temperature records are at source, just as anecdotal as my written historical accounts.

        I prefer my anecdotes shaken by numbers, not stirred.

      • tonyb

        ” I get it in the neck sometimes for saying this as there are many sceptics who believe it is.”

        You get it in the neck? Really?

        I think you don’t get it in the neck. Here is what that would look like.
        You like CET. You work with the Met Office.

        Getting it in the neck would be a skeptic trashing your work because you work with the Met Office.
        Getting it in the neck would be a skeptic trashing you because you work with the Met Office.

        You don’t get it in the neck. Maybe the little toe.

      • Mosh

        Suggest you look up the WUWT thread where Willis thoroughly trashed me because I dared to suggest that Michael Mann wasn’t a bad scientist.

        Matthew Marler was there. I think he was a bit shocked. So yes, I do get it in the neck, which is why I would appreciate a clear answer to my reasonable question that I just posted.

        Tonyb

      • Matthew R Marler

        Tony B: Certain people are gaining traction by making claims of wholesale unwarranted adjustments that cool the record. I had thought this post would confront that head on, so people like me can turn round and give a reasoned reply if someone asks.

        You have made a number of comments with that theme, but I cannot figure out exactly what you are asking for. Do you want the BEST team to focus on one of the series selected by Booker, Delingpole, etc.; then trace through, step by step, how each data point in the whole set affects the adjustment; then show how changing one of those data points would change the modeled values?

        What they did was repeatedly choose 7/8 of the data, use it to train the algorithm, then use the computed coefficients to compute estimates for the 1/8 of the sample omitted (called “imputed values”), and show that the mean square error among the imputed values was the smallest yet achieved (more on Brandon Shollenberger’s point about this later). Compared to that, the detailed exploration of particular non-intuitive outcomes carries very little information. You would still not know, for example, whether all the thermometer records of the selected non-intuitive outcome were especially accurate, especially inaccurate, sited in a bizarre microclimate, or whatever. There will always be some adjusted series that defy common sense, because there isn’t enough reliably known about what produced the apparent outlier in the first place. All you can really say is that the “best” algorithm will produce a smaller mean squared error than any intuitively understood algorithm; and there will always be some adjusted series that defy intuition.
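
        To make the hold-out idea concrete, here is a minimal sketch (a toy illustration of my own, not the BEST code): fit a simple predictor on 7/8 of a synthetic series, impute the omitted 1/8, and score the mean squared error of the imputed values. The synthetic data and the trend-plus-harmonic predictor are assumptions purely for illustration.

        # Toy illustration of the 7/8-train / 1/8-impute scoring described above.
        # This is NOT the Berkeley Earth code; the data and predictor are made up.
        import numpy as np

        rng = np.random.default_rng(0)

        # synthetic monthly "station" series: trend + seasonal cycle + noise
        t = np.arange(1200, dtype=float)
        y = 0.001 * t + 0.5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, t.size)

        def fit_and_impute(t_train, y_train, t_hold):
            """Least-squares fit of trend plus annual harmonic; predict held-out months."""
            def design(tt):
                return np.column_stack([np.ones_like(tt), tt,
                                        np.sin(2 * np.pi * tt / 12),
                                        np.cos(2 * np.pi * tt / 12)])
            coef, *_ = np.linalg.lstsq(design(t_train), y_train, rcond=None)
            return design(t_hold) @ coef

        # eight folds: train on 7/8 of the months, impute the remaining 1/8
        folds = np.array_split(rng.permutation(t.size), 8)
        mses = []
        for hold in folds:
            train = np.setdiff1d(np.arange(t.size), hold)
            imputed = fit_and_impute(t[train], y[train], t[hold])
            mses.append(np.mean((y[hold] - imputed) ** 2))

        print("mean MSE of imputed values:", np.mean(mses))

        Comparing that single number across candidate adjustment schemes is, roughly, the kind of evidence described above; it tells you nothing about any one station in particular.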

        Brandon Shollenberger’s point was that when the BEST team did the jackknifing, they did not re-estimate the breakpoints each time, but used the common set of breakpoints from their overall estimation procedure. It is a fair point, in my opinion, though I think it makes little difference to the estimated mean square errors of the imputed values. The most important point here is that re-estimating the breakpoints each time will not satisfy the Booker-Delingpole critics who select particular non-intuitive results, because there will still be some non-intuitive results.

        Zeke Hausfather and Steven Mosher have linked to technical documents. (Besides that, an awful lot of their data and code have been made available for download, in case anyone wants to study it. From what I have read so far, I doubt that the code could be understood by someone who had not mastered most of what is in the technical documents.) If you think, or someone thinks, that intuition is more informative than those technical documents, then nothing Zeke, Steven and others write could possibly change your or their minds.

      • Matthew

        Imagine that Delingpole or Booker or Rose came along to this thread in order to write a follow up, but first they wanted to try to verify if what they wrote initially -about unwarranted cooling in a number of named cases- was true.

        What would they take away from the information provided to date?

        Has the core of the concern that was causing such excitement been laid to rest? I am certainly not accusing anyone of fraud or lack of scientific integrity, but after hundreds of comments I am not sure the basic premise, which was news even in the MSM, has been answered in a way that journalists could subsequently put over to an intelligent readership of the respective newspapers or blogs.

        Intuition is not as good as science, but clarity in addressing the issue and refuting it seems an important matter to me, otherwise this will just rumble on. When you see a bud you don’t want, nip it quickly

        tonyb

      • What Mosher, Mahler and Zeke fail to understand is that despite all their testing, in real life Climate their Algorithms are not working the way that they describe, unless they don’t care about how it handles Regions.
        But if it can’t properly handle regions how can it handle global?

        When we point this out they just fall back on “look at this paper” or “look at this Code” or it is “just an algorithm”.
        What is the point of an Algorithm and a resultant Data Set that does NOT represent REALITY?

      • A c Osborn

        I am baffled as to how difficult it seems to be to get a straight answer to a straight question.

        I have the email addresses of delingpole, booker and rose.

        If, in good faith, I wanted to email them before they prepare their material for this weekend’s newspapers, in order to point out that their concerns over the climate record are groundless, which of the many comments in this thread would I link to in order to demonstrate this?

        Tonyb

      • TonyB,

        I have a(nother) question. Would it be an improvement in your view if the values for the 30,000 or 40,000 sites were held at the point they are today and then followed going forward? I realize this would in no way answer any of the questions wrt the historic record, but it seems it would be a “trend indicator” going forward. Any additional sites would improve coverage, but I’m thinking of a sort of “set point”. Am I thinking incorrectly?

      • Danny

        Do you mean that we should use today’s temperature data base as a benchmark? That assumes they are correct, whatever correct may mean.

        If they are correct and meaningful that’s fine. Having said that it would be decades before we could see any trend emerging so I guess we really need to sort out the historic record as that is already available and can tell us a lot.

        Tonyb

      • TonyB,

        Although I butchered the typing embarrassingly, yes, that’s effectively what I mean. If the “correct” measurements are set, we should have a trend indicator. There are still a number of questions to be addressed wrt “correct”. And if any future “adjustments” were made, all bets would be off. Seems this could be done with some sort of subset of fixed-position, unreplaced, unupdated existing sites. Maybe more is not better. Fewer, equally maintained and unmodified sites might actually give more credible data. All arguments would not be addressed, but some would, based on my level of understanding.

      • tony

        ‘What Mosher, Mahler and Zeke fail to understand is that despite all their testing, in real life Climate their Algorithms are not working the way that they describe, unless they don’t care about how it handles Regions.”

        WTF?

        we have done a homogenization test and reported that we may be over-smoothing.
        we have done an increased resolution test and reported that we may be over-smoothing. Even here I note the issue with smoothing.

        But here is what we note. Regardless of how interpolation, smoothing, infilling, extrapolation is done.. global estimates are very close to each other

        For Christ’s sake, compare UAH and RSS and look how wildly different they are. Now skeptics cite RSS.. but UAH is vastly different?
        fraud? ya right.

        As for how we handle regions, I’ve told you it’s an active area of research.
        Recently, for example, a paper was published comparing a local Slovenian dataset with our version. We beat all other global products and were the only one that wasn’t significantly different.

        There are others coming.. as I said.. active area of research.

        What’s that mean? We know the areas that we can improve to get local values improved. We know it won’t change the global answer in any
        way that will make skeptics happy.. if the global goes up a bit they will bitch. if it goes down they will bitch.

        As we do comparisons with smaller spatial scale products, we will look for systematic things we can change to improve the local fidelity. When we do that people will also bitch.

      • Mosh

        Your 4.04 reply to me

        Why don’t you address your ire to the person that made that statement, as it wasn’t me. It was A C Osborn.

        But you could answer my 3.32 comment, made in good faith.

        Tonyb

      • Tony

        “If in good faith I wanted to email them before they prepare their material for this weekends newspapers, in order to point out that their concerns over the climate record are groundless, which of the many comments in this thread would I link to in order to demonstrate this?”

        That would be simple.

        You would explain that algorithms are not designed to preferentially warm the record. They are designed to move the record closer to the truth.
        In some cases this means warming the record. He focused on half the story

        In other cases it means cooling the record..

        Like removing UHI (not entirely, I would argue):

        Seoul: http://berkeleyearth.lbl.gov/stations/156456
        Tokyo http://berkeleyearth.lbl.gov/stations/156164
        Phoenix http://berkeleyearth.lbl.gov/stations/161051
        Vegas http://berkeleyearth.lbl.gov/stations/161705
        LA http://berkeleyearth.lbl.gov/stations/161141
        Taipei http://berkeleyearth.lbl.gov/stations/159560
        SF http://berkeleyearth.lbl.gov/stations/162082
        Baltimore http://berkeleyearth.lbl.gov/stations/170075
        Atlanta http://berkeleyearth.lbl.gov/stations/161109
        Jakarta http://berkeleyearth.lbl.gov/stations/155660
        Dallas http://berkeleyearth.lbl.gov/stations/170005
        NYC http://berkeleyearth.lbl.gov/stations/167589

      • mosher –

        ==> “We know it wont change the global answer in any
        way that will make skeptics happy.. if the global goes up a bit they will bitch. if it goes down they will bitch.”

        You are being unduly harsh on my much beloved “skeptics.”

        You act as if no matter what the results of your work are, they’ll be unhappy. That isn’t true.

        Yes, it is true that if the global goes up a lot, they will be unhappy.
        If the global goes up a bit, they will be unhappy.
        If the global goes down a bit, they will be unhappy.

        But if the global goes down a lot, they will hail you as a genius who will be doing your part to save poor children in Africa from starving.

      • Matthew R Marler

        tonyb: I am baffled as to how difficult it seems to be to get a straight answer to a straight question.

        Yes, you are baffled. The fact is, there is no “straight” answer. As I put it above, there is no answer that is technically accurate and intuitively clear.

      • Joshua.

        The problem is if I drop it too much then I will disappear the LIA.

        So.. if I made all of Doc’s changes, Willis’s changes, Carrick’s ideas, Brandon’s ideas, AC Osborn’s, TonyB’s.. if I did all that and the temps went up.. I’d be hosed. It would be like the time JeffId and RomanM did a temperature series and it came out warmer.

        If it went down a little, They would say.. so you were wrong before… maybe you are still wrong.. more cooling dammit!

        And brandon and carrick would complain that the cooling was now not smooth enough.. or too smooth.. because georgia peaches.

        If it went down a lot, they would complain that I’m destroying their argument that we are coming out of an LIA.. or by cooling it I was erasing the effect of the grand solar maximum.

        Let’s see.

        Take two periods: 1750-1780 and 1984-2014

        Raw says the difference is about 1.2C
        Adjusted says the difference is about 1.35C

        Clearly, cooling it to 1.2C is not enough for them; any amount of warming more than 1.2C is a fraud.

        I just want some clear direction. How much lower do I have to go before it’s not the biggest scandal of all time?

      • Tony, it’s in moderation.

      • Mosh

        Thanks. Look forward to reading it. It does no one any favours if a false sceptical claim is made.

        If you can demonstrate in a clear fashion that they have got it wrong, I will tell them so. At the present time I have no idea if the cooling claim was right or wrong.

        Tonyb

      • ==> “The problem is if I drop it too much then I will disappear the LIA.”

        Yeah. Ok. Didn’t think of that.

      • “BTW that stuff about the free press writing about incompetent govt scientists and then the govt scientists getting asked questions about their incompetence, that’s democracy.”

        The claims were not about incompetence. The claims were false claims about fraud.

        They are repeating them now

        http://blog.heartland.org/2015/02/2014-hottest-year-ever-recorded-look/

        “A shocking report by two veteran meteorologists Anthony Watts and Joseph D’Aleo states, “All the data centers, most notably NOAA and NASA, conspired in the manipulation of global temperature records.” Thus all three do not display independent research confirming the work of the others; instead they demonstrate their common corruption.”

        I may just have to bust out some mails on these jokers!!!

        here is a taste from an insider….

        “The paper was rushed to get out on the table that the NOAA and NASA data will not without its flaws. All the focus was on CRU.”

      • Would you call Dr. Easterbrook a “Warmist Denier”? Dr. Easterbrook agrees with Steve S Goddard on the Real Science web site that the @NOAA data has been manipulated to cool the past and warm the present for the purpose of skewing the data in favor of the Anthropogenic Climate Change Theory. http://youtu.be/WwTmm1zcrJ0

        Moreover, one also wonders how seriously President Obama views the dangers of CO2 emissions from coal on the Climate when he has just given the green light to China’s and India’s coal use, both of which are spewing out far more CO2 from coal than the U.S. is. http://wp.me/pPrQ9-vUW

        Another factor that supports the Climate Skeptics’ position that NOAA has manipulated the temperature data in favor of the Anthropogenic Global Warming Theory is the motivation of the government advocates. Billions of taxpayer dollars are being “invested” in green energy, government can justify collecting billions of dollars via carbon taxes to “save the earth”, the Saudis can continue their monopoly on oil resources, etc etc etc.

        In the meantime there is no credible explanation as to why GISS altered the surface temperature data https://notalotofpeopleknowthat.wordpress.com/2015/02/12/real-climate-fail/

      • Mosh Feb 12 6.59pm – are you on the sauce?

        Heartland? Srsly? Calm down mate – you’re coming over all Peter Gleick.

      • Mosher, I apologise for making you so angry, you are obviously very passionate about what you are doing and the anger shows it.
        However please let me explain where I am coming from.
        My whole working life I have been involved in Quality Control or later Quality Engineering.
        I started working for the UK Ministry Of Defence as an apprentice and when finished was involved with Measuring and testing Instruments and the Tools used for measuring them.
        I worked in a Metrology Lab, which at that time was doing more accurate work than the National Physical Laboratory.
        I then moved into Industry and worked on Gauge, Machine & Process Capability, so I have a thorough grounding in
        Measurement
        Repeatability
        Reproducibility
        Distributions
        Process Controls
        and Problem Solving.
        I am also a Computer Programmer (low level), but I know without a shadow of doubt that an Algorithm can never take into consideration all the complexities that are involved in measuring the local Temperature, and trying to correct “so called” errors by comparing them to a place more than 1 km to 10 km away will never work.
        Everything I have done and everything that I understand about “Science” is about the Accuracy of Observations which are used to Describe the reality of an “Object”, “Process”, “World” or even the “Universe”.

        Now we come to what you are doing, which you believe in so much and are so passionate about.
        You appear to have lost sight of the objective of the Temperature Record, it was never designed for the purpose of measuring “Global Temperatures” or Global Temperature Trends.
        It was designed to tell people about their environment, because that, for various reasons like farming, flying, health and safety etc, was what was important to them at that time, but at the same time it also provided evidence of what the world was like.
        Those measurements, however fragile accuracy-wise, were the “Reality” of what those people experienced.
        Now 50, 100, 150 years later you are trying to “Refine” and “Improve” that record for Climate Science use.
        However what you are actually doing is “Changing the Reality” of what those people and places actually experienced.
        You are producing a “False Reality” which is destroying the very thing that Science is supposed to be about ie Describing Reality as we Know it.
        Not as we believe it Should be or how we Want it to be or how some Computer Model says it should be.

        At the same time you are doing a massive disservice to all the professionals and amateur workers who compiled that data by calling their work Incorrect, Inaccurate or even in some cases Shoddy.
        Yet at the same time you do not acknowledge the massive errors introduced by Electronic Weather Stations recording Temperature “Spikes” that are too short for any Human to even feel, let alone affect the Global Temperature, and yet you are prepared to use those to “Correct” 50, 100, 150 year old Temperatures.
        http://notrickszone.com/2015/01/22/alice-springs-automatic-weather-station-inflated-temperature-by-4-5c-producing-false-record-high/#sthash.k8DSIibh.dpbs
        You also do not acknowledge real values of UHI or how badly BEST use of Satellite “Lights” measurements actually isolates it or how inadequate their adjustments are.

        I am sorry but this goes against everything I have ever learnt about Instrumentation, measurement, accuracy and how Science is supposed to be used in Describing “What Is”.

      • Matthew R Marler

        TonyB: I am baffled as to how difficult it seems to be to get a straight answer to a straight question.

        But you are baffled by quantum mechanics as well, are you not? Everyone else is. General Relativity? The Big Bang? Gravitational singularities (aka “black holes”)? How random variation and natural selection produced composers like Mozart and Wagner, scientists like Newton and Einstein? One’s own bafflement is not that informative about anything in the shared world.

      • “You also do not acknowledge real values of UHI or how badly BEST use of Satellite “Lights” measurements actually isolates it or how inadequate their adjustments are.”

        We DO NOT use nightlights.

      • AC

        ‘However please let me explain where I am coming from.
        My whole working life I have been involved in Quality Control or later Quality Engineering.
        I started working for the UK Ministry Of Defence as an apprentice and when finished was involved with Measuring and testing Instruments and the Tools used for measuiring them.”

        QC? You? really?

        ok QC your own writing. we DO NOT, I repeat, DO NOT use “Lights”
        to determine UHI, urban, rural ANYTHING.

        So now please explain how and why you made this mistake.
        Explain how someone so steeped in QC could make such a mistake.

      • Mosher, typical misdirection and sarcasm. So I made a mistake, having read that somewhere; big deal, at 68 my memory is not as good as it once was.
        It does nothing to change what I said in the rest of the text and you know it.
        But then that is you all over, never answer the criticism just attack any weak point.
        I give up with you, I have your non answers on record and they show you up for what you are.

      • Well Well, you do know how to bend the truth and make the most of it.
        I just looked up BEST UHI and what do you know it goes directly to this study.
        http://scitechnol.com/2327-4581/2327-4581-1-104.pdf

        And I quote
        “The effect of urban heating on estimates of global average
        land surface temperature is studied by applying an urban-rural
        classification based on MODIS satellite data to the Berkeley
        Earth temperature dataset compilation of 36,869 sites from 15
        different publicly available sources.”

        So it is not lights but brightness or colour. That is a really big deal.

      • Matthew R Marler

        A C Osborne: However what you are actually doing is “Changing the Reality” of what those people and places actually experienced.
        You are producing a “False Reality” which is destroying the very thing that Science is supposed to be about ie Describing Reality as we Know it.

        Despite everyone’s best efforts, the data have random variation, where “random variation” is the variation that is not reproducible and not predictable. If you knew for a fact that a certain station never had any random variation in it at all, or that some did, or that all did, you would not adjust them. The problem is, all of them have random variation. Some of the apparent difference between two sites was caused by random processes; at the same time, some random processes, like unrecorded changes in jet streams, have produced correlations in the data of recording instruments separated by long distances. So what are you going to do: ignore the data? Deny the existence of random variation, either in all cases or some cases? Ignore the correlations caused by random events (or assert that they can’t occur or can’t be random)? And having made a bunch of decisions or claims like that, how are you going to investigate whether you have come up with accurate inferences about what is not in fact absolutely known?

        If you know for a fact that a Paraguay station is perfectly accurate, you can adjust the algorithm so that it is not adjusted, but can be used in adjusting other stations known to be in error. But how do you know that? The BEST team reported the algorithm in great detail; and they reported the results of the 1/8 jackknifing procedure that shows that, over the full data set, their procedure does not produce a “false reality” that destroys the very thing they are supposed to be about. But back to a question: How can you tell that they have produced a “false reality” that is actually less accurate than any “reality” produced without a complex statistical procedure, such as accepting every station at face value? Or a procedure of willy-nilly selecting some for averaging, some for adjusting, some for treating idiosyncratically?

      • Matthew R Marler

        A C Osborn: so I made a mistake having read that somewhere,

        When you make a mistake, admit it, apologize, grind your teeth and suffer in silence. We all make them.

        I apologize for misspelling your name “Osborne”.

      • Mr Mahler, are you in all honesty saying this: “If you know for a fact that a Paraguay station is perfectly accurate, you can adjust the algorithm so that it is not adjusted”, i.e. that BEST or anybody else would adjust their algorithm for every station in the world?
        Now I know you are having a laugh, because you and your algorithm can’t possibly Categorically, Positively know if a station is correct or not just by comparing it to other stations.
        All I know is BEST does not work on a “Local” or “Regional” basis, so how the hell can you then put it all together and make it work for the world?
        ps I have already demonstrated to Mr Mosher that BEST does not work for the UK, so don’t even bother asking.

  104. “It isn’t that they can’t see the solution. It’s that they can’t see the problem.” G. K. Chesterton

    Think I’ll start a rival quote of the day service. Open up the market.

    In the past few days we have had a discussion about the 26% of greenhouse gas emissions – neglecting black carbon on top of the other 74% – that inevitably dominates so-called policy options. There has been lengthy discussion of a few of the thousands of possible outputs of non-linear equations – chosen arbitrarily – that diverge exponentially through time, and why these bear little resemblance to climate observations. And we have just spent an entire post on the vagaries of a dataset cobbled together from disparate sources, subject to unknown errors and seriously missing an important energy term: latent heat at the surface.

    Less rational discourse than the odd, angry pot shot from climate warriors. Is it my imagination – or does it get sillier by the day?

  105. The BEST homogenisation process has clearly cooled the period prior to 1900 and made very little difference post 1900. That much seems obvious. Other datasets (GISS, NCDC, HadCRUT), using different adjustment procedures, have slightly different outcomes. I questioned Gavin Schmidt re the 0.02C margin by which GISS laid claim to 2014 being the ‘hottest’ year since records began, and whether or not GISS adjustments, had they not been made, might conceivably have ‘made all the difference’ re this claim. He said that the adjustments to the latest years were so tiny (and so consistently applied) that they would not have affected even this very small margin. I find this just a little hard to believe.

    In this age of politicization of the climate change debate, even very small adjustments to temperature data can be skewed in such a manner as to ‘prove’ this or that argument, and I believe this is where the real problem lies: the fact that we cannot rely upon the statements climate scientists make on the basis of minimally (at least in the case of post-1900 BEST) but significantly adjusted global temperature records. The adjustment regimes of both NCDC and GISS since 2008 have clearly cooled the past and warmed the present – see Climate4you. HadCRUT has largely warmed the past and the present since 2008, with significant exceptions. The previous version of HadCRUT had 1998 clearly as the warmest year, in line with the two satellite datasets. So adjustments, though small, can often be significant in terms of policy advocacy.

  106. Geoff Sherrington

    Thank you, Doc,
    Several points of agreement there.
    In Australia, we have ready for release some moderately large data sets from official historic sources giving temperatures in the late 1800s to 1930 or so.
    Try as we will, we cannot see more than 0.5 deg C change between then and, say, 2000-13 incl.
    The shape might be right in BEST but the calibration seems wrong, as you note.
    Plus, it is almost axiomatic that at a site, a homogenisation adjustment will change a trend every time.
    There is no homogenisation that gives more realistic absolute temperatures and trends simultaneously.
    It has to be one or the other but not both — or none.

    • A heart surgeon told me this story. He has a device in a clinical trial, and one of the patients in the treatment arm died while visiting Central America. He contacted the family and a pathologist at the best hospital in the nation. He flew out there and assisted at the autopsy, and the finding showed that the death was completely unconnected with the poor man’s heart, which made him very happy.
      I asked him if he would have done the same thing if a patient in the control arm had died. He looked at me like I was insane and said of course not.
      This is a major problem with people: they don’t realize that Locard’s exchange principle holds on all levels, including information.

      • Well, yes, fair exchange; but he didn’t have to prove the absence of the device caused a mortality.

        This is, of course, something hard to double-blind.

        Here’s a bias. What’s the easiest to study prospectively, double-blinded, and placebo controlled? Why, pharmaceuticals, natch. A puffed up dominant treatment modality.
        =================

      • What device was it? A VAD?

  107. Score so far:

    Historical facts changed – none.

    Research value for money – zero.

    Practical utility – nonexistent.

    Benefit to man or beast – nil.

    Nature wins again – as usual.

    Live well and prosper,

    Mike Flynn.

  108. A good presentation and discussion. Not perfect, but progress.

    It seems to me to boil down to:

    The data is quite uncertain.
    The uncertainty is uncertain.
    Fixing uncertainty is uncertain.

    The uncertain uncertainty needs to be taken into account whenever using the data for any specific purpose.

    Proclamations based on the data often fail to make the uncertainty clear, and this is certainly damaging to credibility.

  109. also,

    The BEST team’s willingness to engage their critics on open turf is exemplary.
    Too bad some others seem to be too thin skinned, or lack the guts, to do likewise.

    Just one last question to Steve Mosher:
    As an engineer, I have notions of accuracy and precision and correctness which are quite alien here. When I read these discussions it makes me squirm.
    Can you appreciate that?

    • “As an engineer, I have notions of accuracy and precision and correctness which are quite alien here. When I read these discussions it makes me squirm.
      Can you appreciate that?”

      Yes. My first job in engineering was as an operations research analyst.
      After a few promotions I became a vice president of flight simulation.

      One of the problems the company I worked for faced had to do with missile tip-off in high-AOA flight conditions. That is, at high-AOA conditions the nose of your plane (and the missile sensor) was pointed at the target, but your velocity vector was not. Consequently, when you pulled the trigger on the missile and it ejected from the plane, it would weathervane into the wind
      (tip-off) and potentially lose sight of the target. Losing sight of the target could mean losing lock, and basically you wasted a missile.

      Trying to solve that problem precisely was a real bear.. because the flight characteristics of the AIM-9 at high AOA weren’t really well understood. And even if we did understand it, we still had to predict it in real time and give feedback to the pilot about whether his shot was going to be valid after launch and tip-off. There wasn’t an accurate answer. Oh ya, and we had very little real data. We had a lot of shortcuts
      and approximations, simplifying assumptions.. first-order models.. safety margins. But there was a practical goal: make the best system we could.
      Now my first reaction to seeing the code that gave a pilot the indication that a missile launch was going to be good was quite a shock to my love of accuracy and precision. WTF? How can this code make that shortcut?
      What about this case? What about that case? Couldn’t we make something a bit more precise? Why yes.. in theory. In practice, there was a spec. The spec never asked for perfection. It was “good enough for horseshoes and hand grenades”.

      Since I used to be an engineer I am also familiar with two phrases.

      1. stop polishing the bowling ball.
      2. its time to shoot the engineers.

      I’m sure you’ve heard one of those two.

      • It’s good when you reference your engineering days, Mosher. Fascinating stuff. And your recent comments about how there needs to be a spec and acceptance criteria for various climate science products were also fabulous.

        It seems to me that BEST’s Spec was ‘create a global mean aligned to GISS and the rest’ and the acceptance criteria was ‘to the satisfaction of the team creating BEST’.

      • Well, Judith was there. ASK HER DIRECTLY.

        “Well, Judith was there. ASK HER DIRECTLY.”

        Uh, why ask her when she posted about the extent of her involvement already?

        “The fact that my name appears as second author on some of these papers is attributable to my last name starting with the letter ‘C’. The group has taken a ‘team’ approach to authorship on this set of papers. My contribution to these papers has been in the writing stage and suggesting analyses. I have not had ‘hands on’ the data, one of the reasons being that I do not have funding to do any analysis.”

        https://judithcurry.com/2011/10/20/berkeley-surface-temperatures-released/

        Not sure why anyone would ask someone who did not have “‘hands on’ the data”, and did not do any of the analysis, about spec and acceptance criteria?

        But I do understand the desire of a CAGW apologist wanting to borrow Dr. Curry’s integrity.

  110. The problem is not the magnitude of the BEST adjustments (their series are too short for it to be of much interest) but the fact that the discontinuities are statistically significantly biased towards negative values. We can assess this bias on the basis of long series, and many studies show that these discontinuities tend to cool the trend of the raw data by about 0.05°C per decade (or 0.5°C over the twentieth century).

    The bias is very important, it affects BEST as it does the others, and it must be explained. To date, the only unrebutted scientific explanation was published by Hansen et al. (2001).

    This means that, according to current scientific knowledge, warming trends are systematically overestimated in the temperature curves, including BEST; 0.05°C per decade represents a lower bound for this bias.

    • Take the 1984-2014 period minus the 1750-1780 period.
      That is taking the last 30 years minus the first 30 years.

      The difference between raw and adjusted is around 0.15C.

      26 decades: 0.15C.

      Around 0.006C per decade.

      Your figure of 0.05C per decade would disappear the LIA.

      That is, today would be as cool as the LIA.
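
      A quick back-of-envelope check of that arithmetic, using the period differences quoted above (assumed, not recomputed from the dataset):

      # back-of-envelope check; 1.2C and 1.35C are the figures quoted upthread
      raw_diff = 1.20        # (1984-2014 mean) minus (1750-1780 mean), raw, deg C
      adjusted_diff = 1.35   # same quantity after adjustments, deg C
      decades = 26           # roughly 1750 to 2014
      print(round((adjusted_diff - raw_diff) / decades, 4))   # ~0.0058 C per decade

      That is roughly a tenth of the 0.05C-per-decade bias claimed in the parent comment.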

      • Mosh

        In 1939 Matthes termed the last 4000 years ‘the little ice age’. In that context there were two hundred years around 1650 when the temperature was often cold and glaciation was the greatest in that 4000-year period.

        However, many parts of the LIA (in the popular sense of that term, 1300 to 1870 or so) were as warm as today. Can we definitively say we have climbed out of the LIA-type events of considerable temperature oscillations? We would need another fifty years to know that.

        Tonyb

  111. 1. Great post, much to think about.
    Compliments to Zeke et al for a nice presentation. BEST is generally better received than the other surface sets. Just wish Mr Mosher wouldn’t get out of his pram so quickly – he’s capable of making points very well without the curtness.
    2. As usual, not much meeting of minds. The defenders and the sceptics still talking at cross purposes, and some are just shouting.
    3. Long thread and I may have missed it, but I don’t see much discussion of AFRICA. Zeke shows that the warming trend adjustments in the northern hemisphere are largely cancelled by a large opposite adjustment to AFRICA. Nobody talks about Africa. Presumably data is much sparser there and adjustments can have a big effect. Any comments?

  112. A fan of *MORE* discourse

    Appreciation  Robert Rohde, Zeke Hausfather, and Steve Mosher have done a terrific service in establishing the statistical robustness of land-temperature warming.

    Further strengthening  Bayesian Climate Etc readers will appreciate that further confidence in the Rohde-Hausfather-Mosher thesis arises from concurrent heating/expansion/rising of ocean-waters.

    Who remembers the sea-level “pause”?  Who remembers “the pause” in sea-level rise between 2007 and 2011?

    Who remembers the failed WUWT predictions of sea-level fall?

    Question  Why don’t sites like WUWT revisit their failed analyses and revise their bankrupt “pause” expectations? The world wonders!

    FOMD’s prediction  Present cherry-picking belief in a land-temperature “pause” will suffer the same fate as the past cherry-picking belief in a sea-expansion “pause”.

    And there is strong scientific evidence this week to support FOMD’s prediction:

    Probabilistic reanalysis of twentieth-century sea-level rise
    by Carling C. Hay et al.; Nature 2015

    Estimating and accounting for twentieth-century global mean sea-level (GMSL) rise is critical to characterizing current and future human-induced sea-level change. […]

    Our analysis, which combines tide gauge records with physics-based and model-derived geometries of the various contributing signals, indicates that GMSL rose at a rate of 3.0 ± 0.7 millimetres per year between 1993 and 2010, consistent with prior estimates from tide gauge records

    The increase in rate relative to the 1901–90 trend is accordingly larger than previously thought; this revision may affect some projections of future sea-level rise.

    Conclusion  The observed acceleration in sea-level rise in recent decades affirms the “hockey-stick” rise in land-temperature; these findings challenge the rationale of climate-change skepticism, and demolish the foundations of climate-change denialism.

    Needless to say, these strengthening, consilient, synoptic, accelerating scientific realities of climate-change are evident to *ALL* thoughtful young researchers, eh Climate Etc readers?

    Pundits, ideologues, contrarians, and special interests … not so much!


    • Fan

      Sea level pause?

      I seem to remember you getting very excited about the acceleration you thought you saw in 2012 that you believed vindicated Dr Hansen

      What happened to that?

      Tonyb

      • Fan, I usually ignore your posts for reasons previously stated. TonyB did not, which gave me pause. Please read essay Pseudo Precision in my newish book with JC foreword, which interprets the very same official US graphic you posted somewhat differently. With lots of references, and some hilarious twists.
        Solved the closure problem yet?

    • A fan of *MORE* discourse

      Tonyb wonders “What happened to that [sea-level rise acceleration]?”

      TonyB, please note that Figure 2 of the above-referenced Probabilistic reanalysis of twentieth-century sea-level rise gives observational estimates of sea-level rise-rate acceleration.

      Note that all acceleration estimates are positive; their median is (about) 0.01 mm/yr².

      Exercise 1  Show that if the present acceleration of sea-level rise-rate is sustained, then seas will rise (about) five meters in (about) one thousand years.

      Exercise 2  Show that if the acceleration of sea-level rise-rate increases to 0.1 mm/yr² (consequent to Greenland/Antarctic ice-sheet sliding, for example), then seas will rise (about) five meters in (about) three hundred years.
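
      For readers who want the arithmetic behind the two exercises, here is a minimal check. It counts only the acceleration term, 0.5·a·t², which appears to be how the “about five meters” figures are obtained; the present rate of roughly 3 mm/yr would add linearly on top of that.

      # sea-level rise from a constant acceleration alone: 0.5 * a * t**2 (in mm)
      def extra_rise_mm(accel_mm_per_yr2, years):
          return 0.5 * accel_mm_per_yr2 * years ** 2

      print(extra_rise_mm(0.01, 1000))   # 5000.0 mm, i.e. about 5 m in about 1000 years
      print(extra_rise_mm(0.1, 300))     # 4500.0 mm, i.e. about 5 m in about 300 years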

      Is this a long time? It depends who you ask.

      Remark  Some folks plan centuries ahead, others mere decades (or even only as far as the next election and/or quarterly earnings report).

      TonyB, it is a pleasure to assist your quantitative understanding, and (hopefully) the understanding of Climate Etc readers too!


      • Oh dear. Two replies in one day. FAN, the chart you grabbed is mostly tide gauges. Must be so, since satellite altimetry (your previous chart) is newish. So, how many of those tide gauges in this chart have been differential GPS corrected (ordinary GPS is not sufficiently accurate) for isostatic rebound and tectonics? You really should read essay Pseudo Precision, which explains such things.

      • Oh no! Not Wendell Berry!

      • AFOMD,

        Exercise 3.

        Show that if the material comprising the solid Earth neither increases nor decreases in volume, then any material raised above the geoid must be compensated by an equivalent decrease below the geoid, as matter can neither be created nor destroyed.

        Calculate the resultant effect on global sea level, if possible. If not, explain why not. For bonus marks, explain why tide gauges are completely unreliable in the absence of knowing whether the land they abut is rising, falling, or moving laterally in relation to other land masses in the vicinity.

        Do you really have a cognitive defect, or are you just pretending to be silly, to be seen as eccentric?

        Live well and prosper,

        Mike Flynn.

      • Some folks plan centuries ahead, others mere decades (or even only as far as the next election and/or quarterly earnings report).

        So should I buy stock in submarine excursion companies?

  113. @KenW, temperature homogenization should make anyone who values data integrity squirm.

    Clearly the homogenization models are biased, or in other words, inaccurate. Yet it is remarkable how reluctant the “Climate Science” community is to discuss the problem. This presentation of historical temperature data is a W.A.G. But the charts are presented as if they are accurate to a hundredth of a degree. This is insanity! But it is insanity that is fully embraced by the “Climate Science” community. Why?

    One can only wonder, how many examples of bad temperature homogenization will it take to demonstrate that the homogenization process is bogus? Or is it “Consensus” that the homogenization is correct and that is all anyone has to say about it?

  114. The trouble is that climate science has shown itself untrustworthy in what makes it through peer review without being called out.

    So the natural thing is that, rather than looking into everything, everything claimed gets dismissed as unproven. We don’t know and can’t know if the planet is in a trend or a cycle. A cycle can’t be man-caused.

    There’s a very high cost to lying.

    • “There’s a very high cost to lying.”

      Paid for by funding.

      Andrew

      • ROK? seriously? white horse no less. Ok you win.

        the other day I was talking with a friend and she was curious about
        my study of Korea. When I explained what I found so admirable,
        she said
        “의지의 한국인” (roughly, “Koreans of will”).

        It comes in handy when outnumbered.

        last one.. I’m out

      • Some light relief after a long debate …

        I’m sorry SM , know yer have yr areas of deep experience but musically, this – is so – bland, and, hey, where’s the fizz betwixt
        the sexes, don’t yer find it kinda’ soporific.)

        Humor enchants, I say, combined with melody, rhythm and
        surprise. )

      • Damn that was supposed to be for Don

    • Mr. Hardin has whacked the nail upside the head. Climate science is not trusted. The consensus crowd can’t argue with that. They whine about the credibility gap between climate scientists and the public. How about a little introspection, you dilettantes? If you want to save the world, you need to get your act together.

      BEST is not part of the climate science establishment. They couldn’t get past pal review. They had to publish in a pay-for-play journal of last resort. Why beat up on them? Because they are the only ones willing to face you?

      BEST is not the problem. If you smart guys are serious, you could pool your considerable intellectual and financial resources to construct your own temperature record. Brandon, Carrick, Rud, TonyB, Matt and several others have the skills and are obviously keenly interested. And Rud could fund the effort with the money he is saving by boycotting Harvard:) I am sure Judith would lend assistance to her denizens. Zeke and Mosher would collaborate. You could carry on these arguments forever, or come up with something acceptable to all.

      Better yet, I have suggested here that the Congress appropriate a couple billion to fund an audit of the settled climate science that is being rammed down our throats. Contract Boeing, Lockheed Skunkworks, some other non-government, non-greenie organizations with the technical resources to examine the temperature records, the hockeysticks with the goofy proxy selections, the gaggle of climate models, the strongly positive water vapor feedback assumption etc. and give us a freaking professional opinion on whether or not the science is settled. Write your Congressmen. Next time Judith testifies, maybe she will suggest an audit.

      Anyway, there is a lot of wheel spinning going on here. Have fun, I am out for a while.

      • Feel better Don?

        Every “Global Surface Temperature” post I have looked at devolves into a wheel spinning cluster f__k.

        One could set up the group you suggested to work the problem harder, but for my money I just listen to any one of a dozen talks by Richard Lindzen. He had this all figured out long ago and he still has the most mature take on the CAGW issues. Notice that he doesn’t bother with endless debates over tenths or hundredths of a degree temperature changes.

      • Don,

        “Brandon, Carrick, Rud, TonyB, Matt and several others have the skills and are obviously keenly interested. And Rud could fund the effort with the money he is saving by boycotting Harvard:) I am sure Judith would lend assistance to her denizens. Zeke and Mosher would collaborate.”

        Please put in a plug for me to be the mail room kid. I could carry the coffee. It’s where my qualifications lie. Strictly volunteer, as long as I can look over their shoulders to learn.

      • Danny

        We need to know if you have any urges to cool or warm the coffee?

        Or more to the point whether you are prone to spilling it?

        tonyb

      • Tonyb,

        Tell me where you want it spilled and I’ll do my best. Computer? Those papers? And since I’m on the spot, I do prefer morning coffee warmed and lightly sweetened, but afternoon might have a nice iced w/chocolate…………..oh, geez…………..did I just do that?
        Do I get the job?

      • For what it’s worth, the main reason I haven’t generated my own set of results to show what I think BEST ought to come up with is financial. I currently have one computer, my half-dead laptop. I do quite a bit on it, and it works for my general purposes.

        The problem I face is BEST’s code requires a great deal of processing. My laptop has crashed several times due to memory issues when I tried to get BEST’s code to run. I’ve managed to largely overcome that problem, but I’m still stuck with only one computer. Devoting days of processing power to running BEST’s code means I’m largely stuck without a computer to do anything else.

        I could buy a new computer, but I can’t justify spending that much money on one just so I can test BEST’s work right now.

      • Danny, somebody else can be the bartender. We have a more important job for you, sheriff. No telling what these ornery hombres will do when they meet up with Mosher. Keep your eye on Brandon. Judith can choose between the school marm, or the dance hall girl. Little joshie and jimmy dee will have to fight it out over who gets to be the inebriate town character. This all depends on banker Rud providing the grubstake.

        Produced and directed by…..

        Fin, for now.

      • Don,
        Will I report to you as Mayor? I’ll bring me “pop” guns but insist on wearing a badge.
        Who knows what a few of the correct beverages will do to loosen folks up so they can speak up and not hold back. Might be interesting. Okay. I’m in!

      • No, Danny. I am the former Civil War cavalry officer who rode with Gen. Custer at Gettysburg, when the 1st Michigan charged into Jeb Stuart’s vaunted Virginia cavalry and prevented them from crashing through the rear of the Union defenses during Pickett’s charge. Saved the Union. After the war, I became a famous gunfighter. I got out of that business when I married my Jewish-Jamaican Princess and settled down on a little spread north of town. Send Chester to get me, if you need help.

        Your boss is Territorial Circuit Hanging Judge McIntyre. The recalcitrant varmints you don’t shoot, you keep locked up till he comes to town. The Judge will give them a quick trial and a quicker hanging. He’ll give you a tin star and your first month’s pay, when he gits around.

        You are going to have to read the script, Danny. And watch this:

        The Judge will reimburse you for the $2.99.

      • Don,

        I know you are serious about the collaboration. And yes, Rud has the money to get the computer. I’m on board. You won’t find anyone else.
        Don’t forget I did my own series on my own dime using skeptics’ algorithms.

        people forget that skeptics already DID their own series.

        warmer than CRU.

        they all just moved on.

        of course nobody harped about the minor issues.. cause welll…look at the HS over there..

      • Well Steven, I am thinking that this climate thing isn’t going to be resolved anytime soon and I haven’t learned anything new and significant for a couple of years. It’s incumbent upon the Chicken Littles to prove why we should be scared. If they aren’t smart enough to know why they aren’t trusted, then there is little chance that they will save the world, if it needs saving. I would really like to know.

        I am going to spend more of my time on things that are less contentious and more rewarding. Politics and the climate war are getting tedious. I think I’ll take piano lessons. I have discovered some musical talent that had not been known to me, prompted by the recent passing of my friend Jimmy Ruffin. He and his brother David used to give me pocket money and hand-me-down clothes when I was ‘the white kid’ in Detroit. I had never seen this lady, who is from near where my moms came from in Kentucky. It’s Jimmy’s song and she does him proud. Several of the Funk Brothers who made the Motown sound are in the band. Amazing performance. Makes me cry:

        Also check out the Playing for Change vids. Browsing youtube for music and sipping whiskey is more fun than golf.

        James Garner also passed recently. Didn’t know him personally, but he was my favorite actor. Favorite movie is above. Seems like I am losing somebody every week. Makes one think about what one is doing with one’s life. I am thinking about becoming a nicer person. That may be a stretch.

      • > I am thinking about becoming a nicer person. That may be a stretch.

        I disagree with you, Don.

        If you need, take stretching exercises:

        Thanks for the song.

      • I see my reply went down below, Willard.

      • Hope this threads right. Since folks are already notionally spending my money without permission, I thought I might weigh in with what I really think. What follows is just my opinion. Few fact citations, mere opinions.

        BEST was established under pseudo-sceptical pretenses by Muller following the revelations of the Mann hockey stick shenanigans. See his YouTube lecture. Raised a lot of money to redo the temperature record. Hired some real Berkeley brains like Steven and Zeke to do that. They did the best they could. In my opinion (having done a number of spot checks), better than any of the government services criticized elsewhere.

        The question remains, good enough? Fit for purpose? Look up thread for specific examples of BEST problems with regional expectations (166900) and data ingestion (151882). Fatal? Probably not. But the best BEST try still has major uncertainties, as RGBatDuke pointed out.

        Now, in the greater scheme, does this matter? Lindzen first pointed out the indistinguishability of the (about) 1920-1945 rise from the (about) 1974-2000 rise in ANY temp data set. (And BEST is best, since both NASA and NOAA seem to have been working diligently the past few years to erase the interim.) Essay When Data Isn’t. Essay cAGw. This poses the attribution issue in spades, per IPCC’s own AR4 and AR5 pronouncements: the former natural, the latter GHG. CMIP5 models were parameterized to hindcast three decades back to ~1975, by design implicitly attributing the second rise mostly to GHG. Logical FAIL. So of course CMIP5 is now falsified by the pause arising from natural variation. But to admit natural variability means the necessarily parameterized models run too hot (Akasofu 2009), and the whole house of CAGW prediction cards comes crashing down.
        Sceptics should be applauding that, after all the BEST maybe-not-fit-for-purpose attempts to adjust weather station data, the first, natural rise remains firmly in the adjusted record. That by itself suffices to falsify the whole IPCC CAGW meme.

      • Ha, Don, you can’t even get Rud to plop down 20 large.
        Funnier still, he doesn’t get that the Lindzen argument falls to pieces without adjusted temps.

        So here is this grand opportunity for Rud to put the final nail in the coffin. But 20k is too much.
        Heck, I put in 10,000 free hours.

        He could put in 20k. Brandon, Carrick and TonyB could
        write the code. Doc could tell them how to test till destruction…

        That would be a hoot

      • Rud, great summary.

        The falsification of the IPCC CAGW meme is clear. Even the MSM seems to be picking this up with the help of GWPF amongst others.

      • Well, if threaded correctly, your response, Steven, was a real disappointment. And yet another schooling lesson about attempted olive-branch dialog. Not to be easily forgotten.

      • Damn it Steven, I had him on the line. No doesn’t mean no, when you got em on the line. Why are you guys always fighting?

        Rud is smart and honest and Steven is smart and honest. You two could meet up in a bar and work something out.

        Rud, I don’t think anything you have mentioned has put the proverbial final nail in the coffin of the CAGW theory. The general public and most of the media characters and politicians don’t know what CMIP5 is, so they wouldn’t know it has failed. They don’t know who Lindzen is. He has about as much influence with the pezzonovante and the masses, as little jimmy dee. Appealing to Lindzen authority is not going to get it. Many have heard of the pause that is killing the cause, but that isn’t the final nail unless it lasts for maybe 5 more years. And the CAGW crowd still have a lot of tricks up their sleeves. Record highs every year. Record snow. A couple of dead polar bears. Look for some peer reviewed papers in prestigious journals reporting the discovery of vast amounts of heat hiding in the deep ocean abysses, or on the moon. We really got it this time. It’s worse than we thought…yatta…yatta.

        I foresee a lot more food fights and wheel spinning, unless Sheriff Danny pulls his Army Colt and starts cracking heads around here.

        Seriously fellas, you have a lot more in common than you have things to fight about.

        OK, you may resume firing.

      • Don,

        Pop g_ns all locked and loaded. Just waitin’ on word from the Mayor!

      • Here Don.
        Yoon Mi Rae..

      • That’s a nice tune, Steven. Girl can sing with some soul. I did some work with the ROK White Horse Division. Good soldiers. Didn’t like their rations.

        Here’s my last and I have to stop reminiscing. Too many tears:

        I was crazy about Tammi, when she was David Ruffin’s GF. We used to smoke together. Youngblood whi’boy got a couple of nice kisses, that was it. She died eight months after I last saw her, brain tumor. David abused her and I hated him for it. But David was a product of his environment, he had redeeming qualities and I am forgiving, so I named my son David. My expectant wife and I went around for weeks trying to agree to a name. Finally she said how about naming him after his great grandpa, David Cohen. I said you name him after David Cohen and I’ll name him after David Ruffin. She still isn’t going for it, but I tell my son he is named after David Ruffin. David Jr. has a raspy voice and looks like a masculine Justin Bieber. I tell him if he can get in touch with his roots and sing like Ruffin, he will make a billion bucks. I am training him.

        Don’t let these denizens rile you Steven. Be patient with them. They mean well. I’m out for a while.

      • @ Don Monfort

        “Rud, I don’t think anything you have mentioned has put the proverbial final nail in the coffin of the CAGW theory.”

        The problem is that the CAGW theory is ‘nail free’. Listening to the news stories and political pontification about ‘Climate Change’ (always about the threat of thermogeddon though) makes it clear that EVERY undesirable weather event, or for that matter, any undesirable event of ANY description (even volcanos) is a direct consequence of Climate Change, brought about by our use of fossil fuels.

        Since ALL unpleasant data is declared, ex cathedra, to be a consequence of CAGW there is NO chance that observations will put the final, or any other, nail in its coffin. See the previous ~15 years of observation, the news stories about the observed climate, and, should you still think that there is evidence of nails, final or otherwise, the recent blather about the existential threat to the entire biosphere posed by CAGW, as delivered in the State of the Union message and the reporting on same.

        The self-licking ice cream cone comprised of the progressive politician/climate_science/news_media/academia will make absolutely certain that a nail never comes within shouting distance of the CAGW coffin.

      • Bob Ludwick,

        The self-licking ice cream cone comprised of the progressive politician/climate_science/news_media/academia

        I love it!

    • “I am sure Judith would lend assistance to her denizens. Zeke and Mosher would collaborate. You could carry on these arguments forever, or come up with something acceptable to all.”

      Yup. It would take a computer with a lot of capability.
      We used to run on the supercomputer cluster at Berkeley;
      the uncertainty calculation took a couple of weeks.
      We’ve since ported to a multi-processor server.
      Not a simple job.

      The areas that would be interesting to investigate include the following.

      1. More work on the station de-duplication code.
      This is compute- and manpower-intensive work, but most importantly any and all matching algorithms will have errors. In my current work, which deals with validated data (in real industry), error rates of 5% are superior; with the data from temperature suppliers you’d be lucky to have a 10% error rate, which
      means checking 4,000 or so records by hand. Mind-numbing work. Way and I worked on 100 stations for about 3 months. (A rough sketch of what this matching involves follows the list.)

      2. Adding variables to the regression. The stuff that needs to be added amounts to
      terabytes of data and months of processing. Some of the work would be publishable in and of itself as “new datasets”. We are talking metadata here, as metadata drives the regression.

      3. Improved handling of the correlation. Right now we use a simple range;
      the correlation distance could be altered to account for weather patterns (basically an ellipse instead of a circle; see the sketch after this list).
      4. Improved adjustments. I’ve got some ideas..
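
      To give a flavor of what the de-duplication in item 1 involves, here is a minimal, purely illustrative Python sketch (the toy inventories, thresholds, and helper names are invented for illustration; this is not the Berkeley Earth code). Records from two suppliers are paired on approximate coordinates plus fuzzy name similarity, and ambiguous pairs are flagged for exactly the kind of hand-checking described above.

        from difflib import SequenceMatcher

        # Hypothetical station inventories from two suppliers:
        # each record is (station_id, name, latitude, longitude).
        supplier_a = [("A001", "SPRINGFIELD WSO AP", 39.85, -89.68)]
        supplier_b = [("B117", "Springfield Airport", 39.84, -89.67),
                      ("B204", "Springfield 3 NW", 39.88, -89.72)]

        def name_similarity(a, b):
            # Crude fuzzy match on station names, scaled 0..1.
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def find_duplicates(list_a, list_b, max_deg=0.05, min_name_sim=0.6):
            # Pair records that are probably the same station; pairs that
            # match on location but not name (or vice versa) go to review.
            matches, needs_review = [], []
            for id_a, name_a, lat_a, lon_a in list_a:
                for id_b, name_b, lat_b, lon_b in list_b:
                    close = (abs(lat_a - lat_b) < max_deg and
                             abs(lon_a - lon_b) < max_deg)
                    similar = name_similarity(name_a, name_b) >= min_name_sim
                    if close and similar:
                        matches.append((id_a, id_b))
                    elif close or similar:
                        needs_review.append((id_a, id_b))
            return matches, needs_review

        print(find_duplicates(supplier_a, supplier_b))

      Even a toy matcher like this will mis-pair nearby stations with similar names, which is why error rates in the 5–10% range and thousands of hand checks are realistic.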
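
      And for item 3, a minimal sketch of what “an ellipse instead of a circle” means in practice (the exponential form and parameter names here are illustrative assumptions, not the Berkeley Earth implementation): rather than a single isotropic correlation length, the east–west separation is scaled differently from the north–south separation, so stations aligned with zonally elongated weather patterns stay correlated over longer distances.

        import math

        def correlation_weight(dlat_km, dlon_km, length_km=1000.0, ew_stretch=1.0):
            # Exponentially decaying correlation weight between two stations.
            # ew_stretch = 1.0 is the simple isotropic case (a circle of
            # influence); ew_stretch > 1.0 stretches the east-west axis
            # into an ellipse.
            d_eff = math.sqrt((dlon_km / ew_stretch) ** 2 + dlat_km ** 2)
            return math.exp(-d_eff / length_km)

        # Two stations 800 km apart due east-west:
        print(correlation_weight(0.0, 800.0))                  # circle:  exp(-0.8) ~ 0.45
        print(correlation_weight(0.0, 800.0, ew_stretch=2.0))  # ellipse: exp(-0.4) ~ 0.67

      Stretching the correlation footprint east–west keeps such a pair usefully correlated at distances where the circular range would already have discounted it.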

      If you did that you would STILL have people who:

      A) denied that a global temperature existed.
      B) would ask us to explain what CRU did, or what GISS did, or what
      the NWS in New Zealand did.
      C) ask you to explain what your algorithm did in each and every case.
      If you explained why it warmed station X, they would ask about X1;
      when you got to Xn, they would switch to argument A.
      D) demand that you go through all the paper records, even those that don’t
      exist.
      E) argue that lack of calibration in 1900 made every record suspect.
      F) argue that RSS proved you were wrong.
      G) demand to see every version of your code, everything checked in and out.
      H) demand you write it in a different language.
      I) switch topics and discuss the ocean.
      J) question people’s educational background.
      K) argue that the raw data wasn’t raw, but was pre-cooked by warmists.
      L) argue that the small difference you found was profound.
      M) take issue with one of your assumptions. All statistics have assumptions; they just find one and raise the question “what if you do X?”
      N) compare your product with their homegrown crap and demand you explain why they are wrong.
      O) demand site surveys of all the sites, including historical ones that have been retired.

      You get the idea. At some point along the way you have to discriminate between people who use doubt as a weapon and those who use doubt as a tool to improve their own understanding.

      For example, imagine someone came on and said, “I ran your code and found X,” and I replied, “Prove that you get the same answer by running it on another computer. I don’t believe you ran it right.”

      That’s doubt as a weapon. In some areas they call it ‘de Nile’.

      • This is why they call him Kid Mosher. Just funnin ya, Steven. But I am serious about the collaboration. Rud can buy you a supercomputer.

      • Don

        Don’t you think we need to build up gradually to the collaboration by first having a series of scoping meetings in first-class hotels in increasingly agreeable locations that, to simulate the weather conditions the improved database will cover, should range from beach resorts to ski locations?

        I’m in. Someone had better tell Rud and Harvard. I will leave that to Mosh. He has a way with words, some of them not profane.

        Tonyb

      • Tony, meetings in first-class hotels? Now I’m in. I can’t afford a supercomputer, but I quite like mind-numbing, repetitive work, as long as I can listen to Vaughan Williams.

      • I think Rud has gone into hiding. You should have waited until we had gotten the money before you revealed our plans. I don’t know what I am going to do with you guys.

      • Oh no, Johnathan. The name Vaughan Williams sounded familiar, so I looked him up. I thought he was with the O’jays, or the Chi-Lites.

      • You all are getting as good as governments (to be more specific, US, UK, Germany) at pre-spending my presumed money that isn’t yours. See what lessons CAGW is teaching even sceptics. (This post is participating in the semi-serious fun spirit as denizens reconcile all the above more serious, nastier stuff. Soft landings and all that.) Regards all.

      • “If you did that you would STILL have people who:

        A) denied that a global temperature existed….”

        I don’t know anybody who denies that a global temperature exist(s). But I for one deny that Mosher, BEST or any other human or group of humans on the planet can determine the global average temperature to within a tenth of a degree at any given time, let alone temperature trends over months, years, decades or centuries to within tenths or hundredths of a degree as reported.

        And frankly, until you can deal with A, the rest are irrelevant.

        Even if BEST is the ‘best’ anyone can do, that does not mean that what it produces is sufficient to justify the policies it is used to promote.

        With existing technology and coverage, ‘calculating’ GAT is on about the same scientific level as counting the number of angels who can dance on the head of a pin. It can be an interesting debate (OK, not really), but it has little to do with reality.

        In a progressive world, vanity is king. You don’t know what you think you know, by a long shot.

      • GaryM

        “I don’t know anybody who denies that a global temperature exist(s).”

        Read more. That would be a leading skeptic, numbnuts.

        http://www.uoguelph.ca/~rmckitri/research/globaltemp/globaltemp.html

      • “Even if BEST is the ‘best’ anyone can do, that does not mean that what it produces is sufficient to justify the policies it is used to promote.”

        The policies don’t need to be justified by the temperature record.
        CO2 warms the planet.

      • Actually, I stand corrected, by the authors of the link. Not you.

        I have often argued (though I claim no ownership of the concept) that GAT is not global, is not an average, and does not reflect temperature. So they are correct.

        When I am in a more generous mood, I am more charitable toward obscurantists like yourself, and take the term GAT to mean the average of the temperatures of the global climate system taken as a whole.

        Even though I know that GAT in fact means, at best, the global mean of the kriged, interpolated, computer-generated temperatures of most of the Earth’s climate system, formed only in part from anomalies actually measured at a relatively minor percentage of locations within the Earth’s climate system.

        But GAT is so much easier to type that I pretend that is what you mean.

        My bad.

        My primary point remains: you don’t know what GAT is, however defined, with any meaningful precision at any particular time, let alone on annual, decadal or centennial trends.

      • “the policies dont need to be justified by the temperature record.”

        Your progressive betters, the ones actually making the policy, would beg to differ. Which is why they trumpet the faux temperature records – “WARMEST YEAR EVER!!!” (by three thousandths of a degree with 35% probability).

      • Matthew R Marler

        Steven Mosher: The policies don’t need to be justified by the temperature record.
        CO2 warms the planet.

        You don’t think policy decisions should depend in part on estimates of how much warming results from increased CO2? Those estimates depend in part on the temperature record. Don’t you think so?

        It would be nice if the policies were justified by it,
        but they don’t NEED to be.
        Pen. Phone. He does what he wants.