Best of the BEST critiques

by Judith Curry

The new Berkeley Earth Surface Temperature product and the accompanying papers have generated considerable discussion. Let’s focus on the technical criticisms.

I am swamped this week, barely able to keep up with what is going on in the blogosphere. Here are the more substantive criticisms that I’ve identified.  Please let me know if you have spotted any others.

Doug Keenan was the first out with this critique.  With regards to the AMO paper, Keenan objects to a 12 month moving filter and the use of AR(1).  Tamino is not impressed with Keenan’s arguments [link].  Tamino has other concerns with the AMO paper [link]; at the heart of his concern seems to be the additional autocorrelation introduced by the moving average filter.  He does not buy the conclusion that the AMO signal is larger than ENSO.

Anthony Watts is critical of the surface station quality paper [here].  His particular concern is about the use of 60 year rather than 30 year trends.  Tamino does not find Watts’ argument to have any merit [here].

William Briggs thinks that BEST has underestimated uncertainty [here].

Steve McIntyre has a generally favorable impression [here].

And finally, there is this one, which isn’t really technical, but gotta love the title: Deniers eat their own in BEST feeding frenzy

Moderation note:  this is a technical thread, comments will be moderated for relevance.


  1. Arcs_n_Sparks

    Well, The Economist (October 22-28, pgs 99-100) has taken up the BEST results and weighed in with an opinion.

  2. Then there is Tilo Reber, who thinks that the UHI paper failed to do what it attempted to do:

    Let’s say the year is 1950 and we are going to put a thermometer in a growing city. But the city is already there and already has a very high built density. So, let’s say that the city already has 1C of UHI effect. Over the next 60 years the city continues to grow, mostly around the perimeter. The UHI effect goes up, and by 2010 there is 1.5C of UHI effect. The thermometer was only there since 1950, so the thermometer will only see the delta UHI change from 1950 to 2010 as an anomaly. So, by that thermometer, the delta UHI effect for that period is .5C.

    Now, in the same year, 1950, we put another thermometer into a medium size town. Let’s say that it has a UHI effect of .1C at the time we put the thermometer there. The town grows over the next 60 years, there is a lot of building that happens close to the thermometer, and by 2010 it has .6C of UHI effect. Again, the thermometer will not register that first .1C as anomaly. But it will register the next .5C as anomaly.

    So, in 2010, what we end up with is that the urban thermometer has 1.5C total of UHI effect, and the rural thermometer has .6C of total UHI effect. But, the delta UHI for both thermometers since they were installed is .5C. It is that .5C that both of them will show as anomaly.

    Now BEST comes along and decides that they will measure UHI by subtracting rural anomaly from urban anomaly. Let’s also say that there has been .3C of real warming over those 60 years. So the rural thermometer shows .8C of warming anomaly and the urban thermometer shows .8C of warming anomaly. BEST subtracts rural from urban and gets zero. Their conclusion is, “either there is no UHI or it doesn’t affect the trend”. But, as we have just seen, .5C of the .8C in the trend of both the urban station and the rural station was UHI. (A numeric sketch of these figures appears at the end of this comment.)

    With their results, BEST has failed to discover the pre thermometer urban UHI effect, the post thermometer urban UHI effect, the pre thermometer rural UHI effect, and the post thermometer rural UHI effect. They have also failed to discover the UHI addition to the trend in either place. In other words, their test is a total fail. Even if they did their math perfectly, ran their programs perfectly, and did their classification perfectly, their answer is still completely wrong. Why? Because the design of the test never made it possible to quantify UHI. Now, many of you may object to my scenario. But I’m going to post this, get a cup of java, and speak to those objections next.

    Some of you may wonder if it is reasonable to expect a small town to grow at a rate that pushes up the delta UHI as fast as a city. This is where the definitions of rural and urban come in. Modis defines an urban area as an area that is greater than 50% built, and there must be greater than 1 square kilometer, contiguous, of such an area. So, for example, if you have two .75 square kilometer areas that are 60% built, separated by one square kilometer of 40% built, it’s all rural. So the urban standard is high enough that an area must be strongly urban to qualify. The rural standard is anything that is not urban. And that allows for a whole lot of built. 10 square kilometers of 49% built is all classified as rural.

    BEST then goes on and further refines the rural standard as “very rural” and “not very rural”. Unfortunately, they make no new build requirements for “very rural”. The only new requirement is that such an area be at least 10 kilometers from an area classified as urban. But a “very rural” place could still have 49% build.

    This means that you can have towns, small cities, and even some suburbs that are classified as rural. In such areas there is still plenty of room to build and build close to the thermometer. In the urban areas, there is little room to build. So either structures are torn down in the city to make room for new structures, or structures are put up at the edge of the city, expanding it. The new structures being put up at the edge of the city are far from the thermometer and while they still affect it, the further away they are, the less effect they have.

    In the rural area there is still space to grow close to the thermometer. So, in the rural area you can actually have more UHI effect with less change in the amount of build. So, if a rural area goes from 10% built to 30% built it will still be rural and it can have the same UHI effect on the thermometer as the city where most of the new building is around the edges. The urban area may go from 75% built to 85% built around the thermometer, and it may have its suburbs growing, but the total effect will be close to that of the rural build.

    All of this is essentially confirmed by Roy Spencer’s paper and by BEST’s own test results.
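
    A minimal numeric sketch of the scenario above, in Python (the figures are the illustrative values from this comment, not measurements):

    # Illustrative values from the scenario above, not measurements
    real_warming = 0.3                         # assumed true climate warming, 1950-2010 (C)

    urban_uhi_1950, urban_uhi_2010 = 1.0, 1.5  # urban site: 1C of UHI already present in 1950
    rural_uhi_1950, rural_uhi_2010 = 0.1, 0.6  # "rural" site: 0.1C of UHI in 1950

    # Anomalies only see the change since each thermometer was installed
    urban_anomaly = real_warming + (urban_uhi_2010 - urban_uhi_1950)   # 0.8 C
    rural_anomaly = real_warming + (rural_uhi_2010 - rural_uhi_1950)   # 0.8 C

    # Urban minus rural cancels the shared delta-UHI, even though 0.5 C of each
    # 0.8 C trend is UHI in this scenario
    print(round(urban_anomaly, 2), round(rural_anomaly, 2),
          round(urban_anomaly - rural_anomaly, 2))   # 0.8 0.8 0.0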

    • Tilo,
      Those who live by the anomaly die by the anomaly.

    • Lots of large cities have declining populations:

      http://www.demographia.com/db-intlcityloss.htm

      Some rural areas also are losing population.

      It seems reasonable to expect a declining UHI effect in urban and rural areas that are losing population, and a rising UHI effect in those that are gaining population. Wouldn’t any net UHI effect just be mostly a result of overall population growth?


      • Declining population may not mean a reduction in buildings. In Australia, where household size is declining but inner urban building density is increasing, you can get population decline but more urban heating effect. Similarly, Central London lost population over a long period, but I doubt that the UHE declined.

      • That may be true. I know little about the physics. It sounds like you are referring to structures holding heat rather than heat-producing activity. I can imagine that making night temperatures higher than otherwise after a hot day. Would it also be a cooling influence on day temperatures after a cold night?

      • nandhee jothi

        Actually, most of that data is inapplicable here.
        That data only covers the “city”, the legal entity. The metropolitan area is what really matters. Phila, e.g., shows up as 1.5 million, but the metro is about 5 million people. One of the suburban counties (Montgomery) has about the same population as Phila.
        In most of the old cities, the population dispersed from the inner core for any number of reasons: increased transportation options (highways, cars, cheaper personal transportation, etc.), white flight, better suburban schools, lower taxes for both businesses and people, etc.
        That does not matter to the analysis for GW or UHI.

    • When you look at a thermal image of city that shows UHI you can see pockets of heat but you also see cool areas within a city. Maybe the standards or near standards for the stations are good enough for ~99% of the stations and the rest do not matter statistically. However, the error they gave would still be too small. UHI should be better explained with regards to their results.

      • If you really wanted to measure properly you would have to sample a load of those warm and cool “pockets” properly. How many thermometers does one usually get in a city? One or two?

      • Maybe they do. There are standards for weather stations that, in theory and in practice, should keep them in a cool pocket, but many stations in the US were found not to be up to standard. Whether they were minor problems or major ones is what is not really known. I don’t know if they use thermal images as part of that standardization process or not.

    • Steven Mosher

      Tilo.

      Best described very rural in the following way.

      Take the station location 12.yx, 120.rs. Because some station locations are only good to 1/10 of a degree, they looked around the site
      1/10th of a degree in latitude and 1/10th of a degree in longitude. That’s more than 10 sq km; it’s over 100.

      “Unfortunately, a portion of station locations in the Berkeley Earth merged dataset are reported only to the nearest tenth of a degree in latitude and longitude. This makes it impossible to identify each station as definitively urban or rural using the fine resolution MOD500 map. This imprecision in site location could yield a site which is urban being labeled as rural. An alternative, which we adopt here, is to analyze the urban-rural split in a
      different way. Rather than compare urban sites to non-urban, thereby explicitly estimating UHI effects, we split sites into very-rural and not very-rural. We defined a site as “very-rural” if the MOD500 map showed no urban regions within one tenth of a degree in latitude or longitude of the site.”

      At the equator 1 degree is 111.12 km
      http://www.dslreports.com/faq/14295

      So 1/10th of a degree is 11.112 km at the equator, and it diminishes with cos(lat). 11 km * 11 km is more like 121 square km around the site, not the 10 sq km you suggest. You wrote:
      “The rural standard is anything that is not urban. And that allows for a whole lot of built. 10 square kilometers of 49% built is all classified as rural.”

      I think your concern that the entire 121 sq km area could be 49% built is unfounded. To make that case, you must find areas where the “build” is sub-pixel. The notion that this entire area could be 49% built is just flat wrong and provably wrong by considering sensors other than Modis.

      For grins here is what you see if you look at the BEST sites.

      I took the BEST 39,000 sites and processed them using the Modis 500 data set. I used a slightly different rule: rather than using .1 degrees, I used a constant 11.112 km. That means I looked 11 km around every site location (their test used .1 degrees, which gives you a slightly smaller area search as you go toward the poles). My test is only slightly different from their test (but better, I think), which looks .1 degree away from the site.
      You’ll see the difference that makes is slight in finding “built free” areas. That is, you’ll see I find fewer “very rural” sites. This is because at larger latitudes I am searching a constant distance, not simply .1 degree.

      Ready? BEST found 16K sites that had no built pixel within .1 degree. With my test, I find 14,571. That is, when I did the test a bit more precisely and searched equal areas around each site, I found about 2,000 fewer sites that qualify as very rural. Get it? If a site is at 45 lat, 120 lon, they look in the following box: 44.9, 119.9, 45.1, 120.1. That’s 1/10th of a degree each way. I look 11.12 km each way, which is what .1 degree equals at the equator; at 45 North you end up looking a little bit more east and west.
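
      A minimal Python sketch of the two search rules being compared (an illustrative reconstruction, not the actual BEST or processing code):

      import math

      KM_PER_DEG_LAT = 111.12  # approximate kilometres per degree of latitude

      def deg_box(lat, lon, half_deg=0.1):
          # BEST-style screen: any built pixel within 0.1 degree of latitude or longitude
          return (lat - half_deg, lon - half_deg, lat + half_deg, lon + half_deg)

      def km_box(lat, lon, half_km=11.112):
          # constant-distance screen: 11.112 km each way, converted to degrees at this latitude
          dlat = half_km / KM_PER_DEG_LAT
          dlon = half_km / (KM_PER_DEG_LAT * math.cos(math.radians(lat)))
          return (lat - dlat, lon - dlon, lat + dlat, lon + dlon)

      # 0.1 degree of longitude spans fewer km toward the poles, so the constant-km screen
      # reaches "a little bit more east and west" at 45N and finds fewer built-free sites.
      print(deg_box(45.0, 120.0))   # (44.9, 119.9, 45.1, 120.1), the box quoted above
      print(km_box(45.0, 120.0))    # slightly wider in longitude than 0.1 degree
      print(0.1 * KM_PER_DEG_LAT * math.cos(math.radians(45.0)))  # ~7.9 km: east-west reach of 0.1 deg at 45N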

      What else do I know about these sites? Are some actually 49% built?
      How can we tell? Well, we consider other measures and other sensors.

      In 1950, the average population density of these “unbuilt areas” was 10 people per sq km. By 2000 that average density had increased to 20 people per sq km. These ain’t no suburbs.

      What about the “not very rural” sites?
      In 1950, the average population density? 320 people per sq km.
      By 2000 this has grown to 733 people per sq km.

      What does 300 people per sq km look like?

      http://maps.google.com/maps?q=42.6500,26.9830&hl=en&sll=32.651155,-103.380833&sspn=0.045673,0.104628&vpsrc=0&t=h&z=16

      What does 10 people per sq km look like? Here’s a BEST very rural site, 10 people per sq km:

      http://maps.google.com/maps?q=29.7956,-82.9178&hl=en&sll=62.417088,179.116631&sspn=0.100469,0.41851&vpsrc=0&t=h&z=16

      What else?
      Let’s look at nightlights. According to Imhoff, the nightlights indicate whether a place is rural or not. His cutoff is a value of 30: below 30 is rural, between 30 and 60 is peri-urban, above 60 is very urban.
      My 14K very rural sites? Average nightlights? 6.
      Not so rural: average is 71.
      So, by looking at areas with no built pixels in a 121 sq km area, you also find no lights. No lights means no electricity.

      What else? Let’s look at impervious surface data.
      As the literature explains, impervious surfaces start to affect an area when they exceed 10%-20% of the surface.

      The 14K rural sites: average 0.75% impervious surface; that’s less than 1% impervious surface.
      Not so rural? 12% impervious surface.

      What else do we know? Well, we can also look at land use for these areas. We have an independent measure of land use.
      The 14K sites: 0.46% of the land is designated as urban. That’s right, less than 1%.
      The not so urban sites: 12% of the land is urban.

      What else do we know?
      Of the 14,000 sites, roughly 500 are within an administrative boundary of an urban area; the rest are outside administrative boundaries.
      Of the not so urban sites, about 60% are inside administrative boundaries, like suburbs.
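
      A rough Python sketch of what such a multi-metric screen might look like (the field names and example records are hypothetical, and the thresholds are simply the figures quoted above; this is not the actual metadata processing):

      def nightlights_class(dn):
          # Imhoff's cutoffs as quoted above: <30 rural, 30-60 peri-urban, >60 very urban
          if dn < 30:
              return "rural"
          if dn <= 60:
              return "peri-urban"
          return "very urban"

      def looks_very_rural(site):
          # combine the independent screens discussed above (hypothetical field names)
          return (nightlights_class(site["nightlights"]) == "rural"
                  and site["impervious_pct"] < 10        # below the 10%-20% range where effects begin
                  and site["urban_land_pct"] < 1         # land-use map shows <1% urban
                  and not site["in_admin_boundary"])     # outside an urban administrative boundary

      # hypothetical site records built from the averages quoted above
      very_rural_avg   = {"nightlights": 6,  "impervious_pct": 0.75, "urban_land_pct": 0.46, "in_admin_boundary": False}
      not_so_rural_avg = {"nightlights": 71, "impervious_pct": 12,   "urban_land_pct": 12,   "in_admin_boundary": True}

      print(looks_very_rural(very_rural_avg))    # True
      print(looks_very_rural(not_so_rural_avg))  # False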

      Let’s take a look at an example of a “rural” site using these rules.
      You’ll note that there are some buildings which fall below the sensor threshold (the 49% of a single pixel and two contiguous pixels as built).
      This site has a total population of about 60 people, 1-2 per sq km.
      It had the same population in 1950.

      http://maps.google.com/maps?q=51.7670,87.6000&hl=en&ll=51.76869,87.603736&spn=0.016784,0.052314&sll=58.216,-62.583&sspn=0.007143,0.026157&vpsrc=6&t=h&z=15

      Another?

      http://maps.google.com/maps?q=35.5194,-111.3679&hl=en&ll=35.519976,-111.368001&spn=0.005519,0.013078&sll=51.76869,87.603736&sspn=0.016784,0.052314&vpsrc=6&t=h&z=17

      Another?

      http://maps.google.com/maps?q=32.6500,-103.3833&hl=en&ll=32.651155,-103.380833&spn=0.045673,0.104628&sll=35.519976,-111.368001&sspn=0.005519,0.013078&vpsrc=6&t=h&z=14

      I know of no field study of areas like this that shows any “UHI” from an area that has no tall buildings, no substantial changes to surface properties and no waste heat.

      If, upon inspection, people want to redefine what they mean by rural and say that a place like this is what they are worried about when they talk about UHI figures, then a lot of science on UHI will have to be rewritten.

      • steven,

        Now why didn’t you tell us all that, before now? Let’s move on:)

      • Steven Mosher

        I’m kinda in the middle of this work and really should not take time out to fuss with this, but it wasn’t that hard since the tools for doing this are already built. It just takes hours to do all the processing. There are other things, but I don’t have the time to go in depth.

      • PS: Steven, why are the 1/3 of sites that showed a cooling trend so homogeneously mixed in with the sites that showed a warming trend?
        And your study is significantly better than theirs.

      • “And your study is significantly better than theirs.”
        Agreed

        UHI seems to be a very complex area of uncertainty WRT AGW. Not only buildings and asphalt but also trees and vegetation have a role.

        a study of the effect of trees on Rhode Island UHI
        http://envstudies.brown.edu/reports/TreeReportForWebPrime.pdf

      • Steven Mosher

        I talked a little bit about this on CA. No clear answer yet, but I have some pretty strong leads.

      • Don:

        Don’t be fooled by a picture of a rural area with 10 people per square km. You can’t tell what Mosher did to come up with his population density numbers. But look at this population density map of the US.

        http://en.wikipedia.org/wiki/File:USA-2000-population-density.gif

        Notice the areas with the very highest category of population densities in places like the northeast coast of the US. Notice the population density of areas like LA, San Francisco, Chicago: 250 – 66,995 per square mile. Then go to this map that BEST supplies of where their “very rural” sites are:

        http://www.berkeleyearth.org/Resources/Berkeley_Earth_UHI

        Notice that the LA area, the San Francisco area, and the northeast coast are totally saturated with “very rural” thermometers. All of the areas that have the highest population density classification are covered completely with rural thermometers. In fact, there are “very rural” thermometers covering areas that I know to be continuous city. So the idea that Mosher is trying to pass off that “very rural” must be equal to “cow pasture” is just nuts.

      • Tilo,

        I understand that. The entire Northeastern Atlantic coast is an UHI. However, I believe that Mosher has done a much better job of getting at a real analysis than BEST has done. He still would come up with a bogus result, because the thermometers are located where the people are. Very rural is a bogus classification, because there ain’t any thermometers, unless there are people around. And I would bet that most of the “very rural” sites are at times downwind from one, or several urban sites.

        This is interesting:

        http://climateaudit.org/2010/12/15/new-light-on-uhi/

        The magnitude of some of the UHI effects is huge.

      • Don:

        Yeah, those numbers that McIntyre has are jaw-dropping. What I find interesting is that Lynchburg, Va., with a population of only 7,000, has a UHI effect of over 5C. And that population and build are sparse enough that it probably doesn’t meet the BEST definition of urban.

        So, we get back to the question: how could BEST possibly come up with a negative UHI effect? There is no other possibility than that Mosher is wrong about the nature of “very rural” places.

      • Tilo,

        I don’t think that mosher has to be necessarily wrong. I think that he can be at least technically correct in his analysis. But those supposedly “very rural” areas are intermixed with UHI areas, and are under UHI influence. And since they are not unpopulated areas, there is still local human influence. As Jacob stated elsewhere, you have to compare UHI areas with the boondocks, where only bears and moose tread.

        What I would like to see somebody explain is why the 1/3 of sites that exhibited a cooling trend are so well-mixed with the warming sites. Again, looking at the Northeast Atlantic Coast continuous heat island, you see a lot of blue dots among the solid red. I ain’t buying it.

      • By the way Don, one of the articles on WUWT shows the number of rural stations that are used by GISS:

        http://wattsupwiththat.files.wordpress.com/2011/10/palmer_figure1.png

        Notice that by 2010 it looks like less than 1% of the GISS stations in the US are classified as rural by GISS. And yet BEST classifies around 40% of their thermometers as “very rural”.

      • Mosher:
        “I think your concern that the entire 121 sq km area could be 49% built is unfounded.”

        As I start reading your post I can see that you are babbling like a moron again and that you still haven’t read what I said.

        I don’t care about the area within 10 kilometers of the urban area. It’s classified as “not very rural”. So it’s not what they use in any case. And I know that the area they do use must be more than 10 kilometers away from the urban area. I’ve already stated that a dozen times. I don’t need for you to explain to me what I explained to you long ago. But I don’t care if it’s 10 kilometers away or 100 kilometers away, so stop ranting about the 1/10 of a degree. What counts is the definition. And the definition says that all the places, wherever they are, that are classified as very rural, can be up to 49% built. I also don’t care if they are 49% built. They can be 20% built. It makes no difference to my argument. The argument that you still don’t understand. And that’s why I’m not going to bother reading the rest of your post. You keep ranting about things that you think I’m talking about, but all you are doing is having an argument with an imaginary Tilo that is in your head. And you are making up the imaginary issues that you claim your imaginary Tilo has. You are not debating with me. After I’ve made it clear enough for a 10 year old, you are still jousting with straw men. Now go away and stop wasting my time for god sake.

      • All kinds of things can occur. I don’t think anybody would argue with that. The point is doing some research and using some common sense to determine how many problematic cases are likely to be included and how much they are likely to affect the outcome.

        Making classifications with tighter and tighter criteria and failing to find statistically important effects at any step is rather good evidence of the weakness of the effect, even when there are some questions that cannot be answered.

      • Yes, for example, with the “screen” of no built pixels within .1 degrees you still end up with airports in the “rural” class. But with enough various forms of metadata you can eliminate most of the obvious problem cases. What you can’t eliminate is the exception people concoct out of whole cloth.

      • Infilled metadata is still a fabrication in the same way that infilling temperature data is still a fabrication.

      • Steven Mosher

        “Infilled” metadata? Sorry, in the work above there is no “infilled” metadata. There were 2000 cases where metadata was missing. I leave those out.
        So, once again, Bruce, you have no idea what you are talking about.

        Of the 1.5C of warming BEST has found since the LIA, how much of that is UHI? Your best guess?

      • There are many things we don’t know mosher. I understand you think of CO2 as the cause, but I doubt CO2 is so whimsical a gas as to cause cooling in so many places.

        I’ve offered UHI references for Japan, Korea, Hong Kong that claim half or more of all warming is UHI.

        Therefore we have to assume UHI is responsible for at least 50%. We also must assume that UHI kept some of the cooling stations from cooling as much as they would have. Therefore we really don’t know how much it has warmed since the warm 1800s.

        Did you know 1826 was the 2nd warmest Jun/Jul/Aug in HADCET?
        And 1846 was the 6th warmest JJA?
        And 1818 was the 4th warmest Sep/Oct/Nov?
        And 1834 was the 2nd warmest Dec/Jan/Feb?

        Your cracks about the LIA show you are quite misinformed.

      • Steven Mosher

        So, Bruce.

        Of the 1.5C of warming that BEST found since the LIA, you believe that 50% of it is due to UHI. So, you believe that the real land temperature series should be .75C warmer than the LIA? We don’t need to discuss the cause, just the .75 figure. Is that your final number?

        So, if we removed all the urban stations from the record, you believe that the land record will fall to a .75C increase from the 1.5C that BEST shows?

        Correct?

      • Steven Mosher

        Here, Bruce, this will help your case.

        from 1833 to present over 50% of the warming of this city is due to UHI

        http://www.google.com/url?sa=t&rct=j&q=uccles%20uhi&source=web&cd=8&sqi=2&ved=0CFgQFjAH&url=http%3A%2F%2Fwww.i-sup2008.org%2Fpresentations%2FConference_4%2FVanWeverberg_K.pdf&ei=332oTsfdGszWiALIwoyqBg&usg=AFQjCNEuOocxFv8d2Gueq8jQ7j5CDHUe9Q&sig2=k_2bF3_JUJtPGKH6A1z-kg

        So, we’ve got some evidence that for cities like Hong Kong and Tokyo and Brussels, cities with populations over 1 million, the UHI contribution is about what you say: around 50%.

        Agreed?

      • mosher, I’ll clarify. The second warmest summer in HADCET history was 1826.

        Is it hotter now in HADCET? Not really. Most winters are less cold. Only one summer was hotter and that was 1976.

        It isn’t 1.5C warmer.

        This seems to be another facet of the whimsical nature of CO2. It makes winter less cold, but summers (even with UHI) were warmer 35 and 185 years ago.

        What theory about CO2 suggests it only makes winters less cold? And makes 1 out of 3 locations colder?

        Try coming up with a theory that matches such whimsy.

        I’ll go with waste heat from humans having a greater impact on winter averages.

      • It’s interesting to consider HADCET.

        4th hottest Jan – 1834
        Hottest May – 1833 by 1.2C, 2nd hottest 1848
        Hottest Jun – 1846, 2nd hottest 1676, 3rd hottest 1826

      • Steven Mosher

        Bruce, you are losing focus. The question is the amount of UHI, not the warmest year or CO2. Let’s stipulate that CO2 causes no warming. OK? You know what that means: for the sake of discussion we will agree that CO2 is a nullity. This is about UHI.

        You’ve mentioned Korea, and Japan, and Hong Kong, and I gave you Brussels. With Brussels the UHI was .8C.

        So, you don’t know how much it has warmed or cooled since 1800; I understand your position now.

        How big is UHI? You can’t say 50%, because then I’ll ask: 50% of what?
        Or maybe you would like to say how much warmer it is today.

        Let’s take 1800 to 1810.

        call it X.

        Is it warmer today?

        How much?

        And how much of that is UHI?

        Or is it cooler?

        You can make up numbers if you like.

      • I think making up numbers is part of the problem. Creating data because there is a station a long way away and infilling is part of the problem. The population has risen from 1 to 7 billion since 1800. Lots of room for UHI to affect temperature. We should find out how much. And how much albedo has changed. And how much more bright sunshine there is.

      • I’ll give you some numbers. And possibly the only method (or a variation on it) that may someday give us a good idea about current UHI (but won’t help us with historical UHI).

        “The compact city of Providence, R.I., for example, has surface temperatures that are about 12.2 °C (21.9 °F) warmer than the surrounding countryside, while similarly-sized but spread-out Buffalo, N.Y., produces a heat island of only about 7.2 °C (12.9 °F), according to satellite data. Since the background ecosystems and sizes of both cities are about the same, Zhang’s analysis suggests development patterns are the critical difference.

        She found that land cover maps show that about 83 percent of Providence is very or moderately densely-developed. Buffalo, in contrast, has dense development patterns across just 46 percent of the city. Providence also has dense forested areas ringing the city, while Buffalo has a higher percentage of farmland. “This exacerbates the effect around Providence because forests tend to cool areas more than crops do,” explained Wolfe.”

        http://www.nasa.gov/topics/earth/features/heat-island-sprawl.html

      • I think your concern that the entire 121 sq km area could be 49% built is unfounded. To make that case, you must find areas where the “build” is sub-pixel. The notion that this entire area could be 49% built is just flat wrong and provably wrong by considering sensors other than Modis.

        1/3 of stations show net cooling trends, and 2/3 of stations show net warming trends. Of those that show net warming trends, how much is due to buildup near the thermometers can only be known by studying the actual thermometers in question. Could half of the thermometers be affected by the construction and use of asphalt runways, nickel smelters, air conditioner heat exchangers, reflective building surfaces, cattle feedlots, nuclear power plants? Sure. Within 1/10 mile of the Four Corners coal fired power plant almost all the terrain is undisturbed; within 1/10 mile of the San Onofre Nuclear power plant near Camp Pendleton there has been nearly no other buildup since 1950 (highway 1/101 was expanded to I-5); within 1/10 mile of the Mussel Shoals nuclear facility most of the terrain is undisturbed. In fact, almost all of America’s nuclear power plants are surrounded by mostly undisturbed terrain, on purpose. Within 1/10 mile of the Smyrna, TN auto factory and its parking lot, most of the terrain is undisturbed. Within 1/10 mile of the Nichols Institute in San Juan Capistrano most of the terrain is undisturbed. Your operations for distinguishing build-up from non-buildup (and by inference sources of UHI) miss these cases. How representative are these cases? (and there are thousands of like cases in the U.S.) The empirical evidence is insufficient to support a conclusion. Is it likely, in the considered judgment of an expert like you, that such isolated contributions to civilization can be prevalent enough to bias the estimation of the UHI effects? Such an approach to epistemology has never been better than actual exploration and tabulation.

      • So, by looking at areas with no built pixels in a 121 sq km area, you also find no lights. No lights means no electricity.

        If your flight path takes you across central Missouri on a cloudless night you can see the nuclear power plant in southern Callaway County, east of Columbia and north of Jefferson City. It is a small constellation of a few lights, one of them blinking red, surrounded by black for miles around. It’s where the electricity is made that is used in thousands of buildings almost all of which are at least 10 miles away. It’s the complete lack of adjacent buildup that makes the power plant identifiable. The same is true of a complex of 4 nuclear power plants southeast of Chicago: the area around them is totally rural (or almost totally, I haven’t seen the place lately); the same is true of the prison that is south of Chicago, but it has more lights. Similar comments can be made about gas stations along the interstate highways, and gas pumping stations along the pipelines. Unless someone investigates the thermometers in place, it can not be determined how much influence such buildups in rural areas have.

        Your (or anyone’s) judgment may say this or that, but judgment has never been equal to empirical research as a basis for knowledge.

      • a lot of science on UHI will have to be rewritten.

        That’s about the size of it. That is what I have been writing is necessary.

      • Looks reasonable.

    • Harold H Doiron

      Thanks for such an excellent explanation of your important critique of the BEST “findings”.

  3. I think you will find Keenan didn’t exactly object “to … the use of AR(1)”.

    What he says is:

    “BEST did not adopt the AR(1)-based model; nor, however, did it adopt a model that deals with some of the complexity that AR(1) fails to capture. Instead, BEST chose a model that is much more simplistic than even AR(1), a model which allows essentially no structure in the time series. In particular, the model that BEST adopted assumes that this year has no effect on next year. That assumption is clearly invalid on climatological grounds. It is also easily seen to be invalid on statistical grounds. Hence the conclusions of the statistical analysis done by BEST are unfounded.”

    In respect of Tamino’s counter [“they use Monte Carlo simulations for all their statistical testing”] I can only see Monte Carlo used to determine spatial uncertainty. The impact of using a linear rather than an AR model I suspect sits in the assumptions made about the behaviour of local weather and its interaction with the local climate.
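
    As a minimal illustration of why the choice of error model matters, here is a toy Python simulation (assumed parameters, not BEST’s or Keenan’s actual analysis): a linear trend fitted to AR(1) noise has a much larger true uncertainty than the formula for uncorrelated residuals suggests.

    import numpy as np

    rng = np.random.default_rng(0)
    n, phi, n_sims = 120, 0.6, 2000          # 120 "years", assumed AR(1) coefficient, simulations
    t = np.arange(n)

    slopes = []
    for _ in range(n_sims):
        e = np.zeros(n)
        for i in range(1, n):                # AR(1) noise: e[i] = phi * e[i-1] + white noise
            e[i] = phi * e[i - 1] + rng.normal()
        slopes.append(np.polyfit(t, e, 1)[0])  # linear trend fitted to pure noise

    # Spread of trends actually produced by AR(1) noise...
    print("empirical sd of fitted slope:", np.std(slopes))
    # ...versus the uncorrelated-residuals formula sd = sigma / sqrt(sum((t - tbar)^2)),
    # i.e. what you get by assuming this year has no effect on next year.
    sigma = 1.0 / np.sqrt(1.0 - phi**2)      # stationary sd of the AR(1) process
    print("white-noise formula sd:      ", sigma / np.sqrt(((t - t.mean())**2).sum()))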

    • Here is Richard Muller quoting Keenan:

      > Keenan’s conclusion is that there has been virtually no valid work in the climate field, that what is needed is a better model, and he does not know what that model should be. He says, “To summarize, most research on global warming relies on a statistical model that should not be used. This invalidates much of the analysis done on global warming. I published an op-ed piece in the Wall Street Journal to explain these issues, in plain English, this year.”

      http://neverendingaudit.tumblr.com/post/11763136868

      Which statistical model is Keenan talking about?

      • Willard, it’s set out in the link in the post.

      • HAS,

        The subject of my rhetorical question was “the statistical model on which most research on global warming relies”, which I believe is the subject of Keenan’s critique and of his WSJ op-ed. So I’ll ask my rhetorical question again: which model does Keenan have in mind when he talks about “the statistical model on which most research on global warming relies”?

        If we read the correspondence between Muller and Keenan, we find that Muller seems to be of the opinion that Keenan would find objection to just about anything. Muller calls that statistical pedantry.

        Do we know if Keenan asked Muller for permission to publish this correspondence? I asked our good Bishop Hill, and he has not answered yet.

        If anyone knows how to contact Keenan, I’d be interested to know.

      • willard –

        In all fairness, there is some ambiguity in the statement:

        “To summarize, most research on global warming relies on a statistical model that should not be used.”

        Grammatically – from a prescriptive perspective – that suggests a reference to a singular model used by all research. However, I would guess that Keenan means that “most research on global warming relies on statistical models that should not be used.” Or, “Most all research projects on global warming rely on a statistical model…

        But if you do contact Keenan, would you mind asking him to explain how scientists with acknowledged expertise and extensive experience in analyzing trends from complicated data could come up with a report that wouldn’t pass a level of scrutiny typical in an introductory course on time series? Could you also ask him why he thinks Judith would put her name on a paper if it were so fundamentally flawed?

      • Joshua,

        Indeed, there might be some ambiguity in the quote I used. Here is a relevant quote from Keenan’s WSJ editorial:

        > To draw that conclusion, the IPCC makes an assumption about the global temperature series, known as the “AR1” assumption, for the statistical concept of “first-order autoregression.” That assumption implies, among other things, that only the current value in a time series has a direct effect on the next value. For the global temperature series, it means that this year’s temperature affects next year’s, but that the temperature in previous years does not. Intuitively, that seems unrealistic.

        http://informath.org/media/a41.htm

        Note also that one can find lots of “AR(1)” in the correspondence Keenan reproduced on his website, with or without permission of the persons concerned:

        http://www.bishop-hill.net/blog/2011/10/21/keenans-response-to-the-best-paper.html

        Still no answer from Bishop Hill, in the thread or by email.

        PS: Just imagine how it would have been useless to discuss with goblins there. May it inspire you, Josh-ua.

      • Update. Bishop Hill just assured me he forwarded the question to Doug Keenan.

      • Steven Mosher

        AR(1), as I recall. You can visit Lucia’s to discuss this since it’s off topic here. You can send a mail to me and I can get it to Doug.

  4. I missed something. Tamino rejected Anthony Watts’ argument that a 30-year time series was more appropriate for his data than the BEST 60-year time series, yet Tamino never demonstrated how he arrived at such an opinion. The easiest way for all to see is a side-by-side comparison: 30-year with 60-year. Maybe Tamino wants to revisit the question and validate his assertion?

  5. The so-called UHI effect is not just an effect of being in the middle of an urban area, but of being in the middle of an area that has heated abnormally fast relative to its rural surroundings.
    Looking only at today, a thermometer in the middle of Central Park, NY, may have less of a “UHI” effect than a thermometer in Times Square or Harlem, or another thermometer in the middle of nowhere but sitting from its inception on top of a tin roof and surrounded by a paved parking lot (say, at a rural roadside gas station that has been there since the 1940s).
    Of course, what counts is not the static difference today but the differential rate of increase of some stations relative to others in the same climatic area, due to the vicinity of local factors with a non-climatic heating effect (engines, asphalt, metal surfaces) which have changed over time. Temperature trends are expected to be different, e.g., at one station that was already in some downtown area in 1940 or 1970, and another station located then outside of the town but now close to a highway in a densely urbanized suburban area.
    Whatever UHI effect exists now at the foot of the Empire State Bldg, it is probably similar to the UHI effect that existed at the same spot by 1950 or 1970 (although more numerous high-rise buildings nearby may have accentuated the effect just a bit). Instead, a station located in a suburb that was rural in 1970 but got developed during the 1980s would show more of an effect.
    In spite of the heroic efforts of Watts’ volunteers and existing documentation on station siting at various dates in the past, it is difficult to ascertain all those effects. Perhaps an experimental approach, putting several thermometers at different locations in the same general (small) climatic area, may be used to “calibrate” how much additional temperature is measured depending on various surrounding features, and then apply this calibration to the historic record at a number of stations in which the surrounding features are well documented over time. No such study has been as yet carried out, I gather.

  6. While I think that William Briggs has made a strong case that the uncertainty is underestimated for BEST, I think his most important point was this comment about the BEST use of kriging to cover areas where they had no or few thermometers:

    “If you have a mine in which at various spots some mineral is found and you want to estimate the concentration of it in places where you have not yet searched, but which are places inside the boundaries of the places you have searched, kriging is just the thing. Your mine is likely to be somewhat homogeneous. The Earth’s land surface is not homogeneous. It is, at the least, broken by large gaps of water and by mountains and deserts and so forth. To apply the same kriging function everywhere is too much of a simplification.”

    This becomes very important in the far north; places like Greenland, northern Siberia, and northern Canada. Most of the thermometers in those places are coastal. But coastal stations are warmed by water once the ice opens up. But the inland areas are not warmed by open coastal waters. So you get a thermal discontinuity between where you are kriging from and where you are kriging to. Basically, you end up assigning warmer temperatures to the inland areas than you should. But the problem with BEST goes further than that. Their algorithm for resolving outliers and other discontinuities is a kind of “reality through democracy” approach. So basically, it appears to me that if you have a few inland stations in the far north that disagree with the many coastal stations, then the inland stations get downweighted – even though they are the ones with the correct answer for the inland temperatures.

    • Their algorithm for resolving outliers and other discontinuities is a kind of “reality through democracy” approach. So basically, it appears to me that if you have a few inland stations in the far north that disagree with the many coastal stations, then the inland stations get downweighted – even though they are the ones with the correct answer for the inland temperatures.

      It depends on how similar the inland stations are to each other (variance of the differences), how similar the coastal stations are (their variances) and how similar the inland stations are to the coastal stations. How much improvement you obtain (measured in the sum of MSEs of the estimates) depends on how accurate the estimates of the population variances and covariances are. Much of this is covered in a highly cited paper by Rob Kass and Duane Steffey ((1989) Approximate Bayesian inference in conditionally independent hierarchical models (parametric empirical Bayes models) Journal of the American Statistical Association, 84: 717-726.). Like almost everything else in statistics and conditional probability, it is counter-intuitive.
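
      A rough Python sketch of the kind of variance-dependent weighting being described (not the Kass–Steffey machinery itself; the station values, counts and noise level below are invented purely for illustration):

      import numpy as np

      rng = np.random.default_rng(1)

      true_means = np.array([-0.4, -0.3, 1.2, 1.1, 1.3])  # a few inland and coastal anomalies (C), invented
      n_obs, noise_sd = 12, 0.8                            # observations per station, measurement noise
      obs = true_means[:, None] + rng.normal(0.0, noise_sd, (len(true_means), n_obs))

      station_mean = obs.mean(axis=1)
      within_var = noise_sd**2 / n_obs                     # variance of each station mean
      between_var = max(station_mean.var(ddof=1) - within_var, 0.0)  # crude estimate of real station-to-station spread

      # Shrinkage factor: how strongly each station is pulled toward the overall mean.
      # If the stations genuinely disagree (large between_var) they keep their own values;
      # if the differences look like noise, they get pulled together ("downweighted").
      shrink = within_var / (within_var + between_var)
      eb_estimate = (1 - shrink) * station_mean + shrink * station_mean.mean()

      print("raw station means:", np.round(station_mean, 2))
      print("shrunk estimates :", np.round(eb_estimate, 2), " shrinkage factor:", round(shrink, 2))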

    • Tilo

      >Your mine is likely to be somewhat homogeneous<

      Bluntly, that depends exactly on the nature of the ore body and host deposits. It's true enough for some ore bodies and completely fallacious for others


      • Yes, gold is far from uniform while coal often is. Given all the practical experience with kriging, this should be well understood in the field. I doubt temperature is that uniform. But if the temperature records are “homogenized” before analysis a lot of the variances will disappear.

        Yes, gold is far from uniform while coal often is. Given all the practical experience with kriging, this should be well understood in the field. I doubt temperature is that uniform. But if the temperature records are “homogenized” before analysis a lot of the variances will disappear.

        This discussion is nearing collapse from a lack of knowledge.

        Gradations and concentrations of ore will range over many orders of magnitude and show huge excursions over short distances. Temperature on the other hand occupies a relatively small interval around 270 K and is relatively well mixed due to convection and diffusion of the fluidic medium and to a much lesser extent the diffusion/conduction of heat itself. What kriging won’t get is those shallow spatial convexities and concavities that come about from local microclimates that the algorithm cannot deterministically predict, but then again those may turn out just as likely to cancel out in positive and negative excursions. That becomes a wash.

        But to compare it to where kriging is used in mineral exploration is kind of absurd. The point is that performing sample mining cores and bore holes is very expensive, and the gamble is to use kriging to estimate the probability of making a discovery on the cheap versus investing in equipment. The estimate may give you a probability of 0.6 of finding a mineral that you are anticipating and it turns out to be completely dry and the prospecting company gets skunked. That isn’t going to happen with temperature anywhere near to the same extent, as you will always get a better interpolated number.

        Why isn’t kriging used more often for spatial temperature estimations? Thermometers are cheap and the simplest interpolation schemes work well.

        BTW, kriging is not used anywhere near as much in oil exploration as it is used in mineral exploration.

    • I should say that a better reference is “A Comparison of the Bayesian and Frequentist Approaches to Estimation” by Francisco J. Samaniego. He cites the Kass and Steffey paper, but presents the case more broadly. He does make the case that the superiority of the Bayesian approach to the frequentist approach depends in practice on how accurately the prior distribution is known — or estimated in the empirical Bayesian approach used by BEST.

      Here is where the critique of kriging applies. It would seem most unlikely that the correlation of anomalies in Denver with anomalies in Kansas is the same as the correlation of anomalies in Denver with anomalies in Utah, but the kriging assumption, and the model used in BEST, is in fact that these correlations are a function of distance only, not of specific location or intervening terrain. Even if the assumption is likely true in places such as the vast plains of the US and Argentina or the great flat plains and deserts of central Asia, including places like the Rockies, Alps, Andes and Urals in the estimation must introduce inaccuracy. One can’t tell by the seat of the pants or a back-of-envelope calculation how much inaccuracy. For reference, the book “Dynamic Analysis of Weather and Climate” by Marcel Leroux presents a load of evidence for terrain effects in heat flows.
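
      A minimal Python sketch of the assumption being criticised: an isotropic, distance-only correlation model of the kind used in kriging, which assigns the same correlation to any pair of stations at the same separation regardless of intervening terrain (the coordinates and the 1000 km correlation scale are illustrative, not BEST’s fitted values):

      import math

      def distance_km(p, q):
          # great-circle (haversine) distance between two (lat, lon) points
          lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
          a = (math.sin((lat2 - lat1) / 2) ** 2
               + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
          return 2 * 6371 * math.asin(math.sqrt(a))

      def correlation(p, q, scale_km=1000.0):
          # distance-only (exponential) correlation: identical for any pair at the same distance
          return math.exp(-distance_km(p, q) / scale_km)

      denver, kansas_city, salt_lake_city = (39.74, -104.99), (39.10, -94.58), (40.76, -111.89)
      # Similar distances east and west of Denver give similar correlations,
      # whether the intervening terrain is the Great Plains or the Rockies.
      print(correlation(denver, kansas_city), correlation(denver, salt_lake_city))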

  7. The dispersion in the BEST temperature change rates is interesting. A third are negative but that also means some are quite large to balance that out.
    http://4.bp.blogspot.com/-k1vYjQ8Qthk/TqdulOhrSEI/AAAAAAAAAko/0H9kUnzIcZs/s1600/vostok_temperature_changes.gif
    I don’t think this dispersion is sustainable as it would mean that the temperature differences will continue to diverge spatially. I don’t see how that can continue and it points to the dispersion as being part of the variability and oscillations.

    This is what the related temporal dispersion looks like at Vostok. This is a single location so can’t be compared directly, but you can see how rare the large 2 degree C changes per century are:
    http://1.bp.blogspot.com/-pEEYO-I0_fA/TqduktntOEI/AAAAAAAAAkg/F2PKph74I3k/s1600/berkeley.gif

    Dispersion is a huge and often overlooked feature of many natural phenomena, and I have found that probability measures such as Maximum Entropy can really help quantify the observations.

  8. Science plus politics lessens the value of each.

    What a sad, sad day for science.

    What a sad day for UC-Berkeley.

    With kind regards,
    Oliver K. Manuel
    http://myprofile.cos.com/manuelo09

  9. http://www.berkeleyearth.org/Resources/Berkeley_Earth_UHI

    Look at figure 2: Does that look (black) like a true representation of the “very rural” land areas on our globe’s land surface? Not much in Africa or South America. I guess things have changed since my last visits. And in Australia it looks like the places where the people live are now very rural. That’s sad. I liked Sydney the way it was.

    Look at figure 4: Is it possible that the respective locations of long-term warming trends and cooling trends could be that homogeneous?

    Something is wrong with this story.

    • You must have good eyesight to see anything one way or another in fig 2.

      • Kermit,

        Well, do you think that Africa and South America are very nearly devoid of very rural sites? Do you think that the very rural sites in Australia are located in NSW and on the opposite coast, where the big cities used to be? Can you see any of that? Can you find Africa and South America? Look in the middle.

      • There aren’t many cities I could pinpoint to the exact pixel on a map of that scale. Many cities may have large rural areas bordering them and have a station there. In Africa, maybe I could guess Cairo since it is so big. But it borders the desert, and so within a couple of mm I probably would be in the middle of a desert. Maybe most stations in Africa are at private airstrips in rural areas, I don’t know.

      • Kermit,

        My point is that you don’t see many black dots in places like Africa and South America representing stations that are “very rural” according to the study. Compare that with the US and other countries with more urban populations, and particularly Australia, where the black dots are on the coast, where the people live, and not in the interior. To my eagle eye, that looks like they are missing something. But I have been wrong once before.

      • Steven Mosher

        Don.

        Yes, South America and Africa tend to have a void of rural stations compared to places like the US, Canada, and Australia.
        In SA and Africa the sites tend to be associated with cities, since people have to read the temperatures.

        Nick has a nice Google Earth tool, so we can basically take an inventory of stations and then go look at them in GE. It’s exhausting but you learn a lot.

    • With regard to figure 2, they state in the paper that only 18% of US sites qualify as “very rural”, but because of the way the figure is constructed where stations are dense (the example they use is the US, but it most likely also holds true in other areas where stations are dense), it appears as though very rural stations dominate.

      • Rattus,

        Yes, I read that. What I am pointing out is that there is very little representation of “very rural” stations in places that are far more “very rural” than the US, Great Britain, Northern Europe, and the populated portions of Australia. Look at Australia: the black dots are where the people live, not in the ‘bush’. Look at Africa and South America. Put simply, unpopulated and sparsely populated areas are under-represented in the temperature record.

        And what about figure 4? Does it make sense that warming trend stations and cooling trend stations would be so homogenized? I would also be interested to know how many stations showed no significant trend. They only say 2:1 warm vs. cool.

  10. Doug Keenan was the first out with this critique. With regards to the AMO paper, Keenan objects to a 12 month moving filter and the use of AR(1).

    Since Keenan was being fronted as a “mathematician” I thought, great, maybe at last some common sense. Turns out Keenan dropped out of mathematics early on and went into finance. His idea of mathematics is to repeat dogmatic slogans like “Unless the data is measured with error, you never, ever, for no reason, under no threat, SMOOTH the series!”

    I have no idea what he’s talking about since all data that exists anywhere on the web has been smoothed to one degree or another. What he says is ridiculous. You can’t even get started in statistical analysis without working with smoothed data.

    • Can you elaborate Vaughan? I do a good bit of statistics and never start by smoothing my data. I think you must be referring to some specific domain of data. Which one… time series? What sort of smoothing do you mean? Just curious.

    • I am not a statistician but I think he means that you do not smooth using simple moving averages unless you know what your data looks like and the same applies for averages. For example sometimes a mode or median is more important than the mean. Or maybe outliers or the variance is of interest and smoothing hides this.

      I have seen weighted moving averages used instead of simple moving averages also. This is usually because the data closer to the present can be of more importance.

      If that is correct (please correct me if I am wrong), then the question becomes what kind of smoothing and how much you should do. If climate is long term then why not use a 30 year moving average? or why not 3 years?

    • I think one thing that Vaughan is referring to is that even a sensor is a filter. If you don’t integrate for a long enough time for each digital sample, then you might not get enough of a signal above the hum and shot noise. Yet an integrating sensor or sample-and-hold are by their very nature already filters. That’s why you can’t get away from filtering.

      • Vaughan said “statistic analysis” not measurement

      • Vaughan said “statistic analysis” not measurement

        Per what Vaughan says below you are off the mark, and then you start backpedaling by trying to “decouple” the measurement from the analysis. Don’t try to make it so hard, it is just the underlying data that we are trying to find a model for.

      • I was not back-pedalling. I just thought your sensor comment was pedantic.

      • WHT is right, even a sensor is a filter. That’s a statistical operation, I don’t draw any distinction there. A camera taking a 1/1000 second exposure has filtered to the point of badly blurring 100-microsecond information.

        Anyone looking at 150-year records is lucky if they can get any data that has not been filtered over less than a month. There are around 2.6 million seconds in a month, that’s a lot of filtering.

        @kermit If climate is long term then why not use a 30 year moving average? or why not 3 years?

        Indeed. What I don’t like about moving-average filters is that they leak too much high-frequency information, as can be seen from their sinc-shaped frequency response. A filter of width 1/ν seconds (years, whatever) completely kills all harmonics (integer multiples) of its fundamental frequency ν, but the half-integer multiples (n+1/2)ν come through in proportion to 1/n, which isn’t that great.

        One very simple trick for dealing with this limitation of the moving average filter is to apply it twice. The effect of this in the frequency domain is to square the response. The integer multiples of the fundamental remain completely killed, but now the half-integer multiples have amplitude 1/n² instead of 1/n, which in practice is a big improvement. I’ll be talking about this (among other things) at the upcoming AGU meeting in San Francisco on December 8 for those in the neighborhood then.

        Even so this is still pretty primitive as there are lots of other more sophisticated digital filters.
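
        A small numpy sketch of the point above (purely illustrative, not tied to any particular dataset): the magnitude response of an N-point moving average at the half-integer multiples of its fundamental, for one pass and for two passes of the same filter.

        import numpy as np

        N = 12                                  # e.g. a 12-month moving average
        h = np.ones(N) / N                      # single-pass moving-average impulse response
        h2 = np.convolve(h, h)                  # applying the filter twice = convolving it with itself

        def response(h, f):
            # magnitude of the filter's frequency response at f cycles per sample
            return abs(np.sum(h * np.exp(-2j * np.pi * f * np.arange(len(h)))))

        fundamental = 1.0 / N
        for k in (1.5, 2.5, 3.5):               # half-integer multiples of the fundamental
            f = k * fundamental
            print(f"f = {k} x fundamental   one pass: {response(h, f):.3f}   two passes: {response(h2, f):.4f}")
        # Integer multiples of 1/N are nulled by both; the half-integer leakage drops from
        # roughly 1/(pi*n) for a single pass to its square for two passes.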

      • Link to your AGU meeting talk when it comes up. It sounds interesting.

        I suppose if all the other studies used decadal moving averages we may be stuck with it for that reason only. Most people will not realize the significance of the very possible recent flattening out.

        In regard to the sensors, yes, they are statistical operations but that’s going down a layer of hierarchy, so in a practical world you need to decouple measurement/data from the statistical analysis and make assumptions or incorporate the error calculations on the sensors/measurements, which in turn are based on other sensors…
        For one thing the decoupling gives the measurements/data less bias, or at least a bias from another neutral source. If you do the analysis, then use that as your data and apply more analysis, you could be adding your own bias to the result several times.

      • woodfortrees

        By the way, thank you for the toolset. It’s provided hours of amusement and nostalgia.

        It does seem to give quite a nice smoothing, but it does of course lose you two lots of N/2 endpoints, which might disappoint those for whom the last ten years are much more interesting than the big picture…

        That, I can help you with.

        As it happens, I’m conversant with several methods to hide the decline. I’ll suggest three, though there are many more.

        a) No one says you have to use the same two-pass filter on the entire curve:

        http://woodfortrees.org/plot/best/mean:13/mean:17/plot/best/mean:5/mean:7/from:2009.15

        b) You could just use a single pass for the portion at the endpoint:

        http://woodfortrees.org/plot/best/mean:13/mean:17/plot/best/mean:7/from:2009.15

        c) Create the 17-year trendline (or whatever you believe is the minimum valid trendline you can live with), plot a second linear trend with detrending and offsetting for only the endpoint, tuned to terminate where the 17-year trendline does, then remove the 17-year trendline. I imagine that’d cause howls of outrage, though mathematically no less valid than the other methods:

        http://woodfortrees.org/plot/best/mean:13/mean:17/plot/best/from:2009.15/trend/offset:-0.165/detrend:-0.65

        Hope this helps.

      • Vaughan Pratt

        “One very simple trick for dealing with this limitation of the moving average filter is to apply it twice.”

        http://woodfortrees.org/plot/best/mean:60/plot/best/mean:60/mean:60/offset:0.2/plot/best/mean:204/offset:0.5/plot/best/mean:360/offset:0.8

        A picture is worth a thousand words.

        That two-pass filter at 5 years is vastly superior to a single pass at 5, 17 or 30 years.

        Or at least prettier.

      • Bart, you’re a genius! It never occurred to me that woodfortrees let you compose filters. I overlooked the operative word “sequence.” That makes Paul’s website vastly more useful. Girma will have a field day if he ever figures out how to exploit that additional functionality. :)

      • I hadn’t considered doing that, either… That’s the beauty of a toolkit, you can do stuff the toolmaker hadn’t thought of. :)

        It does seem to give quite a nice smoothing, but it does of course lose you two lots of N/2 endpoints, which might disappoint those for whom the last ten years are much more interesting than the big picture…

        Enjoy!

        Paul

      • Vaughan Pratt

        I can’t take credit for the genius of the technique.

        I learned it here from another poster in a previous thread, but did not fully appreciate the significance.

        I’m also extremely rusty about selecting values for multipass filters, but seem to recall there’s a good reason to use relatively prime numbers and to tune the filters to the data.

        http://woodfortrees.org/plot/best/mean:13/mean:17/from:1955/plot/esrl-co2/offset:-325/scale:0.016/mean:7/mean:11/plot/jisao-pdo/mean:11/mean:13/from:1955/scale:0.14/offset:-0.1/detrend:-1.05

        There’s also good reason not to do this, as the choice of filters generates whatever narrative the manipulator chooses, and it’s easy to be fooled into forcing data to say whatever one wants.

        In the above, I interpret PDO as insensitive to CO2 level (so I detrend to emphasize the fit of PDO to BEST), but CO2 level as explaining BEST overall on the half-century scale (the half-century rise of BEST is about ten times the 2-15 year rises, and matches the rise in CO2). It’s obvious in some ways that temperature rise leads small CO2 rises, and even small CO2 falls, but that CO2 is resistant to dropping regardless of what BEST does, and so BEST returns to follow the CO2 trend, not the other way around. It’s obvious PDO leads BEST fairly well (much more strongly than CO2 on the 2-15 year span, though not so strongly as CO2’s influence over the half-century) by a lag that seems to increase as CO2 increases, as if the hotter the system gets, the more the former correlations deteriorate. It’s easy to believe this might be true, but it’s just squiggles I manufactured.

    • I’m too used to the discrete data I deal with. Choices, votes and so forth. I don’t think of these as filtered, nor would I want a sequence of them (essentially zeroes and ones) to come to me pre-processed in any way. I tend to forget that a lot of physical measurement is some sort of algorithmic processing of signals and noise, and hence already pre-processed.

    • Michael Larkin

      That quote (referred to by Keenan IIRC) was from W.M. Briggs’ blog and is talking about time series:

      http://wmbriggs.com/blog/?p=195

      W.M. Briggs is a PhD statistician with experience in meteorology. His CV’s on his site.

      The actual quote was:

      “Now I’m going to tell you the great truth of time series analysis. Ready? Unless the data is measured with error, you never, ever, for no reason, under no threat, SMOOTH the series! And if for some bizarre reason you do smooth it, you absolutely on pain of death do NOT use the smoothed series as input for other analyses! If the data is measured with error, you might attempt to model it (which means smooth it) in an attempt to estimate the measurement error, but even in these rare cases you have to have an outside (the learned word is “exogenous”) estimate of that error, that is, one not based on your current data.

      “If, in a moment of insanity, you do smooth time series data and you do use it as input to other analyses, you dramatically increase the probability of fooling yourself! This is because smoothing induces spurious signals—signals that look real to other analytical methods. No matter what you will be too certain of your final results! Mann et al. first dramatically smoothed their series, then analyzed them separately. Regardless of whether their thesis is true—whether there really is a dramatic increase in temperature lately—it is guaranteed that they are now too certain of their conclusion.”

      • Sure, if you have infinite bandwidth and infinite storage space, you can capture any raw signal and deal with it algorithmically. Unfortunately, the real world gets in the way and you end up dealing with a filtered sensor signal right from the start.

        Cripes, if I had gigabytes worth of computing years ago, I could have taken the noisy photomultiplier signal and sampled at some high rate without fiddling as much as I did. Nature fought us, but we still found interesting stuff from our integrated filtered signals. YMMV.

      • The only argument I know against smoothing is that it increases r² (Pearson correlation squared). The fallacy in that argument is that the denominator in r² is the variance of the data. Smoothing the data inevitably reduces the variance of the denominator, and bingo, up goes r². This has nothing to do with real correlation, it is purely an artifact of how you choose to define correlation. Anyone comparing r²’s from smoothed and unsmoothed data is living in a state of statistical sin.
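
        A toy illustration of that mechanism: smooth two series of pure, independent noise, where any correlation is spurious by construction, and the smoothed r² still comes out much larger than the raw one. (All numbers below are invented.)

          import numpy as np

          rng = np.random.default_rng(1)

          def smooth(z, width=25):
              # box-filter moving average, trimmed to the full-overlap region
              return np.convolve(z, np.ones(width) / width, mode="valid")

          raw_r2, smoothed_r2 = [], []
          for _ in range(200):
              x = rng.standard_normal(500)
              y = rng.standard_normal(500)  # independent of x, so true correlation is zero
              raw_r2.append(np.corrcoef(x, y)[0, 1] ** 2)
              smoothed_r2.append(np.corrcoef(smooth(x), smooth(y))[0, 1] ** 2)

          # the smoothed pairs show a much larger average r^2, purely because
          # smoothing removes variance and leaves far fewer independent points
          print(np.mean(raw_r2), np.mean(smoothed_r2))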

      • The value r² is very difficult to interpret in almost all cases. Sometimes even very low values are due to statistically very significant effects, which are also those of interest, sometimes much larger values of r² are artifacts from the processing. Smoothing is a major complication in interpreting r², but not the only one.

        In the case of the Earth system, we know that we have two dominant accurate periods: 24 h and 1 year. Whenever any analysis concerns time scales that are not really far from both of these, it’s certainly worthwhile to take advantage of the knowledge that we know the periods exactly, i.e., they should be handled by a method that is not based on a peak in the Fourier spectrum with a width determined by the length of the full period being studied but on the precise period known with certainty. In some cases that can be done using only 24h averages or 1 year averages (not moving averages, but one point per period). In some other cases moving averages or fitted periodic functions may be a better choice.

        It’s clear that optimal efficiency and easily understood full transparency are difficult or impossible to combine in statistical analysis. Traditional methods of time series analysis are backed by good theory, but using them in any practical task means usually that some of the assumptions involved in any chosen method are not fully satisfied. It’s often possible to derive new methods that fit the task better, but that leads to the risk of errors in the method, because many of the pitfalls are difficult to avoid.

      • Sometimes even very low values are due to statistically very significant effects, which are also those of interest, sometimes much larger values of r² are artifacts from the processing. Smoothing is a major complication in interpreting r², but not the only one.

        Pekka and I have had this discussion before, and our respective positions seem unchanged since then. My position on low r² is that there is a large unexplained variance of 1 − r² that needs to be explained one way or another. It is fine to call it noise provided one has an effective definition of noise which the unexplained portion meets, though I prefer a less pejorative term than mere noise. If it’s clearly thermal noise it should be identified as such, likewise if it’s ENSO events with a period below the time scale of interest. Basically “unexplained variance” should be accompanied by a satisfactory explanation of why it is ok to omit it from the model.

        But high r² can easily be meaningless. The top of a gaussian and the top of a cycle of a sine wave can be in extraordinarily good agreement, yet obviously picking the wrong one as the model of data so shaped will give a wildly incorrect projection since a gaussian flattens out while a sine wave continues to oscillate. In such a situation it is helpful to back up the chosen observational fit with a rationalist account (in the sense of Chomsky) of why it should be preferred to another fit with the same or better r².

        Incidentally I should probably amend my statement above that “Smoothing the data inevitably reduces the variance of the denominator” to focus instead on the angle between the data and the model (assuming a convex model space), since the length of the smoothed data vector is not really relevant and can be taken to be a unit vector. Typically the points of model space correspond to smooth data, whence unsmoothed data lies far away from model space. Smoothing tends to swing the data closer to model space, raising r² and therefore the appearance of a better fit. But objecting to smoothing merely because it improves the fit in this way is unreasonable because unsmoothed data is guaranteed to be a bad fit. Smoothing that has been carefully designed to remove only those artifacts that are absent from the model space, for example high frequency signals that don’t exist in the model space, is not merely defensible but essential if one is to obtain a meaningful notion of quality of fit.

      • Vaughan,
        I think the only point where I have noticed a difference in position is what weight to give to those cases where enough is known about the variations to allow small values of r² to reflect statistically significant effects. That was not the issue that I wanted to bring up. I just expressed in different words the point that r² varies widely even when the effect is statistically highly significant. I think you agree on this observation.

        My main point was closely related to your answer to Richard Saumarez on box filters. My comment above and that comment of yours both emphasize that the real, well-known cycles should be taken into account. That knowledge makes 12-month averages (non-overlapping or moving-window) a much more reasonable choice for most purposes than other types of smoothing filters of similar width. Non-overlapping 12-month averages are the easiest to use without introducing spurious effects or misleading values of r², as long as we have good reason to trust that no other (auto- or cross-) correlations at a lag of 12 months vary significantly with time on a time scale of months. I tried to formulate the requirement so that it also prevents spurious effects from aliasing, but it means of course that all variability with a period of less than 2 years is suppressed.
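
        A small sketch of the point about exploiting the exactly known annual period (the monthly values, cycle amplitude and noise are invented):

          import numpy as np

          rng = np.random.default_rng(2)
          n_years = 50
          m = np.arange(12 * n_years)  # hypothetical monthly index
          monthly = 0.001 * m + 0.8 * np.cos(2 * np.pi * m / 12) + 0.2 * rng.standard_normal(m.size)

          # non-overlapping 12-month means: one point per year, each block spanning
          # exactly one full annual cycle, so the cycle averages out exactly and the
          # resulting points are not artificially autocorrelated by a moving window
          annual = monthly.reshape(n_years, 12).mean(axis=1)
          print(annual[:5])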

      • You can invent examples to show that smoothing is necessary (for example, to reveal the high correlation between two highly correlated signals, when each is embedded in a lot of independent noise — the case that has motivated much of the study of optimal filtering), and examples to show that it produces misleading results. Comparing r²s from smoothed and unsmoothed data can be informative, not sin. I think you meant something more complicated than what you wrote. Possibly, your remark is clarified below, though you don’t reference this line explicitly.

      • It’s hard to make any definitive statement about r². My post immediately above is my best shot to date. As Bart says, a picture is worth a thousand words, and I should put up a picture somewhere of just how extraordinarily good a fit you get with the tops of a gaussian and a sine wave cycle.

    • Richard Saumarez

      I have never understood why people “smooth” with a rectangular filter. It has a poor frequency response. If one has a hypothesis about “noise” and wishes to remove it, why not use a filter that is designed to have the correct characteristics?

      • In general I agree, though when the noise is known to be periodic with period p, regardless of its shape (i.e. the disposition of the harmonics of the fundamental frequency can be completely arbitrary), a box filter of width p completely removes noise of that form. This is painfully obvious when you consider the average of one cycle of a signal with period p, regardless of the phase. This helps explain why the sinc function is the frequency response of a box filter: it is zero at the harmonics of the fundamental frequency.

        A triangular filter of width p at the 3 db points (i.e. total width 2p) has a response like that of a box filter of width p, with the nth side lobe being in the same place but of amplitude proportional to 1/n² instead of 1/n. The frequency response is the function sinc²(x), the square of that of the box filter. If your spreadsheet can do a box filter, aka moving average, it can do a triangular filter merely by applying the box filter twice, since the convolution of a box with itself is a triangle.
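
        The difference between the two responses is easy to check numerically; a short sketch (the width of 12 samples is arbitrary):

          import numpy as np

          p = 12                       # filter width in samples
          box = np.ones(p) / p         # moving-average (box) kernel
          tri = np.convolve(box, box)  # box applied twice = triangular kernel of width 2p - 1

          freqs = np.fft.rfftfreq(4096)            # frequency axis, in cycles per sample
          H_box = np.abs(np.fft.rfft(box, 4096))   # |sinc|-shaped response, zeros at k/p
          H_tri = np.abs(np.fft.rfft(tri, 4096))   # |sinc|^2-shaped response, same zeros

          # compare the first half-integer leakage lobe, near frequency 1.5/p:
          # roughly 0.2 for the box filter versus roughly 0.05 for the triangle
          i = np.argmin(np.abs(freqs - 1.5 / p))
          print(H_box[i], H_tri[i])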

  11. Little Rich Muller
    Sat in a cellar
    Cranking his temps
    And his stats.
    He stuck in his plumb,
    Pulled out some fun,
    Then cried ‘Oh, how dumb
    All skeptics are.
    =========

    • A hapless researcher named Muller
      was dipping his sugar-kist cruller
      in some coffee, lukewarm
      that he got from the dorm
      and wishing his life had some color

    • Then cried ‘Oh, how dumb
      All skeptics are.

      I’m troubled by the dangling inverted comma which seems to imply that Muller said much more than this. As indeed he did.

      I’ve not followed this with the passion of some and it’s hard to argue with a piece of poetry but I don’t agree for one moment that this is what Muller cried. And I have a particular aversion to anyone stirring up unnecessary conflict, especially in the AGW area, which I think has enough to be going on with.

      I would say that in his Wall Street Journal article Muller used the term “global warming skeptics” in a way that was either naive or mischievous. He was choosing to take the term rather literally, in other words, but I don’t see any need for anyone to go into a great tizzy about that.

      It certainly wasn’t “all skeptics” that he was saying were dumb, in fact I don’t read him saying that about any skeptics. He explicitly leaves open skepticism about attribution and thus, for sure, skepticism about future catastrophe.

      If he was trying to wind up the hotter-headed who self-identify as skeptics then it seems he’s shown how easy that is to do. But like Lindzen, I fail to see what the big deal is.

      • Data undiscerned.
        What is what in Richard’s mind?
        Curious or not?
        =========

      • Yeah, the “feeding frenzy” article sorta sticks Lindzen in on the end after numerous “less nuanced” critiques.

        Poor Richard.

      • Following ‘mad English graduate’ Delingpole with Lindzen is an old trick. The need to insult, the need for the gratification of some of the basest desires in the human psyche, the need to feel superior, even for a moment. But what Lindzen says, as always, puts the matter in perspective. At least that’s mine.

  12. http://www.berkeleyearth.org/Resources/Berkeley_Earth_UHI

    They say that it is impossible to identify urban or rural areas using the fine resolution MOD500 map. That doesn’t stop them: “Rather than compare urban sites to non-urban, thereby explicitly estimating UHI effects, we split sites into very-rural and not very rural.”

    Why then have they pretended that they explicitly estimated UHI effects? Look at the title to the paper. And does anybody believe that 41% of the total of 39,028 sites were “very rural, distant from urban areas”, when we know that the thermometers are located where the people are?

  13. So, increases in atmospheric CO2 are positively related to the fall of Western civilization, so everything is good, right?

    • Rising CO2 brought more food and energy to billions of people.
      Lowering CO2 would take food and energy away from billions of people.

      • Then the fall of Western civilization is not due to whatever causes fear of increases in CO2 and that obviously flows from the zeitgeist of secular socialism.

      • Vaughan

        There is certainly good data to show that plants have the ability to absorb significantly more CO2 in relation to a change in the quantity available in the atmosphere.

        What there isn’t is good data to show that an upward change in CO2 levels is harmful to plant growth.

        Your example of humans getting fat is not relevant, is it?

      • Can you link to peer-reviewed work affirming this claim?

        I grew up dry-land farming when the Green Revolution went hot, and I never heard a single farmer or extension agent say carbon dioxide. Not once. Anhydrous ammonia? Yeah. Hybrid seeds? Yeah. Till planting? Yeah. Herbicides? Yeah. Pesticides? Yeah.

        Additional CO2? Nope.

      • People who grow plants in greenhouses often more than double the CO2 to get the green things to grow faster with less water.

        Here are some good links with information about CO2
        http://plantsneedco2.org/
        http://co2isgreen.org/

        Look at the Bibliography of the book “Fire, Ice and Paradise” by Leighton Steward. It is full of references to peer-reviewed work.

        I will look for some specific links.

      • I will look for some specific links.

        They would be interesting to read.

        Please be sure that you include links that control for the revolution in farming technology, the phenomenon of factory farms, engineering of seeds and fertilizers, etc., in your links that show the causal relationship between CO2 and bringing food to millions of people.

      • Why do they add CO2 to a greenhouse? Why do you not have to add CO2 to a cornfield?

        Ain’t a lot of greenhouses on farms.

      • JCH
        You may really have a good idea there.
        Pump CO2 from power plants into fields where green things are growing. The corn, or whatever, would grow better while using less water.

      • How are you going to stop the CO2 from blowing away?

      • This article reports on experiments done early in the decade to investigate the impact of various consequences of global warming on net plant production, NPP. Elevating each of CO2, (fixed) nitrogen, water, and temperature separately enhanced NPP. However when all four were enhanced together, a likely outcome of warming, NPP responded negatively to variation in CO2. This was both unexpected and hard to explain.

      • @Vaughan Pratt… You mean This article (free reg req.). Quoting from the abstract:

        Across all multifactor manipulations, elevated carbon dioxide suppressed root allocation, decreasing the positive effects of increased temperature, precipitation, and nitrogen deposition on NPP.

        IOW, the plant spent less effort on building roots, but everything else was improved.

        Thus, better crops.

      • People who grow plants in greenhouses often more than double the CO2 to get the green things to grow faster with less water.

        Does supersizing your diet that way work equally well with people? Do they thrive on twice the food they normally eat?

        If they’d been on a starvation diet previously, doubling the rations would presumably help. One might expect the same for plants: if they’d been starved of CO2 they may well welcome more, but if they adapted to 280 ppmv over thousands of years, can we infer that doubling their CO2, the likely state of the atmosphere by 2055, will make them healthier?

      • This is just my rudimentary understanding. They pump CO2 into greenhouses because the content of CO2 in the air trapped inside a greenhouse can be stripped of a large amount of CO2, creating a situation where a low level of CO2 is actually a limiting factor in plant growth.

        They could just as easily open the windows, but that might expose the plants to another limiting factor: freezing to GD’d death.

        In a cornfield, CO2 is not likely to be a limiting factor. That is why my grandpappy and M. Carey’s grandpappy never prayed for more CO2. And believe me, they prayed for every single limiting factor. Dry-land farming requires a lot of interventions from gawd.

        I would love to have you pipe CO2 in from the power plant. Atmospheric CO2 is at 389 ppm. We didn’t need a speck more when it was at 280 ppm. At night I could steal the pipes and sell them for scrap metal in the next county. Bring them darned pipes on.

      • “We are all aware that ‘the green revolution’ has increased crop yields around the world. Part of this wonderful development is due to improved crop varieties, better use of mineral fertilizers, herbicides, etc. But no small part of the yield improvement has come from increased atmospheric levels of CO2. Plants photosynthesize more carbohydrates when they have more CO2. Plants are also more drought-tolerant with more CO2, because they need not ‘inhale’ as much air to get the CO2 needed for photosynthesis. At the same time, the plants need not ‘exhale’ as much water vapor when they are using air enriched in CO2. Plants decrease the number of stomata or air pores on their leaf surfaces in response to increasing atmospheric levels of CO2. They are adapted to changing CO2 levels and they prefer higher levels than those we have at present. If we really were to decrease our current level of CO2 of around 400 ppm to the 270 ppm that prevailed a few hundred years ago, we would lose some of the benefits of the green revolution. Crop yields will continue to increase as CO2 levels go up, since we are far from the optimum levels for plant growth. Commercial greenhouse operators are advised to add enough CO2 to maintain about 1000 ppm around their plants. Indeed, economic studies like those of Dr. Robert Mendelsohn at Yale University project that moderate warming is an overall benefit to mankind because of higher agricultural yields and many other reasons.

        “That we are (or were) living at the best of all CO2 concentrations seems to be an article of faith for the climate-change establishment. Enormous effort and imagination have gone into showing that increasing concentrations of CO2 will be catastrophic: cities will be flooded by sea-level rises that are ten or more times bigger than even IPCC predicts, there will be mass extinctions of species, billions of people will die, tipping points will render the planet a desert. Any flimsy claim of harm from global warming brings instant fame and many rewards.”

        ~William Happer, Climate Science In The Political Arena, Statement Before the Select Committee on Energy Independence and Global Warming U.S. House of Representatives, May 20, 2010

      • @AK: Thus, better crops.

        Did you perhaps misread “decreasing” as “increasing?” The paragraph you quoted said “elevated carbon dioxide suppressed root allocation, decreasing the positive effects of increased temperature, precipitation, and nitrogen deposition on NPP.” This is simply repeating what was in the other article, namely that when temperature, precipitation, and nitrogen deposition are increased, increasing CO2 decreases NPP. How does a decrease in net plant production translate to better crops?

        The article seems to be saying that NPP is best with increased temperature, precipitation, and nitrogen deposition and decreased CO2. Given how well plants do in the tropical rain forests, it shouldn’t be a surprise that increased temperature, precipitation, and fertilizer help.

      • @Vaughan Pratt…

        Did you read what I wrote?

        First of all, the report I linked to was the peer-reviewed science your news release was referring to. Second, decreasing root allocation may reduce net plant production, but that has nothing to do with above ground NPP. According to the abstract I quoted, a substantial decrease in root production reduced the increase in total NPP relative to above-ground NPP.

        Getting into the detailed report, the sites with all factors elevated, including CO2, got about a 40% increase over today. The sites with everything except CO2 elevated got over 80% increase over today. See Fig. 3 in the report.

        Moreover, even if above-ground NPP had been reduced, that says nothing about the ability of crops to yield. The less mass corn has to spend on leaves, the more it has for the harvest.

        Bottom line, we can expect at least a 40% increase in crop yields.

      • AK
        It does not matter if some of it blows away; it makes other green things grow better. We still win.

      • Vaughan Pratt
        Yes, if people are close to starvation, they do thrive on twice the food they have been getting. CO2 is much closer to starvation levels for green things than it is to the order-of-magnitude-higher levels we had when earth grew the green things that became the carbon fuels we use today.

      • Vaughan Pratt
        The research has been done. Increased CO2 does help!

      • Vaughan Pratt
        With more CO2, you need less roots to get a better crop.
        You twist this into a bad thing.
        With more CO2, green things grow better while using less water and they need less roots. This is all good.

      • VPratt is saying cut the CO2 and let the poor eat cake…

      • JCH

        “This is just my rudimentary understanding. They pump CO2 into greenhouses because the content of CO2 in the air trapped inside a greenhouse can be stripped of a large amount of CO2, creating a situation where a low level of CO2 is actually a limiting factor in plant growth.

        They could just as easily open the windows, but that might expose the plants to another limiting factor: freezing to GD’d death.”

        What you say has a lot of wisdom, and avoiding CO2 starvation below 180 ppmv in closed greenhouses is the principal reason CO2 is artificially raised during the day.

        At night, there’s no need to artificially raise CO2 levels; plants can easily throw off 100’s of ppmv CO2 when the sun isn’t shining.

        However, there is a known and well-documented plant hormone analog effect of CO2 as levels rise toward 1400 ppmv in otherwise optimal conditions (Liebig’s law of the minimum).

        This hormone effect is logarithmic, and 90% of the change is reached by about 500 ppmv for many plants.

        The 1400 ppmv upper level seems to be a limit common to all plants, but the CO2 steroid-like outcome curve has a different shape for different plants. There’s some marginal increase as far up as 2,500 ppmv, but it’s accompanied by many downsides.

        Note I say “effect”, “change” or “outcome” or “increase”, not “benefit”.

        Plants will have their other hormones suppressed or augmented by CO2 at different rates with different effects: more mass in some parts, less in others (chiefly less root mass), deformations and shrinkage in some reproductive parts, increase in fruit size, more rapid growth in fruit mass but not always in fruit nutrients, with smaller and less viable seeds (for some species).

        The downside changes appear mostly to be linear, so will overtake the beneficial changes, usually somewhere between 500 and 1200 ppm.

        These rates are species dependent; there is no complete catalog of plants by CO2 hormone effect; I have yet to find if plants can be bred to overcome the downside steroid-like impacts.

        Also, plants tend to become lazy. Given enough time under higher CO2 levels, the positive outcomes may fall away gradually. I haven’t found good literature to confirm this.

        However, in no case are crops known to produce dramatically more yield with higher CO2 without also intensive use of fertilizer, 80% of which comes from petrochemicals. These fertilizers are expected to skyrocket in price due to both demand and supply issues.

        More CO2 is no net benefit to plants so far as can be definitively shown.

        Some may do better. They may indeed in the wild force out others. We have no way of knowing what hypothetical mass plant extinctions would look like.

        What applies to plants applies to microbes too, to some extent.

      • A coujple things that seem to be missed here….

        1) Root crops? If, as AK states, plants put less effort into roots, then won’t important sustenance crops like potatoes and cassava be negatively impacted?

        2) There is some research (I’m not going to find the book right now where I read this — it is called “Changing Planet, Changing Health”) indicating that plants grown in an enhanced CO2 environment are less nutritious and (for some species at least) less resistant to pests and disease.

        I don’t think that this is all sweetness and light. There may be widely differing impacts on different species and different environments.

      • @Rattus Norvegicus…

        A coujple [sic] things that seem to be missed here….

        Quite a few. I was giving Vaughan Pratt a hard time for misusing science, linking to a press release when the base article was available (open access even), and misrepresenting the results. But…

        This study was done on natural ecosystems looking for total biomass for the purpose of estimating carbon fluxes. It never looked at individual species, and even explicitly mentions the possibility of species replacement (or at least change in representation) as an explanation.

        Note that the effect occurred only in the third year.

        Note that they never mention the ratio of C3/C4 plants, by species or biomass.

        Note that crops are not natural, having been selectively bred for many thousands of generations from wild ancestors.

        Note that crops are irrigated (often), while they were studying natural systems.

        Just off the top of my head, to quote Steven Mosher. If I were tasked with demonstrating why this report is useless for predicting crop outcomes, I’m sure I could come up with more.

      • This was the problem for Malthus: he couldn’t take the improvements in plant growth into consideration.

        Farmers are adaptable. You plant for what the conditions are; you can adapt, fertilize, water, or modify the genetics of the plant. There is a move towards large scale green house production and in 100 years who knows what we will be doing. Even forestry practices can adapt over longer time scales.

        The problem is more for vegetation in natural areas. Some plants will flourish and some won’t. Evolution will continue. The sky is not falling. Look how fast Mt St Helens vegetation started to regrow.

      • @Kermit…

        There is a move towards large scale green house production and in 100 years who knows what we will be doing.

        Or even in 20-30 years. (Sorry for the self-promotion, but I spent quite a bit of effort digging up and verifying those numbers, and I’d like somebody to look at them.)

        The problem is more for vegetation in natural areas. Some plants will flourish and some won’t.

        This means changes of ecosystem, which can be extremely non-linear.

      • Bottom line, we can expect at least a 40% increase in crop yields.

        Yes, but not from CO2. As I said, if you increase temperature, moisture, and nitrogen, you get better production. You confirmed this with the 80% improvement you mentioned. And as I said, if you increase CO2 you decrease production. You confirmed this by pointing out that the CO2 decreased the 80% yield resulting from enhanced temperature, moisture, and nitrogen to only 40%.

        Somehow you’re managing to turn a decrease from 80% to 40% into a positive benefit of CO2. I have no idea how you see that as a benefit.

        If you’re claiming that adding CO2 increased the yield to 40% then you and I have completely opposite interpretations of what this data is showing about the impact of CO2 in the presence of the three agents responsible for the 80% increase.

        According to the abstract I quoted, a substantial decrease in root production reduced the increase in total NPP relative to above-ground NPP.

        That’s not what the abstract says. It says Across all multifactor manipulations, elevated carbon dioxide suppressed root allocation, decreasing the positive effects of increased temperature, precipitation, and nitrogen deposition on NPP. It does not say anything about “relative to above-ground NPP”. Moreover there is no such thing, since the “net” in “net primary production” refers to net production for the whole plant, not for some part of it. When you suppress root allocation you decrease net production, not root production.

      • Why would you waste good cake on the poor when I’m going to have mountains of CO2-juiced weeds on which they can gnaw?

        In the late 1970s we were getting up to 150 bushels per acre. Since then there have been a multitude of improvements in seeds. A great year now would be 190 bushels. You CO2 kooks fight it out with the seed companies. They’re claiming the entire 40, and my money is on them.

      • There really are very good records on this stuff. I was also there. I can still remember the first anhydrous ammonia truck that began operating in our county. Hard to forget. The operator had an accident and froze off his you know whats. This terrorized all of us young boys.

        Here is the dare. Cover a field with a greenhouse. Maintain a background level of 280. Use the best of the Green Revolution. I have no doubt you will essentially equal modern, 389 ppm yields.

        When that anhydrous ammonia truck showed up, we had corn pouring, literally, out of our ears. It was like a miracle.

      • Use the best of the Green Revolution. I have no doubt you will essentially equal modern, 389 ppm yields.

        If the BotGR includes more warmth, moisture, and fertilizer, the Jasper Ridge experiments suggest you may do much better than modern 392 ppmv yields. (Column 8 for August 2011, the seasonally adjusted and smoothed trend, is 391.67 ppmv.)

  14. You know I tend to regard the BEST study as pretty credible even though not the end of the story. I don’t care so much about the process issues such as releasing the papers before peer review. It is a good model for what I want to see in climate science, namely, people from outside the inner circle verifying independently the facts and data. Unfortunately, this issue of the land surface temperature data is much easier to resolve than the important issues for policy.

    • well said

      • Unfortunately, this issue of the land surface temperature data is much easier to resolve than the important issues for policy.

        Or the important issues for the rest of the science of global warming, such as, how hot in 2100, what part of that is our fault, how to define climate sensitivity, and what unknowns are muddying the picture and by how much?

        As BEST showed, the data was pretty clean to begin with, so this doesn’t advance the study of global warming so much as it does the public’s perception of the field, by providing (relatively) independent confirmation that the subject is more than just the smoke and mirrors that many have been claiming it to be.

    • Really, but how much easier to resolve though? I have to re-read Mosh’s post at the top, but I have downloaded the entire set and it appears to be comprised of mostly once-a-day measurements (you get the monthly mean, but you also get the number of samples included in that mean, which on first look appears to be about 28-31 a month or less). So, very briefly, the issues of signal filtering and the aforementioned UHI/LHI issues seem very much alive. I will repeat what I said before: I do not doubt the globe in general has been warming, but how much? Maybe a completely correct (and fundamentally impossible) interpretation of the data would yield a temperature increase of 75% of what BEST is showing. I don’t know, but how close can we get?

      When Pekka said on an earlier thread, he thinks the satellite scientists took sampling theory into account when designing the program, I say sure, but what options were available for really radical improvement if something comes up short? More satellites? Faster satellites? Not easy things to conjure up…

      • Bill C,

        It looks to me from Mosher’s analysis above that the method used in the UHI paper was not as bad as I thought, although Mosher’s is superior. I wonder what he would get if he did the calculations, and used population density to separate urban from very rural. I don’t think he would find that UHI is negative. In any case, there seem to be significant problems with the data, and a big problem in that the thermometers are not measuring large parts of the land surface, particularly those areas where people are scarce.

      • Don, to me the most interesting part of this is the BEST analysis for the 19th century. Someone else pointed this out, I forgot who, but the BEST analysis just makes the paleoclimate data from Mann and his minions all the more suspect. Please, Mike Mann, stop stonewalling and admit that there is a problem. Your reputation with posterity will go up dramatically.

      • Steven Mosher

        “maybe a completely correct (and fundamentally impossible) interpretation of the data would yield a temperature increase of 75% of what BEST is showing. I don’t know, but how close can we get?”

        Well, BEST shows a warming of 1.5C since the LIA. That’s for the land.
        Unless you want to suggest that the land has warmed less than SST, the amount of room you have to move that 1.5C down is rather small.

        Put another way: BEST says it’s warmed 1.5C since the LIA.
        1C of that is NOT UHI
        .5C of that is not UHI
        at most, at most, .3C of that could be, might be, UHI. at most.

        You want bounds on the UHI bias: 0 to .3C. Now try to find that with the BEST approach. I don’t think you can unless the bias is toward the high end, greater than .2C, simply because of the noise in the system. What you need, if the signal is smaller than .2C, is a more powerful test or a better classification approach.

      • Steve,

        I can buy that UHI is 15% +/- 15% of the increase in temperature since XXXXX. I don’t know that it is proven by satellites and SST. The satellite record is only 30 years, so how does that help? Only if we can extrapolate the patterns in differences back farther in time. SST – sure, it provides a bound of sorts if the land-heats-faster-than-ocean-because-of-convection argument holds, which it probably does, but how good is the SST record? To have a warm bias in SST and land records would not be out of the question.

        Regardless, I see your point. Now I would offer up a question: instead of asking whether we can get there with the BEST approach, how about asking whether we can get there with the BEST DATA? I.e., take a different approach to their data. Is this what you’re doing?

        For you and anyone else who is interested (including ad homs, let’s have them!) I downloaded the BEST data and extracted the US records.

        There are 17132 US records in the BEST data.

        Of these records 5041 have a timespan of >50 years and 1586 have a timespan of >100 years. I’m defining this as date of last record – date of first record.

        Of the 1586, 1361 have >90% time coverage as defined by:

        (number of records*)/(12*timespan)

        * = BEST “raw” data is reported monthly though each monthly entry gives the # individual measurements in that month, typically 1/day from what I’m seeing.

        That’s where I would start looking for possible UHI trends.

        But it would make sense to come up with a criteria set other than urban/rural to factor in. Such as (as you’ve mentioned): distance from various landforms and coasts, agriculture….

        This is also without regards to their own flags. I couldn’t get the text files for flags or data source info to extract properly and my MatLab subscription has run out.
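
        For anyone wanting to reproduce that screening without MatLab, a rough pandas sketch; the file name and column names (‘station_id’, ‘year’, ‘temperature’) are placeholders, since the actual BEST text files are laid out differently:

          import pandas as pd

          # placeholder layout: one row per station-month
          df = pd.read_csv("best_us_monthly.csv")

          def station_stats(g):
              span = g["year"].max() - g["year"].min()  # timespan in years
              coverage = len(g) / (12 * span) if span > 0 else 0.0
              return pd.Series({"timespan": span, "coverage": coverage})

          stats = df.groupby("station_id").apply(station_stats)

          long_records = stats[stats["timespan"] > 100]                 # > 100-year stations
          well_covered = long_records[long_records["coverage"] > 0.90]  # > 90% monthly coverage
          print(len(stats), len(long_records), len(well_covered))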

      • And yet, HADCET suggests 1826 was the 2nd warmest summer of all time. It was very warm at times during the early 1800s. In the UK, summers haven’t gotten much warmer at all. If you look at HADCET, it is the absence of alternating cold years that makes the average higher.

      • I’ll repeat my appeal for someone to stop me and point to where this has been done already, but:

        Continuing with the US data, there are 27 nonduplicate records for which there exists 95% coverage over the past century. The difference between the decadal average 2000-2009 and 1900-1909 for the individual stations is:

        Average 1.2 deg C increase/century
        Minimum 0.46
        10th percentile 0.52
        90th percentile 1.7
        Maximum 1.7

        I haven’t bothered to plot them on a map yet.

        None have cooled; in fact all have gained roughly 0.5 degrees or more.

        Is the reason for the spread:

        a) UHI
        b) Region
        c) both a) and b)
        d) both a) and b) and oh by the way, even the low ones have UHI
        e) This poster is an idiot.

        If e) you have to explain why.
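
        For what it is worth, the decadal-difference calculation described above is straightforward to reproduce once a station-month table exists; this uses the same placeholder layout as the sketch earlier in the thread:

          import pandas as pd

          # same placeholder layout: one row per station-month
          df = pd.read_csv("best_us_monthly.csv")

          early = df[df["year"].between(1900, 1909)].groupby("station_id")["temperature"].mean()
          late = df[df["year"].between(2000, 2009)].groupby("station_id")["temperature"].mean()

          # per-station change between the 1900-1909 and 2000-2009 decadal means
          century_change = (late - early).dropna()
          print(century_change.describe(percentiles=[0.10, 0.90]))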


  15. I can’t for the moment locate the reference, but there is an observation/claim that the bulk of the UHI effect occurs at very low population densities, well below the “very rural” category used by BEST – i.e., it shows rapidly diminishing ‘returns’ as built-up density rises. E.g.: the “heat bubble” surrounding the very small town Eureka (pop. ~9,000), the only GISS-reported station in the Cdn North, is considerable according to those who live there.

    On those grounds, the entire range of “very rural” to “urban” used by BEST focuses on the asymptotic tail of the graph.

  16. Judith,
    You describe Steve McIntyre’s appraisal as “generally favorable” and I suppose that is probably fair, but it is perhaps more accurate to say Steve is more favorable to Muller personally than he is to the BEST analysis.

    For example, based on Steve’s own analysis of the data he writes: “Combining both stratifications, “MMTS rural good” had a post-1979 trend of 0.11 deg C/decade while “CRS urban bad” had a corresponding trend of 0.42 deg C/decade.”

    How then can BEST say UHI and siting issues do not bias the record? Steve writes: “Details of the BEST calculation on these points are not yet available, though they’ve made a commendable effort to be transparent and I’m sure that this lacuna will be remedied.”

    It seems apparent to me that BEST has an error or bad methodology here. Of all the criticisms I’ve read so far, Steve’s seems the most likely to result in Muller publishing a corrigendum or clarification.

    Steve also points out how the BEST analysis will be disruptive to the RealClimate narrative because BEST shows a much cooler early 19th century than the tree rings show. When the Little Ice Age shows up RealClimate and the IPCC folks get nervous because it proves natural climate variation is much greater than they would like the public to know. Also, it shows a second Divergence Problem in the early 19th century. It is hard to support the claim the divergence in the late 20th century is due to anthropogenic effects when we have evidence of a divergence in the early 19th century. This is a very important point which informed readers will grasp immediately.

    • Of all the criticisms I’ve read so far, Steve’s seems the most likely to result Muller publishing a corrigendum or clarification.

      It always helps stating how you appreciated the guy’s friendship and support back in 2004, when not many first-rank scientists were stepping up to the plate. I’ve no doubt Steve’s critiques, which I agree were weightier than Judith’s wording implied, will receive due care and attention at BEST HQ.

      Also, [the LIA] shows a second Divergence Problem in the early 19th century. It is hard to support the claim the divergence in the late 20th century is due to anthropogenic effects when we have evidence of a divergence in the early 19th century.

      Just wanted to repeat that.

    • Ron

      I extracted that portion (1800-2000) of the Hockey Stick that relates to the BEST time period and stretched it to fit the BEST reconstruction.

      The resulting blink gif, 1800-2000 BEST-Hockey Stick comparison would appear to support your observation that BEST and the Hockey Team draw very different conclusions about the Earth’s recent temperature record!

  17. Singer responded to WaPo, as reported on WUWT:
    http://wattsupwiththat.com/2011/10/25/singers-letter-to-wapo-on-best/
    Excerpt:

    Unlike the land surface, the atmosphere showed no warming trend, either over land or over ocean — according to satellites and independent data from weather balloons. This indicates to me that there is something very wrong with the land surface data. And did you know that the climate models, run on super-computers, all show that the atmosphere must warm faster than the surface[?] What does this tell you?

    • The surface record seems to be indecipherable. Let’s go to the satellites.

    • Singer’s letter suggests he has some ambivalence about proxies. He said …

      “And finally, we have non-thermometer temperature data from so-called “proxies”: tree rings, ice cores, ocean sediments, stalagmites. They don’t show any global warming since 1940!

      The Berkeley results in no way confirm the scientifically discredited Hockeystick graph, which had been so eagerly adopted by climate alarmists. In fact, the Hockeystick authors have never published their temperature results after 1978. The reason for hiding them? It’s likely that their proxy data show no warming either.”

    • Singer is that dude that worked for the tobacco industry in trying to convince people that cigarettes weren’t harmful to your health. He also marginalized the ozone hole problem. The fact that he gets quite a bit of funding from the oil industry and other corporate concerns makes him fit the category of a lobbyist. Not that there is anything wrong with that. So Singer is a lobbyist. Now he is a really old lobbyist. Not that there is anything wrong with that, either.
      Background information is fascinating to behold, and you can deal with it on your own terms.

      • Please support your assertions with evidence or URL links!

      • Jeez, you can read Singer’s complete report here:
        http://tobaccodocuments.org/lor/92756102-6120.html
        Those are his words and writing. I don’t know what else to say but he took up some weird crusade against people’s rights not to have cig smoke blown in their face within a confined space. The fact that Singer wrote that as a “Professor of Environment Science” is pretty depressing.

      • Thanks for the link.

        EPA’s major finding was that “ETS (environmental tobacco smoke) is a human lung carcinogen, responsible for approximately 3000 lung cancer deaths, annually in U.S. nonsmokers.” The question addressed by this section is whether or not that statement is justified. Crossing the Threshold It is well-established that “the dose makes the poison.” That is, almost any chemical substance will harm a person’s health if administered in sufficiently large quantities. Even substances which are necessary for life itself become deadly at high doses. Unfortunately, the EPA ignores this fact in most of its risk assessments by applying a “linear no-threshold” theory of environmental harm. In essence, the linear no-threshold theory holds that high-dose effects can be extrapolated back to a zero dose.

        He is stating the truth that environmentalists always ignore:
        the dose makes the poison

        That is why we now have the worldwide panic with the claim that the observed annual increase in CO2 concentration in the atmosphere of a minuscule 0.0002% causes global warming.
        http://bit.ly/sD7VuQ

        That is why we now have the worldwide panic with the claim that the observed annual increase in CO2 concentration in the atmosphere of a minuscule 0.0002% causes global warming.

        Girma, you must suffer from an extreme case of innumeracy.
        Changes of composition of 0.0002% will cause havoc in many systems, both natural and man-made. With that said, you are not even using the concept of percentages properly. This is not the first time you have done this, so I assume that you don’t want to learn anything pertaining to actual science, or mathematics for that matter.

      • WHT

        Sadly, Girma is a fully qualified engineer with a publication and an advanced degree.

        Which boggles the heck out of me.

        Either he’s pulling our legs for some Socratic purpose, or there is something very very wrong going on.

      • Let us see the numbers

        Annual increase in CO2 concentration is about 2 ppm

        2 ppm = 2*100/1,000,000 % = 2/10,000 % = 0.0002%

        http://bit.ly/sD7VuQ

        REFERENCE:
        #—————————————————-
        #Data from NOAA Earth System Research Laboratory
        #http://www.esrl.noaa.gov/gmd/ccgg/trends/
        #—————————————————-
        #
        #File: co2_mm_mlo.txt
        #
        #Time series (esrl) from 1958.21 to 2011.79
        #Selected data from 1995
        #Least squares trend line; slope = 1.93746 per year
        1995.04 359.461
        2011.79 391.913
        #Data ends
        #Number of samples: 2
        #Mean: 375.687

      • Annual increase in CO2 concentration is about 2 ppm

        2 ppm = 2*100/1,000,000 % = 2/10,000 % = 0.0002%

        Wrong, you putz. A percentage change is defined as

        deltaX / X * 100%

        So given the current CO2 value is around 392 PPM, the percentage change is:

        2 PPM / 392 PPM * 100% = 0.5%

        Do you not know standard mathematical definitions? Or do you just like to make stuff up?

      • WHT – a cowpoke has a question. The background concentration is 280 ppm. Each year additions and subtractions in that component net to ~0. The growth is in the anthropogenic component of atmospheric CO2. To me it is that number that grew by 2 ppm. Why am I wrong?

      • WebHUbTelescope

        How is this hard?

        Year 2010 => 390 ppm => 0.0390%
        Year 2011 => 392 ppm => 0.0392%

        Annual Increase => 0.0392 – 0.0390 = 0.0002%

        You love ppm, but hate %. Why?

      • WebHUbTelescope

        How is this hard?

        Year 2010 => 390 ppm => 0.0390%
        Year 2011 => 392 ppm => 0.0392%

        Annual Increase => 0.0392 – 0.0390 = 0.0002%

        You love ppm, but hate %. Why?

        Engineering school takes the sensible approach and they flunk out half the class, realizing that some people will never be competent at math. How you got through, I haven’t a clue.

        one more time, correcting your words
        392 parts per million = 0.0392 parts per hundred
        Annual Increase = 0.0392 – 0.0390 = 0.0002 in parts per hundred
        Percentage Increase = (0.0392 – 0.0390)/0.0390 * 100% = 0.51%

        By convention, percentage is reserved for a dimensionless quantity. The fraction serves to cancel out the dimensions in the numerator and denominator.
        You are abusing the notion of percentage by conflating it with a “parts per” dimensional quantity.

        There is a reason that these conventions are used and established. Let’s say that you are a pharmacist and a doctor writes a prescription to increase the current dose given to a patient by 10%. You go and assume that this means the dose is increased by 10 active parts per hundred.
        You pathetic miserable loser, you just killed a patient.

        Really, this is no longer funny, Girma, you really sicken and disgust me, and are just an impediment to advancement in our collective understanding.

      • You love ppm, but hate %. Why?

        What are you trying to prove here, Girma? We all know how to lie with statistics. You’re just getting started, you’re nowhere near the level of Don Easterbrook or Harry Huffman. Your time would be more usefully spent studying their methods.

      • WebHubTelescope

        There is a difference between the annual increase in the proportion of CO2 in the atmosphere and the annual increase in CO2. I am referring to the former, you the latter.

        Annual increase in the proportion of CO2 in the atmosphere = 0.0002%

        Annual increase in CO2 in the atmosphere = 0.51%

      • There is no annual increase in 280 ppm. It was 280 ppm last year; it was 280 ppm this year; it will be 280 ppm next year.

        I guess it could go below 280 ppm. It would be cold as H.

      • WHT – a cowpoke has a question. The background concentration is 280 ppm. Each year additions and subtractions in that component net to ~0. The growth is in the anthropogenic component of atmospheric CO2. To me it is that number grew by 2 ppm. Why am I wrong?

        I think the only component that grows is that of the excess above 280 ppm. So I think you are correct, all the natural variations sum to ~0 over the course of the year as the carbon cycle breathes. That’s why it’s called a cycle.

      • WebHubTelescope
        Why the ad hominem attack by association? Fred Singer is a premier physicist, meteorologist, environmental and climate scientist who just happens to disagree with you. Even Wikipedia observes:
        Fred Singer

        Singer was named as the first director of meteorological satellite services for the National Weather Satellite Center, now part of the National Oceanic and Atmospheric Administration, and directed a program for using satellites to forecast the weather. . . .
        In 1967 he accepted the position of deputy assistant secretary with the U.S. Department of the Interior, where he was in charge of water quality and research. When the U.S. Environmental Protection Agency was created in 1970, he became its deputy assistant administrator of policy.

        May I recommend that you sink your teeth into Singer’s math and compare that to IPCC’s assumptions and see what the IPCC’s model uncertainties actually are, versus what they need to be to quantify climate change. See his draft:
        Overcoming Chaotic Behavior of Climate Models, S. Fred Singer

        Here we conduct a synthetic experiment, and use two distinct procedures to demonstrate that no fewer than about 20 runs (of 20-yr length of an IPCC General-Circulation Model) are needed to place useful constraints upon chaos-induced model uncertainties

        (PS this draft needs some editing)
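
        The “about 20 runs” figure is easy to get a feel for: if a single run’s trend has some chaos-induced spread, the spread of the ensemble-mean trend shrinks roughly as 1/√N. A toy sketch with an invented spread:

          import numpy as np

          rng = np.random.default_rng(3)
          sigma_single = 0.15  # invented spread (deg C/decade) of a single run's trend

          for n_runs in (1, 5, 10, 20, 40):
              # spread of the ensemble-mean trend across many hypothetical ensembles;
              # it shrinks roughly as sigma_single / sqrt(n_runs)
              means = rng.normal(0.0, sigma_single, size=(10000, n_runs)).mean(axis=1)
              print(n_runs, round(means.std(), 4))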

      • WebHubTelescope

        Let us say the population of country X is 1 million, and that the population of city Y in country X is 0.039% of that. If city Y has 2 new children in 2011, what is the new percentage for the population of city Y in country X? What is the percentage increase in the population of city Y?

        Percentage increase for the population of city Y in country X => 2*100/1,000,000 = 0.0002%

        A) New percentage for the population of city Y in country X => 0.039 + 0.0002 => 0.0392%

        B) Percentage increase in the population of city Y => (0.0392 - 0.039)*100/0.039 => 0.51%

        WebHubTelescope, I am referring to A, but you are referring to B

      • So you are doing it wrong, and I suggest you stop it.

      • There is nothing wrong in a correct statement!

        The percentage increase in the population of city Y in country X is 0.0002%

        The percentage increase in the population of city Y is 0.51%

        Similarly,

        The percentage increase in the concentration of CO2 in the atmosphere is 0.0002%

        The percentage increase in the concentration of CO2 is 0.51%
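
        A minimal sketch of the two calculations being distinguished above, using the hypothetical population numbers from the comment (Python; the names and numbers are illustrative only):

        country_population = 1_000_000
        city_share = 0.039          # city Y is 0.039% of country X's population
        new_children = 2

        # Increase in the city's share of the country, in percentage points
        share_increase = new_children * 100 / country_population     # 0.0002

        # A) New share of the country, in percent
        new_share = city_share + share_increase                      # 0.0392

        # B) Percentage increase of the city's own population
        city_growth = (new_share - city_share) * 100 / city_share    # ~0.51

        print(share_increase, new_share, round(city_growth, 2))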

    • This all seems to me off-topic for a “technical thread,” but none of it has been deleted so I may as well join the party –

      Anthony Watts (and Judith agrees, btw):

      “The issue of “the world is warming” is not one that climate skeptics question.”

      Fred Singer:

      … the atmosphere showed no warming trend, either over land or over ocean — according to satellites and independent data from weather balloons. This indicates to me that there is something very wrong with the land surface data. And did you know that the climate models, run on super-computers, all show that the atmosphere must warm faster than the surface. What does this tell you?

      And finally, we have non-thermometer temperature data from so-called “proxies”: tree rings, ice cores, ocean sediments, stalagmites. They don’t show any global warming since 1940!

      OK – so let’s recap shall we?

      There is something very wrong with the land surface data that show warming, satellite data show no warming, and proxy data show no warming.

      But Judith and Anthony assure us that “skeptics” don’t doubt warming.

      Ya’ just gotta love those high expectations regarding quantification and verification of uncertainty. If only the IPCC had such high expectations in those regards.

      • Say – Judith,

        Singer’s going to be in Santa Fe with you, isn’t he? Maybe while you’re there you could ask him if he thinks that the “world is warming?”

      • more on the santa fe meeting soon, shaping up to be an extremely interesting conference

      • Judith –

        One of the BEST meals I’ve had all year:

        http://www.labocasf.com/location.html

        And if you get the time, check out Rio Arriba County to the north:

        http://t3.gstatic.com/images?q=tbn:ANd9GcRUHkfdCwrPCtijf_zb6tbOmpw9gkuHHN2MwFrqWD-bR6nTkQbXuQ

        I’m sure you could justify some “climate analysis” in that area.

      • I used to live in Santa Fe, I still have green chiles flown in frozen to Atlanta. thanks for the tips

      • Judith
        Speaking of uncertainties, you might enjoy discussing chaos and climate model uncertainties with Fred. See:
        Overcoming Chaotic Behavior of Climate Models S. Fred Singer

      • interesting, i think this may be the topic of Fred’s paper next week in Santa Fe

      • I read Fred’s paper and it is very interesting reading for someone used to modeling for the aerospace industry. Having to run the same model multiple times and then “averaging the results” in order to smooth out “orders of magnitude” variances seems very odd. It would seem to indicate a problem with how the model is applying certain criteria.
        I would love to ask Fred what performance criteria are expected for the models in order for them to be considered “successful?” The more I read about GCMs the more I doubt the conclusions reached by those who analyze their outputs. Ask Singer how well his model does in predicting observable, measurable criteria in the real world. If his model is no better than an almanac in predicting future temperature or rainfall, what good is it except to mislead people as to our knowledge of the future?

      • Judith – It is even better if you have a fresh bushel of chiles shipped out to you. Then you get to enjoy the delicious aroma as you roast them on a grill.

      • too much work peeling and seeding, i’m a city gal now

      • Joshua, now you’re just being silly. You know perfectly well Singer is talking specifically about the trend for the last ten years.

        But for the sake of clarity, I’ll leave a note for Prof Singer to clarify if it will satisfy your critique.

      • You know perfectly well Singer is talking specifically about the trend for the last ten years.

        Not silly – but as has been noted many times on these pages, my background knowledge (and intelligence, of course) is somewhat lacking. Which parts of the excerpt I posted above were in reference to the trend for the last 10 years only?

      • Request for change and clarification posted.

        PS. I take back calling the statement silly. You are right to point that out. If Singer IS referring to the entire 150-year trend, then you have a point. But being familiar with the work of both Watts and Curry, and having officially been a skeptic myself since 1992, I know they, as well as the vast majority of climate science skeptics, don’t fall into the “no warming at all” camp. That is more representative of the political pollution of this issue from the right. Most of us are simply fighting to see the science get better.

      • Michael –

        Here’s the problem as I see it. If you think that there is no valid evidence of warming for the past 40 years (and we might add that many “skeptics” question the evidential basis of ice melting during that period as well) – then how can you think that “the world is warming?”

        Now, of course, different “skeptics” have different opinions on a huge variety of issues, and that’s just fine with me. It would be unrealistic to expect otherwise. But what isn’t fine with me is when I see categorical statements about what “skeptics” believe – like the one I quoted above from Anthony – and then I see abundant evidence that such categorizations are not valid. I would have a similar objection if someone said that “skeptics” are rabid-rightwingers who hate science. The problematic aspect of those kinds of statements is exacerbated when the people making them have the kind of high profile that Anthony does among “skeptics.”

        The statement that skeptics don’t doubt that “the world is warming” is a political statement made for effect and which can have no potentially beneficial impact, IMO. I believe that limited progress will be made until “skeptics” hold other “skeptics” like Anthony to account for statements like that (just as “realists” need to hold other “realists” to account for speciously categorical statements).

      • BlueIce2HotSea

        Joshua –

        “Skeptics vs. Realists” is a bad analogy for “BEST vs. Hadley-GISS”. Remember, BEST only became a necessity due to the abuse of trust by certain Hadley and GISS scientists.

        For the principals, the antagonism is “Science vs. Politics”. For followers of BEST, it’s “Skepticism vs. Credulity”. For Hadley-GISS fans, it’s “Loyalty vs. Treason”.

        BlueIce

      • Josh-ua,

        So now you believe what Watts and Judy say? You are a case, Josh-ua.

      • Actually, Don, I “believe” much of what Judith says. Some things she says I disagree on. Anthony is another matter.

        And Don – you’ve been chasing me around like a puppy with a hard-on for the past dozen threads or so, but you still haven’t answered my questions.

        Why were you asking me whether my name comes from the New or Old Testament? And what’s up with the reference to “self-loathing” in the same series of questions?

        I might think that you were asking me about the etymology of my name to highlight the fact that I’m Jewish. And the comment about “self-loathing” could be considered a dog whistle to stereotyping about “self-hating Jews.” Now I’d hate to think that someone at Climate Etc. would stoop so low, and I certainly wouldn’t want to jump to any conclusions – but the fact that you never answered the questions does leave it open to speculation.

        Why were you asking me about the derivation of my name? What was up with the proximal comments about “self-loathing?”

      • You are a putz Josh-ua. I do it because you are a hypocritical religion baiter and it makes you angry. Thanks for confirming that I have gotten into your little malevolent mind. I will keep doing it.

      • Josh… You asked:

        >Here’s the problem as I see it. If you think that there is no valid evidence of warming for the past 40 years

        Who made that statement about 40 years?

        I believe that limited progress will be made until “skeptics” hold other “skeptics” like Anthony into account for statements like that

        You have an issue with this statement I believe by AW:

        Anthony Watts (and Judith agrees, btw):

        “The issue of “the world is warming” is not one that climate skeptics question.”

        What is the problem with that statement? Most skeptics, myself included, don’t question whether the world is warming. Most recognize we are recovering from the Little Ice Age. Most, including myself, don’t question that CO2 is a “greenhouse gas” (some have a problem with that term, I don’t), and that an increase in it can and probably does have some effect on the feedback system that acts as a thermostat for planet Earth. The issues skeptics have concerns about are the soundness of some of the science presented to prove the worst-case scenario, the assumption of certainty in a very young field of science, the constant brushing aside of science and geologic records that show a low response of global temperature to CO2 levels, and a concern that certain dominant scientists in this field have leveraged too much control over the type and direction of the research being done, ignore or brush aside valid criticisms of previous, potentially flawed research, and tolerate political interference that steers the science in the preferred direction of those scientists. Finally, many, if not most of us, question the wisdom of proclaiming a level of certainty on the causes of the warming trend of the last 50 years, and the proclamation that the effects of the extra CO2 in the atmosphere have become THE dominant force causing the warming, when there are still so many aspects of natural variability that we have yet to fully understand.

        You don’t have to respond to this. I just want to be sure we speak the same language when we refer to someone as a skeptic.

      • Joshua

        Can you tell us where and when Fred Singer made that statement you quoted about the satellite record showing no warming?

        Thanks.

        Max

  18. I agree that one should be very careful with filtering.

    For example, in the following IPCC graph, the smoothed series ignores the global mean temperature peaks for the 1880s completely.
    http://bit.ly/b9eKXz

    No wonder the predictions are all wrong.

    • Indeed. If they’d picked consistent start points, say at 25 year intervals, they’d have got totally different trends. Now I wonder why they cherry-picked those particular start dates? Hmmmm.

    • For example, in the following IPCC graph, the smoothed series ignores the global mean temperature peaks for the 1880s completely.
      http://bit.ly/b9eKXz

      Amazing. A post by Girma that I can actually agree with. I must be turning into a rhinoceros. :)

      • Vaughan Pratt

        It is extremely hard to disagree with me, as I just believe what the data shows:

        1) http://bit.ly/ocY95R

        2) http://bit.ly/nz6PFx

      • Wow, a second graph I can actually agree with, which shows a cooling trend for the period 2002-now. It helps to see the trends for that and related years in the same place.

        2000-now +.023 °C/decade
        2002-now −.077 °C/decade
        2004-now −.067 °C/decade
        2006-now +.030 °C/decade
        2008-now +.258 °C/decade

        Boy, you really lucked out there, picking 2002 as a starting year. ;)
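
        A small sketch of how sensitive short trends are to the choice of start year: the ordinary least-squares slope over the final years of a noisy series swings widely as the window shrinks. The data here are synthetic (a weak trend plus noise), not the actual GISS or HadCRU record.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic monthly "temperature" series: weak warming trend plus noise
        years = np.arange(2000, 2012, 1 / 12)
        temps = 0.015 * (years - 2000) + rng.normal(0, 0.1, years.size)

        def trend_per_decade(start_year):
            """OLS slope, in degrees C per decade, from start_year to the end."""
            mask = years >= start_year
            return 10 * np.polyfit(years[mask], temps[mask], 1)[0]

        for start in (2000, 2002, 2004, 2006, 2008):
            print(start, round(trend_per_decade(start), 3))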

      • Vaughan Pratt

        Is it not the case that we start trend calculation from a turning point?

        Is not year 2002 a turning point as shown in the following data?
        http://bit.ly/uXy8jw

        Approximate global mean temperature turning point years:
        Peaks=> [1880s, 1940s, 2000s]
        Valleys=> [1910s, 1970s]

      • Is it not the case that we start trend calculation from a turning point?

        I don’t know if I have ever seen a more blatant example of someone actually advocating the cherry-picking of data.

        Maybe you don’t even know what cherry-picking means:
        Cherry-picking: the act of pointing to individual cases or data that seem to confirm a particular position, while ignoring a significant portion of related cases or data that may contradict that position (wikipedia).

      • Is it not the case that we start trend calculation from a turning point?

        Yes, I fully agree with you, this is the standard place from which to pick cherries. You must read the same advice books I do. To prove a decline, start from a maximal turning point. To prove a rise, start from a minimal one.

        If the world changes its mind and decides the temperature is falling and you still want to disagree with the world, start your trend from a minimal turning point. Very simple as long as you don’t get them mixed up.

      • Gentlemen

        Please stop attempting to use irony on Girma.

        Either he’s putting on an act for some Socratic purpose, or he’s genuinely suffering a case of invincible ignorance.

        There are quite literally hundreds of refutations of Girma online, in the tens of thousands of distinct posts he’s put out around the internet, many from first-rate experts in diverse fields of study dependent on graphical analysis.

        While he’s adapted to the criticism, he’s never indicated a grasp of the fundamental errors he’s committing.

        It’s practically a pathology.

        One could almost write a paper on it.

      • Either he’s putting on an act for some Socratic purpose, or he’s genuinely suffering a case of invincible ignorance.

        I was assuming the former and playing along. If the latter I’ve been insensitive to a fault.

  19. Lucia also has a post looking at Keenan’s argument – http://rankexploits.com/musings/2011/best-data-trend-looks-statistically-significant-so-far/ – I think she’s made a mistake in using monthly data rather than annual data in her analysis but I could be wrong.

    • think she’s made a mistake in using monthly data rather than annual data in her analysis but I could be wrong.

      Well, that depends on what you mean by “mistake”. :)

      I intentionally used monthly data because Keenan quoted Briggs

      Unless the data is measured with error, you never, ever, for no reason, under no threat, SMOOTH the series

      How could I use a 12-month smooth after that?!

      Of course, it turns out that no one really believes you should “never, ever, for no reason, under no threat, SMOOTH the series”.

      The truth (which Muller observed in his response to the Economist) is that “He [Keenan] is, of course, being illogical. Just because smoothing can increase the probability of our fooling ourselves doesn’t mean that we did. There is real value to smoothing data, and yes, you have to beware of the traps, but if you are then there is a real advantage to doing that.”
      Sometimes, one ought to smooth data – there can be very good reasons to do it.

      In fact, sometimes, if you don’t smooth data, some people will accuse you of making a mistake. Imagine. :)

      • You could always smooth over the accusations by explaining why you are or are not smoothing. In your blog you say you prefer not to smooth but you don’t say why. One could conclude it’s because of the result.

      • I consider it a no-brainer to moving-average smooth over 12 month intervals. This is a finite impulse response (FIR) filter that places a zero at the natural seasonal variations that extend over one year.

        I know this response will get nit-picked but this is real knowledge that one can apply to reduce uncertainty.
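
        A quick sketch of the point about the 12-month moving average: viewed as an FIR filter, its frequency response has a zero exactly at the annual cycle, so a pure 12-month seasonal component is removed while a slow trend passes through (synthetic data, illustrative only).

        import numpy as np

        # Synthetic monthly series: slow trend plus a pure 12-month seasonal cycle
        months = np.arange(240)
        trend = 0.002 * months
        series = trend + 1.5 * np.sin(2 * np.pi * months / 12)

        # 12-month boxcar (moving-average) FIR filter
        kernel = np.ones(12) / 12
        smoothed = np.convolve(series, kernel, mode="valid")
        smoothed_trend = np.convolve(trend, kernel, mode="valid")

        # The seasonal component is annihilated: the smoothed series equals the
        # smoothed trend to machine precision
        print(np.max(np.abs(smoothed - smoothed_trend)))   # ~1e-15

        # The filter's frequency response is ~0 at the annual frequency (1/12 per month)
        response = np.abs(np.fft.rfft(kernel, 240))
        print(response[240 // 12])                          # ~0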

      • I have to agree with Kermit. That comment was an amazing tapdance around the question of what is a good reason to smooth.

  20. Muller is like the Bible, there’s enough writing of him lying around for others to be able to categorize him as “skeptic”, “denier”, “warmist” and “believer”.

    What we don’t actually know is Judith’s opinion. Seems established by now that the UHI paper isn’t fit for purpose, for example. No opinion on that?

    OTOH it’s hard to be fully truthful on the net about things close to one’s work, so perhaps we’ll never read here from JC anything but the BEST PR machine. It’s the reason why I don’t blog about my day job.

    • omnologos –

      What about skeptical warmist or perhaps apolitical scientist?

      Let’s see if Muller attempts to increase the quality of BEST’s work by addressing (and crediting) legit criticism across the full spectrum – denier to alarmist.

  21. http://www.realclimate.org/index.php/archives/2011/10/berkeley-earthquake-called-off/

    Eric Steig is unsurprised that the BEST results are in line with other reconstructions, and is surprised that it was presented as a surprise.

    He also has some beef with the decadal variations paper.

  22. Muller is like the Bible, there’s enough writing of him lying around for others to be able to categorize him as “skeptic”, “denier”, “warmist” and “believer”.

    Ha. That reminds me of the old children’s hymn:

    Wonderful things in the Bible I see
    Some put there by you
    Some put there by me.

    It wasn’t the original version – but it’s the one that stuck in mind!

  23. Pekka – you’re making BEST the enemy of the good. If a classification is unsuited for UHI, then it’s not good to conclude anything from it, even if it’s the best classification yet available.

    • One of my points is that major effects are almost always rather easy to identify. Looking at the data in many different ways leads someone to find a way that brings out the effect at a statistically significant level. Failure to find such classifications after so many years and so many people trying to find the effect is already rather strong evidence that there isn’t any strong effect at all.

      The kind of argumentation that I present above is never totally foolproof, but it’s clearly getting more and more certain with every new attempt that results in a low upper limit of either sign for the effect. If the effect is, indeed, as weak as it seems to be, it’s not surprising that some of the analyses even get an opposite sign compared to most others. The real conclusion is only more evidence of the smallness of the effect.

      • To avoid misunderstandings: The basic UHI effect is rather strong; what seems to be weak is its influence on the estimates of temperature trends.

      • Steven Mosher

        Let me put it more specifically. You can find isolated cases where the peak effect is very high on certain days. But the average effect over time is modulated by a variety of factors. For example, depending on the location, UHI can be entirely mitigated by wind speeds from 2 m/sec to 7 m/sec. So if you want to show a dramatic UHI effect (to promote painting roofs white) you pick a still day in the summer with the sun blazing and you show a 5C delta with the rural countryside. You don’t show the cloudy days, you don’t show the windy days, not the rainy days, and not certain seasons where UHI is weak or non-existent.

      • Don’t forget enough heating on a still day can produce a thunderstorm with resulting UHI cooling for the day. Especially if the big cold downdraft happens just before time for the temp to be recorded.

      • steven,

        Doesn’t the wind carry the heat somewhere else? Maybe to those “very rural” areas within 11 clicks? Aren’t cities still warmer on cloudy days, compared to a cloudy day in the country? Aren’t rainy days in the city still warmer than rainy days in the country? Help me steve.

      • Wind tends to reduce UHI.

      • Don;
        The global heat is trivial and doesn’t impact “average temperature” worth mentioning. It’s only the biasing of local readings which are then given undue weight in calculating said averages which matters.

        In any case, as I mentioned above, UHI is a lot like CO2: the vast bulk of its effects occur in the first few steps. Thereafter it’s logarithmically diminishing returns all the way out. BEST’s and Jones’ UHI analyses are all done well out on the thin tail, ignoring the huge initial “hump”.

      • All studies on this topic use the distinction urban / rural which does not make sense.

        That distinction has been in existence since cities were invented thousands of years ago. Are you saying that after all these millennia the distinction still does not make sense?

        If you’re worried that oceans are neither urban nor rural, then perhaps “rural” should be replaced by “non-urban.” That way, whatever you think the U in UHI refers to, “non-urban” is everything else.

        The concern with UHI is the possibility that sensors are concentrated in urban areas, which could disproportionately weight the higher temperatures expected in those areas. There are other non-uniformities, but this has been the big one in many people’s minds.

        Kriging is simply a statistical technique for turning biased noise, such as the bias in UHI, into unbiased noise. The risk with kriging is not that some of the bias may remain—the technique is extremely effective at eliminating all bias—but that it can increase the absolute amount of noise, quite dramatically in some situations.

        In order to judge whether any given application of kriging has been successful, you have to ask whether the bias before was more or less of a problem than the (potentially greatly) amplified noise after. Kriging is not the only way to remove bias, and in some situations it is not the best way, easily illustrated with toy examples.

      • When we always use the same distorted scale to measure the same thing, we always get somewhat the same false result.

      • Why do you think that everybody does the same thing?

        I’m sure that many people have tried many different approaches, but having failed to find anything they have never told of their work, or they have told of it in some blog that nobody has noticed, because finding nothing is not so interesting. Writing a proper scientific paper on that would be a major extra effort, and it could still be difficult to get the paper published unless the analysis is of particular interest for some specific reason, as the BEST papers may be.

      • All studies on this topic use the distinction urban / rural which does not make sense.

        As for getting an idea of the UHI, the means are not lacking: you have TLT and all the proxies.

  24. Pekka – if UHI is very strong at each individual site but disappears in classification-based global studies, then the classification isn’t good. Nothing else can be said.

  25. From the BEST papers (on decadal variation):
    We find that the strongest cross-correlation of the decadal fluctuations in land surface temperature is not with ENSO but with the AMO

    Is there more info available on line?
    If and when the authors can clearly demonstrate the above claim, it will be easy to link Loehle’s 2000-year global temperature variation with the geomagnetic changes, two parallel and independent outputs of the same input, as shown here:
    http://www.vukcevic.talktalk.net/LL.htm

  26. As a teacher of reading and thinking I must post this disturbing but highly accurate document.

    http://www.criticalthinking.org/pages/pseudo-critical-thinking-in-the-educational-establishment/504

  27. @Prof Curry…

    Anthony Watts is critical of the surface station quality paper [here].

    The link is wrong: pointing to the same place as the previous link.

  28. I am sorry, David Young (and Judith), but the BEST way of publicizing their results is not a good model or attempt at all. If they wanted to collect pre-publication comments they could’ve used their website and countless blogs, including this one. Instead BEST went for a massive PR attack and cornered thousands of journalists into parroting the One and True Interpretation (Muller’s).

    There’s simply no way Muller can go back in six months’ time telling the parrots they should sing a different song in their future articles, without losing a lot of credibility in the journalists’ eyes. Therefore all but a handful of comments coming from the web will be treated as comments to the IPCC (“rejected”).

    And that will be the end of “open science” by BEST. Expect Muller to complain soon little constructive came his way.

  29. There’s simply no way Muller can go back in six months’ time telling the parrots they should sing a different song in their future articles, without losing a lot of credibility in the journalists’ eyes.

    That’s not necessarily true. They could go back in six months and say “Well, because of our superior process we found some flaws in our prior results, and because of our exemplary open-mindedness, we have now revised our results and feel it is extremely important to publicize our revised findings.” They would be hailed as superior scientists.

    But it’s nice to know that you’re 100% certain that future events could only go in the way that you describe.

    Judith – is this an example of what you were talking about regarding those expectations about uncertainty among your “denizens?”

  30. Joshua – you cannot stay on topic. About that, there’s 0.00000000% uncertainty. :)

    PS journalists write narratives not chronicles. It’ll take more than an “oops!” to make them swing the other way around.

    • omnologos –

      Joshua – you cannot stay on topic.

      I responded directly to the topic that was in your post – and noted in addition that your conclusion on that topic showed a willingness to disregard elements of uncertainty.

      The narrative I suggested (that there is a maverick group of scientists who are willing to rise above the partisan bickering and to restore the practices of science to the noble days of yore) would be a compelling narrative – one of many potential compelling narratives that you selectively ignore in your prognostication about what is or isn’t possible.

    • And while we are at it, let us not forget that someday, all iron beams will be manufactured using ionizing plasma streamers too.

  31. Josh-ua,

    You are not amusing anymore. You are a deliberately unrewarding distraction with no redeeming qualities. And I don’t like you. This is a little OT, but an appropriate reply to the vast majority of your posts.

  32. Oliver K Manuel wrote:

    BEST is now “hopelessly stuck” to deceptive science, like the Norwegian Nobel Prize Committee, the US National Academy of Sciences and the UK’s Royal Society.
    What a sad, sad day for science and for UC-Berkeley!

    Don’t forget the national science academies of:
    Australia
    Belgium
    Brazil
    Caribbean
    Canada
    China
    France
    Germany
    India
    Indonesia
    Ireland
    Italy
    Japan
    Malaysia
    Mexico
    New Zealand
    Russia
    South Africa
    Switzerland
    United Kingdom
    and many more…

    Keep circling those wagons. The global scientific community is restless.

  33. About UHI – since we know it’s real (Muller himself said so), he should have dug deeper and made a more thorough effort to get to the heart of the matter. Seems he grabbed the first plausible analysis that enabled him to dismiss the issue and didn’t seek any further. That is lame.
    For example: he could have compared the 16k “very rural” to the 23k “not very rural” stations (in addition to comparing the 16k to the whole 39k). He tried very hard not to find a UHI influence.

    Muller himself seems not to understand the UHI effect. He states in his WSJ piece that the urban areas comprise only 0.5% of the land area, and therefore they are not influential. This is wrong. Those 0.5% of land area contain about 55% of the thermometers (23k out of 39k). That is the UHI problem in a nutshell, and the problem with surface stations in general. Muller seems to be unaware of this. So, on this point at least, he’s unconvincing.

    • Those 0.5% of land area contain about 55% of the thermometers (23k out of 39k). That is the UHI problem in a nutshell,

      And why is that a problem? To show a spurious effect on trends, you’d have to show a discrepancy in the trends of the thermometers in urban areas and those not in urban areas. And that’s what the BEST study analyzed data to determine. How does that show a lack of awareness?


    • Seems he grabbed the first plausible analysis that enabled him to dissmiss the issue and didn’t seek any further.

      ‘Cause that’s exactly what I would do if every professional and amateur climate scientist on Earth were going to study the report…

      Since you are so convinced of Muller’s utter incompetence, and that your example is better, why not publish it?

      • I did not comment on Muller’s competence.
        I said he hasn’t dug deep enough into the issue; maybe they will do so in further studies. I said that, it seems to me, this study alone is unconvincing.

      • No – You merely insinuate that he is a complete scientific doofus.

        While your personal lack of conviction is obvious, it is also rather uninteresting.

    • Those 0.5% of land area contain about 55% of the thermometers (23k out of 39k). That is the UHI problem in a nutshell, and the problem with surface stations in general. Muller seems to be unaware of this.

      You may be misinterpreting the second paragraph of their introduction, which says “[Kriging] permits a random selection of stations to be made without giving excessive weight to heavily sampled regions.” This can be illustrated with a toy example consisting of a county with 3 rural stations R1, R2, R3, spread wide apart and evenly distributed around the county, and 1000 urban stations, U1-U1000 concentrated in the only city of that county (exaggerated to make the point clearer), for a total of 1003 stations in the county.

      Now suppose the temperatures for 1957 were as follows.

      R1 25
      R2 27
      R3 23
      U* 28.973 on average (total of 28973 when summed)

      Correct me if I’ve misunderstood, but you may be assuming that the average will be

      (25+27+23+28973)/1003 = 28.96.

      With kriging however the average will be

      (25+27+23+28.973)/4 = 25.99

      The point of kriging is to avoid giving excessive weight to heavily sampled regions, which is what happens when you simply average all the stations.
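
      To make the arithmetic above concrete, here is a tiny numerical sketch contrasting the naive all-station average with one that counts the city as a single region; the equal-region weighting is only a crude stand-in for what kriging accomplishes, not BEST’s actual algorithm.

      import numpy as np

      # Toy county from the example above: 3 spread-out rural stations and
      # 1000 urban stations crowded into one city
      rural = np.array([25.0, 27.0, 23.0])
      urban = np.full(1000, 28.973)

      # Naive average: every station counts equally, so the city dominates
      print(round(np.concatenate([rural, urban]).mean(), 2))   # ~28.96

      # Region-weighted average: the city counts once, like each rural site
      # (a crude stand-in for kriging's de-weighting of dense clusters)
      print(round(np.mean([*rural, urban.mean()]), 2))         # ~25.99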

      • Thanks for the clear explanation.
        Seems to me that kriging does a good geographic interpolation, but does not address the problem of UHI.

      • Well, it’s supposed to address it. Why do you think it doesn’t? Urban heat islands are contributing to the overall heating of the planet and therefore should be included. The objection to them is only valid when you count them twice, or a thousand times. As long as you only count them once you are accurately describing how the planet as a whole is heating up.

      • The question is when do you count them? Say, for example, as T decreases in mega-cities such as Moscow or Tokyo etc. at weekends or holidays (1-2C), what effect does this have on the underlying trend?

      • The question is when do you count them? Say for example as T decreases in mega cities such as Moscow or Tokyo

        That’s an excellent point. Kriging is just as applicable to time as to space (think of the whole data space as having two spatial dimensions on the sphere and one temporal dimension). However BEST’s discussion seems to be focused exclusively on space, and they appear not to have tried to exploit that additional degree of freedom available to kriging.

        What spatial kriging does do however is to make your question just as relevant to data from non-urban areas, in fact even more so since there are more non-urban than urban areas by a long shot.

        As with spatial kriging, temporal kriging can be far from a panacea, whether urban or non-urban. As an extreme example, suppose every sensor on the planet, whether urban or non-urban, was rigged to take one reading a day. If each sensor naively read the temperature when the day changed (according to local time), all temperatures would be recorded at midnight. Without kriging this would give a very different result from doing all readings at midday. But kriging can’t change that, since it can’t compensate for such extreme concentrations in either space or time.

        For temporal kriging to work you need some mix of day and night temperatures. Likewise at weekly instead of daily levels—you need some samples from the weekend as well as the week, assuming there’s the kind of difference you’re hypothesizing.
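
        A toy illustration of the midnight-only example: averaging a diurnal cycle sampled at a single local time misstates the true daily mean, and no amount of interpolation can recover readings that were never taken. The diurnal curve below is synthetic and purely illustrative.

        import numpy as np

        # Synthetic diurnal cycle: mean 15 C, coolest near 3 am, warmest mid-afternoon
        hours = np.arange(24)
        temps = 15 + 5 * np.sin(2 * np.pi * (hours - 9) / 24)

        print(round(temps.mean(), 2))                    # 15.0  -- true daily mean

        # Every sensor reads once a day, at local midnight
        print(round(temps[0], 2))                        # ~11.46 -- biased low

        # A mix of observation times does much better (exact for this symmetric toy)
        print(round(temps[[0, 6, 12, 18]].mean(), 2))    # 15.0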

  34. Ole Willis at Wattsup had an interesting, to me, critique on the BEST studies:

    http://wattsupwiththat.com/2011/10/24/what-the-best-data-actually-says/#more-49905

    Including a link to a post on UHI at CA:

    http://climateaudit.org/2010/12/15/new-light-on-uhi/

    • Steven Mosher

      Note also that Willis does set a boundary of sorts (at my nudging) on how big the actual effect can be: that is, the bias is no more than .3C

      That is important for people to get, for the following reason. We know by looking at SST and at satellite data that if there is any UHI bias in the record, it is less than .3C. Now, ask yourself, with the huge spread in trends of all the stations, with that huge variation, how easy is it to find a small effect?

      It would be easier if the UHI signal were 1C, but it’s not. Oh yes, on some day, at some city, you might see a peak value that is huge. But on average the effect is small, and hard to detect because of the underlying variance in the data series.

  35. Continuing about UHI.
    BEST could have done a sample study. Select – say – 100 definitely urban stations and 100 genuine rural ones, all with long records. Select the stations based on careful study of each one, to make sure (not using MODIS or such). Then see what the temp trend differences are in those two samples.
    That’s what I would do to learn if there is any UHI effect on the trend or not. If I found no UHI, I would repeat the experiment a few times with other samples, and then maybe I could declare the UHI effect not influential.
    Not only do I find the BEST study unconvincing, I find they didn’t try hard enough.

    • Concern noted.

      And you are the BEST climate science “decider” because…?

    • Jacob,

      You are correct. They admitted in the paper that they were not studying the UHI effect explicitly. MOD500 did not allow them to do that. Was that their only choice of tools? They made it up as they went along, and the result is worse than useless; it is misleading.

      • They didn’t study it explicitly, yet they declared it does not influence trends, i.e. drew conclusions without studying…
        Do they intend to study further, or is the matter settled?

      • They drew conclusions on the topic they studied – the effect of UHI on land surface temperature trends.

        In fact, they said that the UHI effect is real, but why would you expect them to focus on a topic that they didn’t set out to study – the UHI effect?

      • Do some research Joshy. Read the freaking title of the paper. But did they do a comparison of urban sites to unpopulated sites? I will help you, NO! They could not do that with the MODIS tool. So they winged it. They compared rural sites to allegedly “very rural” sites. What has that got to do with the title of the freaking paper?

      • Don –

        Apparently you missed my questions. Answers would be appreciated:

        http://judithcurry.com/2011/10/25/best-of-the-best-critiques/#comment-127971

      • Don –

        Is this the title that you’re referring to?

        Influence of Urban Heating on the Global Temperature Land Average

      • Joshy,

        You have not read that paper. I am not surprised. I don’t have any more time for you.

      • Don –

        This is from the study:

        The urban heat island effect is locally large and real, but does not contribute significantly to the average land temperature rise.

        And Don – I would hope that you’d reconsider being “done with [me]” long enough to answer my questions. You seem to be ducking them, but I’m sure that with someone of your character, that couldn’t be the case. Here, in case you missed them:

        http://judithcurry.com/2011/10/25/best-of-the-best-critiques/#comment-127971

      • OK, Josh-ua. I looked at your questions. I am sure you can find the answer. You make me laugh.

    • You can of course do this and you will find a UHI effect. For example, if you compare the lowest population sites with the highest population sites you will find a difference. Here is the problem.

      1. You have no idea how to generalize this back to the global series. Let’s make a silly example for illustration. BEST has 39,000 stations. Suppose the most rural 10 had a trend of .05C per decade and the most urban 10 had a trend of .1C per decade. What can you conclude about the bias in the full sample? Not much.
      2. Such samples tend to be geographically heterogeneous. For example, your most rural might all come from the US while your most urban might come from China. So are you comparing urban/rural or US/China? Simply, by picking from the extremes you lose both the ability to generalize back to the full sample and you lose spatial diversity.

      So, the test you suggest has been done by others. Yes, we find a UHI signal. It’s small even in these cases. If you remove the most urban from the entire sample your mean does not change appreciably, and you’ve lost spatial coverage. In short, there are a lot of complications with any approach.

      • Simply, by picking from the extremes you lose both the ability to generalize back to the full sample and you lose spatial diversity

        Seems to me that this is a problem that runs throughout many of the criticisms. Picking extreme examples of how BEST’s classifications may be inaccurate tells you nothing about the influence of UHI on the Global Temperature Land Average (which was the subject of their paper).

      • steven,

        You may not be able to generalize it back to the global series, but the same can be said of the BEST approach, yet they assure us in a media blitz that declares “climate skepticism” is dead, that UHI is actually a little bit negative. And if you remove the most urban from the entire sample your mean does not change appreciably, perhaps because you have not removed the UHI effect from the sample. I go back to asking if anyone can explain why the 1/3 of cooling-trend stations are so well intermixed with the warming stations. Are the majority of the stations influenced by the UHI effect? Am I making any sense, steve? I trust you on this. Help me out.

      • Don –

        “…yet they assure us in a media blitz ….that UHI is actually a little bit negative.”

        Can you quote where they said that? I thought they said that UHI is real (and positive), but that it has a negligible effect on global temperature land averages.

      • “If you remove the most urban from the entire sample your mean does not change appreciably and you’ve lost spatial coverage.”
        It should have been done at least as an exercise, to study the consequences.
        If we know the UHI is real, how come they completely ignore it?

        Moreover, by their calculation the UHI effect is negative. You can’t just dismiss the discrepancy by a wave of the hand and say “forget it, assume it’s zero”.
        This matter is treated sloppily.

  36. The Briggs critique is very good and certainly suggests that there is a lot more uncertainty than stated by Muller et al. However, Briggs merely says this in passing: “Kriging (a standard practice which comes with its own set of standard critiques with which I assume the reader is familiar) is used to model non-observed spatial locations.” That is, Kriging is used instead of grid cell averaging. In effect the entire global temperature field is based on a complex form of linear interpolation from the existing stations.

    I wonder what these “standard critiques” of Kriging actually are, and what they would tell us about the uncertainties in the Muller et al approach? Google Scholar lists about 50,000 articles that use the term Kriging, with over 3,000 that use it in the title, which means this is a well known method. It is widely used in actual applications, including mining, water resource estimating and, ironically, oil field reserve estimation.

    My impression is that Kriging is not considered to be especially accurate, more like a rough estimate, but others who are familiar with its use might know more about that.
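
    For readers who have not seen it, here is a bare-bones one-dimensional sketch of ordinary kriging with an assumed exponential covariance; the station positions and values are made up, and BEST’s actual implementation is far more elaborate than this.

    import numpy as np

    def cov(d, sill=1.0, length=2.0):
        """Assumed exponential covariance as a function of separation distance."""
        return sill * np.exp(-np.abs(d) / length)

    # Hypothetical station locations along a transect, and their anomalies
    x_obs = np.array([0.0, 1.0, 4.0, 7.0])
    z_obs = np.array([0.2, 0.3, 0.8, 0.5])

    def krige(x0):
        """Ordinary kriging estimate at location x0."""
        n = len(x_obs)
        # Covariances among observations, bordered by the unbiasedness constraint
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = cov(x_obs[:, None] - x_obs[None, :])
        A[n, n] = 0.0
        b = np.ones(n + 1)
        b[:n] = cov(x_obs - x0)
        w = np.linalg.solve(A, b)[:n]
        return w @ z_obs

    # Values at station locations are reproduced exactly; elsewhere it interpolates
    for x0 in (0.0, 2.0, 5.5):
        print(x0, round(krige(x0), 3))

    The constraint row forces the weights to sum to one, which is the sense in which kriging is a spatially informed weighted average rather than a simple one.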

  37. I’m a little skeptical about the claim that BEST is in line with the other records. It seems to me that it is in line with the hottest of the other records. But the “being in line” is partially imaginary. Those other records that it is in line with all do Time of Observation Bias (TOB) corrections. These are substantial corrections to the hot side. As far as I can tell, BEST does not do TOB. And it’s doubtful that their continuity algorithms will catch the change, since Time of Observation changes don’t really cause any identifiable discontinuities. So BEST is as hot as the hottest without TOB – meaning that they are actually hotter than the hottest by about .2C.

  38. Looks like Steve McIntyre’s suggestion that the proxies fail to match BEST is rather wide of the mark: http://imageshack.us/photo/my-images/830/slide1w.png/

    • I’ve just seen Steve has done a better job than me in showing the match between BEST and the proxies: http://climateaudit.files.wordpress.com/2011/10/muller_plus_wilson.png

    • The chart is a little busy to be able to tell. But when I look at it, it looks like McIntyre is right on the mark.

      • Tilo,

        Here’s what Steve wrote: “The decade of the 1810s is shown [by BEST] in their estimates as being nearly 2 degrees colder than the present. Yes, this was a short interval and yes, the error bars are large. The first half of the 19th century is about 1.5 degrees colder than at present. At first blush, these are very dramatic changes in perspective and, if sustained, may result in some major reinterpretations.”

        In fact as Steve’s replot of Wilson’s figure shows, the proxies already indicated a mean anomaly about 1.5 degrees colder than present in the first half of the 19th century. Some reconstructions (Wilson’s and Esper’s) run colder than BEST. At second blush, hardly such a dramatic change in perspective and certainly not warranting much reinterpretation.

        A rather different point could be made with regard to proxy verification. From 1800 to 1850 these existing proxies generally show a rise in temperature, as does BEST. Given the criticism that proxies have not demonstrated much predictive power, the level of agreement in this new period of data can provide an updated level of confidence.

  39. When they want to scare you about global warming, Urban Heat Islands are hotter than Hell killers:

    http://www.nasa.gov/topics/earth/features/heat-island-sprawl.html

    But they have had a slightly cooling effect on the temperature record, according to the very BEST analysis that climate science can produce.

  40. Judith,

    The BEST team has drawn some heat for putting their drafts online prior to peer review and publication, but I think that your selection of critiques displays the wisdom of their (and, by extension, your) approach. Whatever the results of peer review might have been, this has been in fact a very severe review by talented experts who were not likely to have been selected as reviewers. You’ll recall that the paper in the Annals of Applied Statistics by McShane and Wyner had critiques by 9 teams of colleagues as well as a rejoinder; that paper was put online well in advance of publication, and the authors referred to the online critiques as well as published critiques in their printed rejoinder and supporting online material (the published version alone amounted to 110 pp.)

    I think that Dr. Muller had it right: this is a modern-day version of circulating a preprint while the paper is in review. The published version is likely to be much improved, even if not every critique can be resolved to the satisfaction of all involved.

    • Matt,

      But they also sensationalized their non-peer reviewed results in a media blitz that declared the end of climate skepticism. Stupid, and dishonest.

      • The spin of the AGW community that somehow BEST makes the idea of a CO2 climate catastrophe ‘settled science’ makes no sense.
        How do you get from confirming that temperatures have increased a bit over the period in question to a climate catastrophe except as a leap of faith?

      • They said that the role of humans in producing global warming may have been exaggerated.

        As to “media blitz”, I do not think it credible that they might have put up their 4 papers and their data and code and then said (as I would have recommended them to say) “No comment” to all media inquiries. Anything they might have said, and all of their silences, would have been attacked from left and right, from believers, skeptics and deniers. Of the zillions of imperfect ways to handle the media for this politicized topic, they chose one of the imperfect ways.

        Sometimes, like now, one just has to run the gauntlet and hope that the whips and scorns do not do too much harm on the way.

      • Matt,

        Have you read Muller’s article in the WSJ?

        http://online.wsj.com/article/SB10001424052970204422404576594872796327348.html#printMode

        “The Case Against Global-Warming Skepticism
        There were good reasons for doubt, until now.
        By RICHARD A. MULLER ”

        And the widespread coverage that the study has received in the so-called mainstream media carries the same narrative-Climate Skeptics Snuffed-with the statement in the last line of Muller’s article made similarly obscure, if mentioned at all. Most people don’t read that far down the page.

      • In my opinion, he would have been better advised not to write it. On the other hand, the WSJ solicits opinion pieces on timely topics from informed people, and generally publishes pieces that, over time, amount to a debate. I don’t consider that a “media blitz”.

        That the mainstream and other media are covering the papers and selectively quoting from them is an unavoidable fact of life. Muller et al did not create it, and could not have prevented it had they tried. I do not see any course of action for which a credible claim can be made that the outcome would be better than what we have.

      • Matt,

        I don’t object to Muller writing an opinion piece in the WSJ. But the tone is wrong, and he writes checks with his mouth that his ass can’t cash. It’s garbage, and influential scientists who might be affecting public policy that could cause poor people a lot of hardship, should be more careful. And the Muller tone is reflected and amplified in the mass media coverage, which has been extensive. Muller looks to me like the band-leader of an orchestrated attempt to squelch debate and settle the science, again.

      • It’s hard to figure out what Dr. Muller considers a skeptic to be. His new, post-skeptical view is summarized by his concluding statement: “Global warming is real. Perhaps our results will help cool this portion of the climate debate. How much of the warming is due to humans and what will be the likely effects? We made no independent assessment of that.” So I am left with the impression that Dr. Muller, a self-described converted skeptic, doubted the empirical evidence of a temperature rise over the last century, despite the evidence of retreating glaciers and increasing ice-free periods in Arctic waters, prior to BEST serving as his epiphany. That makes his prior skepticism more extreme than Steve McIntyre’s who didn’t need BEST to acknowledge the empirical trend. The second part of Dr. Muller’s statement is reasonable, but also annoying in implying that all skeptics need the same epiphany as he did to focus on the questions that really matter. Many skeptics have been doing that for many years.

      • IIRC Muller felt betrayed by Climategate and suspected that they had actually reported false results, rather than just trying to shut the skeptics up. So, when he discovered that they hadn’t reported false results, he maybe went a bit overboard in the other direction. Not that he said anything untrue, but his statement offered room for AGW people to counteract much of the effect of his previous criticism.

        But it’s important to note that as we move from models trying to replicate/estimate global average temperature to models actually trying to get regional effects right, this new database is going to be an enormous asset. This, IMO, is much more important than squabbling over the wording of his press communications.

      • AK,

        Well, I don’t care much about your goofy speculation that Muller felt betrayed by Climategate, and that has something to do with something. And I don’t see how this new database, which contains the same old data re-warmed, is more important than squabbling over the insulting, dishonest, self-aggrandizing wording of his press communications. Sorry, I can’t just do it to Josh-ua, when he is silly.

      • Unfortunately the nuances of wording have consequences, although I agree that it may be too harsh to hold Dr. Muller responsible for all of the hootchie-kootchies and high-fives that some journalists with a predetermined narrative have been giving themselves with it. And I definitely agree that better data is never a bad thing, although I suspect that the end effect will be to refine statistical estimates without changing essential conclusions.

    • agreed

      • Thanks Judy :) Just kidding. What do you think of Dr. Muller’s article in the WSJ? Has BEST really made the case against climate skepticism?

      • PS: Please give Joshua his own thread. Or at least put it to a vote. Let’s see what our consensus is.

      • Working on a post now to address some of the PR-related issues

      • Thanks again, Judy. But what about Josh-ua :) Actually, I think you should warn us both of banishment, unless we stop the foolishness. It really messes things up for the serious intelligent folks.

    • What does that say about the quality of standard peer review?

  41. Here is a brief discussion from the Muller et al “Averaging” paper, of how
    interpolation is used in the big three surface statistical models to eliminate large changes in station averages, on the grounds that they are “spurious”, even though such changes may well occur naturally. Basically they force neighboring stations to synchronize. This
    process is generally known as “homogenization”. Their discussion of the other methods is more revealing than their discussion of their own method.

    Muller et al write:
    “Additionally, though the focus of existing work has been on long records,
    it is unclear that such records are ultimately more accurate for any given time interval than are shorter records covering the same interval. The consistency of long records is affected by changes in instrumentation, station location, measurement procedures, local vegetation and many other factors that can introduce artificial biases in a temperature record (Folland et al. 2001, Peterson and Vose 1997, Brohan et al. 2006, Menne et al. 2009, Hansen et al. 2001). A previous analysis of the 1218 stations in US Historical Climatology Network found that on average each record has one spurious shift in mean level greater than about 0.5 C for every 15-20 years of record (Menne et al. 2009). Existing detection algorithms are inefficient for biases less than 0.5 C, suggesting that the typical length of record reliability is likely to be even shorter. All three groups have developed procedures to detect and “correct” for such biases by introducing adjustments to individual time series. Though procedures vary, the goal is generally to detect spurious changes in a record and use neighboring series to derive an appropriate adjustment. This process is generally known as “homogenization”, and has the effect of making the temperature network more spatially homogeneous but at the expense that neighboring series are no longer independent. For all of the existing groups, this process of bias adjustment is a separate step conducted prior to constructing a global average.” (End of quote.)

    This is followed by the Muller et al discussion of how grid cells and interpolation are handled by the three models. GISS uses more interpolation than HadCRU and NOAA uses more than that. In fact NOAA also extrapolates over time, a questionable practice. See last sentence.

    The Muller et al method is somewhat different; basically it uses interpolation instead of gridding. That is, it assumes that all the temperatures anywhere between any existing stations lie on a complicated linear mesh, as it were. I know of no physical reason to believe this. Ironically this method, called Kriging, is used in oil field exploration to estimate reserves. My understanding is that no one thinks these estimates are very accurate.

    But in no case is any of this normal statistical averaging, so conventional probability sampling theory just does not apply. For example, there are no confidence intervals.

    Muller et al write:
    ”After homogenization (and other quality control steps), the existing groups place each corrected time series in its spatial context and construct a global average. The simplest process, conducted by HadCRU, divides the Earth into 5 x 5 latitude-longitude grid cells and associates the data from each station time series with a single cell. Because the size of the cells varies with latitude, the number of records per cell and weight per record is affected by this gridding process in a way that has nothing to do with the nature of the underlying climate. In contrast, GISS uses an 8000-element equal-area grid, and associates each station time series with multiple grid cells by defining the grid cell average as a distance-weighted function of temperatures at many nearby station locations. This captures some of the spatial structure and is resistant to many of the gridding artifacts that can affect HadCRU. Lastly, NOAA has the most sophisticated treatment of spatial structure. NOAAs process, in part, decomposes an estimated spatial covariance matrix into a collection of empirical modes of spatial variability on a 5 x 5 grid. These modes are then used to map station data onto the grid according to the degree of covariance expected between the weather at a station location and the weather at a grid cell center. (For additional details, and explanation of how low-frequency and high-frequency modes are handled differently, see Smith and Reynolds 2005). In principle, NOAAs method should be the best at capturing and exploiting spatial patterns of weather variability. However, their process relies on defining spatial modes during a relatively short modern reference period (1982-1991 for land records, Smith and Reynolds 2005), and they must assume that the patterns of spatial variation observed during that interval are adequately representative of the entire history. Further, if the goal is to understand climate change then the assumption that spatial patterns of weather variability are time-invariant is potentially confounding.” (End of quote.)
    http://berkeleyearth.org/Resources/Berkeley_Earth_Averaging_Process Pages 5-6
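
    As a rough illustration of the HadCRU-style step described in that passage, here is a toy sketch of binning stations into 5x5 degree cells, averaging within each cell, and then combining cells with cos(latitude) area weights. The station list is invented, and the real procedures involve anomaly baselines, coverage masks and much else.

    import numpy as np

    # Hypothetical stations: (latitude, longitude, temperature anomaly)
    stations = [
        ( 52.3,  -1.2, 0.6),
        ( 51.8,  -0.4, 0.5),   # falls in the same 5x5 cell as the station above
        ( 40.1, 116.4, 0.9),
        (-33.9, 151.2, 0.3),
    ]

    # Bin each station into a 5x5 degree grid cell
    cells = {}
    for lat, lon, anom in stations:
        key = (int(np.floor(lat / 5)), int(np.floor(lon / 5)))
        cells.setdefault(key, []).append(anom)

    # Average within cells, then weight cells by the cosine of their centre latitude
    means, weights = [], []
    for (ilat, _), anoms in cells.items():
        means.append(np.mean(anoms))
        weights.append(np.cos(np.radians(ilat * 5 + 2.5)))

    print(round(np.average(means, weights=weights), 3))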

  42. David,

    “The Muller et al method is somewhat different; basically it uses interpolation instead of gridding. That is, it assumes that all the temperatures anywhere between any existing stations lie on a complicated linear mesh, as it were. I know of no physical reason to believe this. Ironically this method, called Kriging, is used in oil field exploration to estimate reserves. My understanding is that no one thinks these estimates are very accurate.”

    That goes back to the issue that I have brought up several times: the 1/3 of stations that exhibited cooling trends are well-intermixed with the warming trend stations. Continuity is not there.

    • Don Monfort

      “..My understanding is that no one thinks these estimates are very accurate.”

        Kriging, while vastly better than gridding in a number of ways, where you can afford the computational complexity and make wise choices, isn’t as good as if you had reliable measurements more closely spaced.

        BEST appears to have taken into account the error introduced by their approach, and still arrived at their 0.911C +/-5% with 95% confidence figure after Kriging.

      Or have I misread?

      Or, if BEST fails to account for the error introduced by Kriging, what interpretation of error and confidence do you feel is more appropriate, and why?

      What alternative do you recommend that is better? Why?

      • Bart R,

        I am not a statistician, but I am also not stupid (in my very humble opinion) and I believe that there is something significantly wrong with the data, for reasons I have enumerated above. I am not suggesting that I know of a better method to deal with bad data. Maybe they have done the best they can with the garbage. But that doesn’t mean it is of much use.

      • Don Monfort

        I have an indifferent relationship with statistics, and a great many people opine about my stupidity, humility, opinion, beliefs, wrongness and reason.

        There are multiple methods to deal with bad data. BEST selected some fairly good ones, and demonstrated why and how they chose these means, and the limits and qualities of these methods (where general knowledge among statisticians may be of uneven quality, they pretty much treated the commonplaces of standard statistics as common knowledge).

        Did they do the best that can be done with the garbage? That will always be in question.

        Did BEST apply fit-to-purpose methods to create from the garbage a measurable utility?

        Absolutely; their statistics look much better than I expected, and compare extremely well with standards across multiple disciplines. In several ways, the BEST analysis exceeds the needs of the use for which it was intended.

        BEST also accompanied the release of its dataset with examples of purposes the dataset is fit for.

        While review and criticism will improve our understanding of the new dataset and its qualities, and this may take months or years, the dataset itself raises interesting and valuable new climatology questions to concentrate on, a worthy accomplishment in itself.

        Likely, we will see this dataset referenced in hundreds or thousands of scholarly papers in the future, which will be the telling measure of its utility.

      • I have suggested on occasion that someone should do a fit to an expansion in spherical harmonics.

      • I like music, but will that be the answer to lack of thermometers in vast areas, lost data, moving stations, bad siting, discontinuity in coverage, institutionalized confirmation bias…blah…blah…blah?

        If BEST is the best they can do with the temperature record, so be it. I am willing to stipulate that there has been some warming, but we don’t really know much about several confounding factors, like the UHI effect. Let’s put it at up to .3 C, by calculation of steven mosher :), make some other SWAGs, recognize the uncertainty inherent in the data and the methods of analysis, and come to the conclusion that we don’t know enough to distinguish the warming from natural variability.

      • Bartemis

        I have in mind ‘re-Kriging’, where sufficiently frequent observations for a number of sets of stations are added into the dataset by intensive new measurement efforts, and the trends for these actual observations are compared to the Kriged values.

        This may allow better estimation of error, and might (where new rules of interpolation based on site properties can be found and confirmed by testing) even allow better estimation of temperature globally.

        Wow, was that awkwardly said.

        Simplified:

        a) Select some representative stations and their Kriged data;
        b) Gather something like three orders of magnitude more new sensor data within the Kriged areas between the historic sites over a course of years;
        c) Compare the Kriged values with sparser stations to the Kriged values with ‘sufficient’ stations.

        Of course, this can be done in reverse, by removing stations from the current dataset and comparing the Kriged profiles to the sparser Kriged profiles; a toy sketch of that hold-out check follows below.
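
A minimal version of that ‘remove stations and compare’ idea, using a plain Gaussian-kernel smoother as a stand-in for Kriging (this is not the BEST algorithm, and the ‘stations’ and ‘climate field’ are invented), might look like this:

# Leave-one-out check of an interpolation scheme (toy example only).
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(-60, 60, size=(50, 2))      # made-up station coordinates

def true_field(xy):
    """Made-up smooth 'climate' used to generate the synthetic anomalies."""
    return 0.5 + 0.01 * xy[:, 0]

values = true_field(coords) + rng.normal(0, 0.1, 50)

def kernel_interp(train_xy, train_v, query_xy, length=15.0):
    """Predict at query points from training stations (Gaussian weights)."""
    d = np.linalg.norm(train_xy[None, :, :] - query_xy[:, None, :], axis=2)
    w = np.exp(-(d / length) ** 2)
    return (w @ train_v) / w.sum(axis=1)

# drop each station in turn and predict it from the remaining ones
errors = []
for i in range(len(coords)):
    mask = np.arange(len(coords)) != i
    pred = kernel_interp(coords[mask], values[mask], coords[i:i + 1])[0]
    errors.append(pred - values[i])

print("leave-one-out RMS error:", np.sqrt(np.mean(np.square(errors))))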

      • The problem… well, a problem, is that the data are nonuniformly sampled, and the space is not Euclidean. These problems would be ameliorated using spherical harmonics.

      • “I have suggested on occasion that someone should do a fit to an expansion in spherical harmonics.”

        It’s done here for trends. There are lots of spherical harmonic fits for monthly averages – here is a recent example.

        These are with SST data. If you use land only and try to get reasonable resolution, you get weird stuff over the sea.

      • Nick Stokes

        Very nice.

        Do you have comments on the differences of the methods, in terms of what purposes each is suited to?

      • Bart,
        I’ve never found that the average from a spherical harmonic fit (the zero-order coefficient) is significantly different from the area-weighted least-squares average, which is what I otherwise use. I’m mainly interested in the spatial variation.

        The BEST method puts a lot into the weighting besides area density; in fact, that isn’t explicitly there at all, though their modified Kriging has an area aspect. That bothers me a bit, because basically the summation performed should represent an area integral, which the area weighting achieves.
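
For anyone who wants to try that comparison, here is a rough Python sketch using scipy’s (older) sph_harm routine. The scattered ‘station’ values are synthetic, and only zonal (m = 0) harmonics are fitted, so this illustrates the idea rather than reproducing anyone’s production code.

# Compare the l=0 term of a spherical-harmonic fit with a
# cos(latitude)-weighted mean (synthetic data, zonal harmonics only).
import numpy as np
from scipy.special import sph_harm

rng = np.random.default_rng(1)
lat = rng.uniform(-90, 90, 500)            # degrees
lon = rng.uniform(-180, 180, 500)
field = 1.0 + 0.5 * np.sin(np.radians(lat)) ** 2 + rng.normal(0, 0.1, 500)

theta = np.radians(lon) + np.pi            # azimuth in [0, 2*pi)
phi = np.radians(90.0 - lat)               # colatitude in [0, pi]

# design matrix: zonal harmonics Y_l^0 for l = 0..4 (real-valued)
A = np.column_stack([sph_harm(0, l, theta, phi).real for l in range(5)])
coef, *_ = np.linalg.lstsq(A, field, rcond=None)

y00 = 1.0 / (2.0 * np.sqrt(np.pi))         # Y_0^0 is this constant everywhere
sh_mean = coef[0] * y00
w = np.cos(np.radians(lat))                # area weight for random lat/lon points
weighted_mean = np.sum(w * field) / np.sum(w)
print("spherical-harmonic mean:", sh_mean)
print("cos(lat)-weighted mean: ", weighted_mean)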

      • It might be interesting to see someone cobble the BEST dataset into such a form.

        I’m still troubled that topography apparently isn’t explicitly considered, unless I missed it.

        I know it’s minor and taken into account implicitly in many ways, but I like complicated drawings of complicated things.

      • Nice, Nick. Can you plot the (0,0) component at sea level over the time period?

      • It could also be interesting if you could animate the global plots on some 2-d projection of the globe.

      • Are you using raw data from actual measuring stations only? I’m a little concerned when you say you use GISS. Does that mean you are essentially recreating GISS, with all its highly questionable extrapolations over the poles?

      • “I’ve never found that the average from a spherical harmonic fit (the zero-order coefficient) is significantly different from the area-weighted least-squares average, which is what I otherwise use.”

        Could you perhaps show us, so that we can see it for ourselves?

  43. Tamino has other concerns with the AMO paper [link], at the heart of his concern seems to be the additional autocorrelation introduced by the moving average filter. He does not buy the conclusion that the AMO signal is larger than ENSO.

    I read Tamino’s critique. It is worth reading — that is to say “I recommend it.” His comments are worth consideration, and I hope that the BEST team addresses them.

  44. Are they really using Raw Data?
    Or have they used “Quality Controlled” data that no longer shows the same Temperatures for the 30s, 40s, 50s, & 60s that were used prior to the so-called “corrections” made after 2000?
    Have you compared their charts to pre 2000 charts?
    I also haven’t seen anyone adequately explain the apparent “errors” found in their data showing winter months with higher temperatures than summer months and TAve with higher values than the TMax they originally posted and withdrew.

  45. Could someone please explain to me the physical significance of the global average of an intensive variable like temperature?

    • Bartemis

      Could someone please explain to me the physical significance of the global average of an intensive variable like temperature?

      That is an excellent question, and one everyone attempting to understand climate ought understand before going further, I think.

      In a strict Thermodynamic sense, the best we can say about measuring temperature is the “Zeroth Law” (http://en.wikipedia.org/wiki/Zeroth_law_of_thermodynamics); a thermometer can tell us two bodies are in thermal equilibrium if they have the same temperature.

      Which tells us remarkably much, and remarkably little. In gases, different pressures, in fluids different speeds past apertures, in stoichiometry, and at phase-change in materials, and when speaking of radiative exchange, different absolute temperatures, temperature differences, and other variables besides may confound the meaning of temperature.

      Globally, there are many such confounding variables.

      A relative change of four degrees at the equator and at the poles is a very different thing, when speaking of the T^4th power relationship of radiative physics.

      A relative change of 4 degrees between -3C and +1C in an ice field is a vastly different matter than the same 4 degrees between -7C and -3C, or +1C and +5 C. Both the melting point of freshwater ice (0C), which takes 80 times the energy to change solid to liquid as to achieve the same temperature change without state transition, and the maximum density of sea water — which affects convection below the surface — are passed in that vital 4 degrees around zero.
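
A quick back-of-envelope check of those two points (rough textbook numbers only, nothing from BEST):

# Same 4 K rise, very different change in emitted power at different baselines;
# and the ~80:1 ratio of latent heat of melting to specific heat of water.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated(t_kelvin):
    return SIGMA * t_kelvin ** 4

for t0 in (303.0, 253.0):   # roughly a warm tropical vs. a polar surface, K
    print(t0, "K:", round(radiated(t0 + 4) - radiated(t0), 1), "W/m^2 extra")

# Melting 1 kg of ice at 0 C takes ~334 kJ; warming 1 kg of liquid water by
# 1 C takes ~4.2 kJ, i.e. roughly the 80:1 ratio mentioned above.
print("latent/specific ratio ~", round(334 / 4.2, 1))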

      Plants and microbes are highly sensitive to small temperature differences, when taken in large quantities, and different plants react differently; so too do different microbes, both in the soil with effects on soil quality for botany, and in the air, with effects on contagion.

      To a lesser extent, animals too share this sensitivity, as can be seen from documented range pattern changes both in latitude and altitude across multiple species, as well as size shifts.

      Painting any temperature change with so wide a brush as ‘global’ (or even regional) requires great care to identify each confounding variable and account for specific meaning to such elements of the environment.

      With CO2 as a confounding variable, global temperature is a somewhat useful measure, though there could be better ones.

      A CO2-impact-factor-weighted global temperature, for example, that identified and eliminated from the global temperature things like regional ocean oscillations, aerosols, clouds, water vapor, might be more useful if it could be produced and agreed on.

    • Consider, as another spatial average, the location of the center of gravity of an aircraft. It doesn’t tell you how the aircraft or anything on it works, but if you have a series of locations of it you can see where it is headed and how fast it is going. In fact, you can download terabytes of such measurements from the Federal Government and track all the aircraft flights in the US over a particular time.

      The time series of mean temperatures of the Earth tells whether it is cooling (radiating heat more rapidly than it receives radiant heat), warming (receiving more than radiating), or staying within a narrow range (inflow and outflow approximately equal over long periods of time.)

      This is not hard to understand.

      Whether we have it accurately estimated at any time is a different question, but the physical significance is clear enough.

      • I don’t think you understood the question. It’s an old controversy, whether an average of an intensive variable is as meaningful as an extensive one, such as enthalpy.

        http://en.wikipedia.org/wiki/Intensive_and_extensive_properties

        It’s a thermodynamic issue, not a mathematical one.

      • whether an average of an intensive variable is as meaningful as an extensive one, such as enthalpy.

        Reread the question: the physical significance of the global average of an intensive variable like temperature ?

        That is not a question about the rank ordering of the meaningfulness of two quantities. It is about the significance of the mean temperature. I answered the question as it was asked.

      • John Vetterling

        I still don’t think you understand the issue.
        Averaging temperatures is like averaging the density of two heterogeneous substances of unknown mass.

        Mathematically you can calculate it, but physically it lacks any real meaning.

        You simply cannot take a temp in Colorado Springs at 6000′ and 30% rel humidity and add it to a temp from Birmingham Ala, El ~ 200, rel hum ~ 80%, divide by two and pretend that represents anything meaningful.
        This is elementary thermodynamics.

        When you realize that a significant number of stations actually show cooling it could substantially alter the results if you were to actually estimate the enthalpy.

      • Right. It’s sort of like one of the things that sent Richard Feynman screaming out of his basement when he was reviewing textbooks for the LA school system: “what is the sum of the two temperatures?”.

        Just because you can do a mathematical operation on a physical variable doesn’t mean that it’s meaningful. Any more than counting green stars.

        I’m not taking a position one way or the other on the thermodynamic issue btw, just pointing out that it’s not a stupid or trivial question.

      • Since nobody else has bothered to answer it (although I’m sure they all know the answer)…

        You simply cannot take a temp in Colorado Springs at 6000′ and 30% rel humidity and add it to a temp from Birmingham Ala, El ~ 200, rel hum ~ 80%, divide by two and pretend that represents anything meaningful.

        They didn’t. They calculated a variance specific to that station from the raw data, then did their manipulations on that. (AFAIK.)

      • Brandon Shollenberger

        AK, your response is correct, but it doesn’t have any meaning. The fact anomalies are used doesn’t somehow make the results have a physical meaning. In fact, it makes the results less physically relevant.

        The question isn’t whether or not a global average as calculated by various groups has some physical meaning. It doesn’t. The question is whether or not it can be used as a reasonable metric. In that regard, it can, though it is important to remember what it actually represents.

      • @Brandon Shollenberger…

        The question isn’t whether or not a global average as calculated by various groups has some physical meaning. It doesn’t. The question is whether or not it can be used as a reasonable metric. In that regard, it can, though it is important to remember what it actually represents.

        I’d call it a myth. Not a lie, but a metaphor based on truth. The truth in this case is the integral of T(x,t). The metaphor points more towards the political and ideological agenda, IMO.

      • Brandon Shollenberger | October 26, 2011 at 6:33 pm |

        “The question isn’t whether or not a global average as calculated by various groups has some physical meaning. It doesn’t. The question is whether or not it can be used as a reasonable metric. In that regard, it can…”

        Because…?

        “…, though it is important to remember what it actually represents.”

        What does it represent?

      • John Vetterling: I still don’t think you understand the issue.
        Averaging temperatures is like averaging the density of two heterogeneous substances of unknown mass.

        I was going to complicate the issue even further. Lots of the science is concerned with equilibrium states of the thermodynamic quantities, whereas the Earth is never in equilibrium. Equilibrium presentations (e.g. Pierrehumbert’s book “Principles of Planetary Climate”) treat the troposphere as though it’s well mixed, while there are slight deviations from that. You could have the earth surface warming and the upper troposphere cooling, or vice-versa, and still have the equilibrium approximation reasonably accurate by most standards.

        The point that you raise pertains to the non-equilibrium case. Heat flows from high temperature to low temperature, not from high heat content to low heat content. At equilibrium, substances in contact have the same temperature, no matter what their densities and specific heats. For really interesting presentations of non-equilibrium thermodynamics and dissipative dynamic systems consult “Modern Thermodynamics” by Dilip Kondepudi and Ilya Prigogine, especially the last 2 chapters.

        I distinguished between the global mean and the estimate of it. If the global mean is increasing, that’s probably meaningful, though lots of details remain, such as how fast is heat flowing into the deep ocean?

        You haven’t shown that the global mean isn’t meaningful, you have shown that it isn’t everything.

        Just because you can do a mathematical operation on a physical variable doesn’t mean that it’s meaningful. Any more than counting green stars.

        That is a very facile answer in that the exact answer may be more difficult to come by but a qualitative result is easy to understand.

        Say we had two boxes filled with gas at different temperatures around room temperature. T1 = T+dT and T2 = T-dT.
        If we bring these together keeping it insulated from the surroundings, the temperature will be somewhere between T1 and T2.
        You can think of some degenerate or pathological cases, involving vastly different pressures, volumes, and molar count of gas molecules, but that is not what we are talking about here. The value will more than likely be around T, which happens to be the average of the two temperatures.
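
The two-box argument, and the way unequal heat capacities pull the result away from the straight average, can be written down in a few lines (all numbers invented for illustration):

# Final temperature of two bodies brought into contact, no losses:
# the heat-capacity-weighted average of the two temperatures.
def mix(T1, C1, T2, C2):
    return (C1 * T1 + C2 * T2) / (C1 + C2)

T, dT = 293.0, 5.0
print("equal boxes:  ", mix(T + dT, 1.0, T - dT, 1.0))   # -> 293.0, the straight average
print("unequal boxes:", mix(T + dT, 1.0, T - dT, 9.0))   # -> pulled toward the bigger box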

        The first time I remember hearing about this “temperature averaging issue” was in the Essex & McKitrick book “Taken by Storm”, which was about 10 years ago. People read that stuff and get it in their head and then it spreads and infects a generational mindset. Is that what we are seeing?

      • Brandon Shollenberger

        Bartemis, I hope your first question wasn’t serious. You ask why a global average of temperature could be used as a reasonable metric. The answer, quite simply, is it’s a global average of temperature. The higher the average is, the higher the energy in the system must be. There are some confounding factors, but they are quite small. For example, the heat capacity of air is not constant across the globe, so anomalies aren’t directly comparable. However, the difference in heat capacity is, at its most extreme, 2-3%. Even if you include all the possible confounding factors, you still wind up with far less uncertainty than is introduced through spatial weighting. All in all, there just isn’t enough of a difference in locations to make a global average be a bad metric.

        As for your second question, the global average temperature represents an approximation of the amount of energy at the Earth’s surface. It isn’t perfectly precise, but it is close enough to be a useful metric.

      • Brandon Shollenberger | October 27, 2011 at 12:10 am |

        “The higher the average is, the higher the energy in the system must be.”

        That is utterly false. I won’t even argue it. This is freshman level.

        “There are some confounding factors, but they are quite small.”

        More assertion?

      • WebHubTelescope | October 26, 2011 at 10:04 pm |

        “If we bring these together keeping it insulated from the surroundings, the temperature will be somewhere between T1 and T2.”

        Remember, we are looking at delta-temperatures. There can be both negative and positive values. Even if there were no change at all in the average, the global heat content could have gone up, or it could have gone down.

        “You can think of some degenerate or pathological cases, involving vastly different pressures, volumes, and molar count of gas molecules, but that is not what we are talking about here.”

        How so? Are you suggesting that the Earth itself is overall at uniform temperature and surface level heat capacity? I think not.

        “The value will more than likely be around T, which happens to be the average of the two temperatures.”

        This is mere assertion.

      • Brandon Shollenberger

        Bartemis, if what I said is so obviously false, it would behoove you to indicate why it is false. You won’t convince anyone to agree with you by telling people they are wrong while saying you won’t explain how they are wrong. This is especially true if you claim they are wrong in some obvious way.

        All your current approach can possibly do is alienate anyone who doesn’t agree with you wholeheartedly.

      • Brandon Shollenberger | October 27, 2011 at 11:39 pm |

        “it would behoove you to indicate why it is false.”

        I did so in the post to WebHubTelescope just above. But, if you want an explicit example…

        Suppose a temperature metric proportional to actual delta-energy content were given by dT = 0.1*dT1 + 0.9*dT2. Suppose dT2 = -1 and dT1 = +1.1. Then, dT = -0.79, but the average (dT1 + dT2)/2 = +0.1. Total energy content went down, yet the straight average of temperature went up.

        Yes, it is an extreme example, but shows clearly that an increasing average of nonhomogeneous temperature data points does not necessarily indicate increasing retained energy. Moreover, without knowing the actual weighting which would give a proper measure, there is no simple or certain way to relate the average temperature to total energy content.

        This is precisely why Pielke, Sr. has long been advocating ocean heat content as a more appropriate measure of climate change.

      • (dT1 + dT2)/2 = +0.05, obviously, I meant to say.
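
For the record, the arithmetic in that counterexample checks out (with the corrected +0.05):

# Energy-weighted change vs. straight average, using the numbers exactly as given.
dT1, dT2 = 1.1, -1.0
energy_weighted = 0.1 * dT1 + 0.9 * dT2      # proportional to the delta-energy content
straight_average = (dT1 + dT2) / 2
print(energy_weighted, straight_average)      # -> -0.79 and +0.05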

      • Other Bart

        But, that is the average of a homogeneous data set. The heat capacity of a single specific location should be effectively constant over a short span of time, and equal weights in a temporal filter for data points covering a uniform daily time interval are appropriate.

        It is true homogeneity is valuable. It’s the gold standard. We measure reliability of results as compared to homogeneous conditions.

        Other conditions are going to have issues, and one must be careful to make valid choices. However, it is faulty logic to say that without homogeneity we have nothing meaningful.

        But, what does the spatial average of nonhomogeneous data with varying inter-measurement relationships over the time span of interest mean?

        By itself, if that were all we had or used, it could mean nothing.

        However, we have a hologram of information about conditions and we have statistics that do reveal strong indications — too strong by far to be random — of consistent and significant underlying meaning.

        If we had this nonhomogeneously grouped data and varying relationships and a coincidental statistic, that could mean nothing, too, if we express invalid choices in interpretation.

        That’s always the case, in data interpretation.

        That’s why investigators integrate body of knowledge, observation, and theory, why an underlying mechanism is sought.

        Sure, some might pick out that one line of warning from a first year Physics textbook of what not to do with data, for instance to not use unphysical numbers and not use grouped data where the grouping is meaningless, and apply it whenever they see anyone using data that produces a result they don’t like, but then that doesn’t reflect understanding of the material, or of science.

        Science doesn’t reveal the working of the world, it reveals the working of models of the world constructed and communicated between people.

        That’s as close as we can get, we can get no closer, there will always be doubt in the knowledge gained of the world through science.

        It’s bad science to think otherwise, or to lament when offered the appropriate, doubtful, workings of models instead of inappropriate exhortations to believe one has tapped the workings of reality.

      • Bartemis (other Bart)

        Bart R | October 29, 2011 at 2:14 am |

        “However, it is faulty logic to say that without homogeneity we have nothing meaningful…However, we have a hologram of information about conditions and we have statistics that do reveal strong indications — too strong by far to be random — of consistent and significant underlying meaning.”

        You keep dancing around the issue without giving an answer. What is that meaning? Is it purely binary – something is/is not changing? Is there any real world consequence we can quantify with any assurance?

      • Bartemis (other Bart)

        You keep dancing around the issue without giving an answer. What is that meaning? Is it purely binary – something is/is not changing? Is there any real world consequence we can quantify with any assurance?

        The world is full of signals we know by easy measures carry meaning, when the meanings themselves are unclear.

        I’ve given, or iterated, the answer as I interpret it, and will clarify by saying global temperature rises of longer than 16 years’ duration appear to be a reliable proxy for energy rises in the shell between the bottom of the biosphere and the top of the atmosphere, ceteris paribus.

        The complex climate system appears to use feedbacks to roughly maintain the ratio of temperature (globally) to energy in the climate, given enough time for mixing and process transformations.

        This total energy of the climate is impossible to directly measure, however thermodynamics tells us it’s there (along with telling us things about its enthalpy and entropy and other information about distribution and tendencies and so forth), so we find a proxy useful.

        Globally averaged temperature acts statistically in a manner consistent with that proxy function, if the time scale is in the right span (not too little, not too long), and ought by the principles of Physics act this way, once variances due other known mechanisms are accounted for.

        Fortunately, the globe is so huge and the amounts of energy so profoundly large compared to ‘tiny’ influences like solar variability, volcano and tectonic incidents, and so forth, it takes only about thirty to sixty years to reliably obtain a proxy signal.

        We can quantitatively claim, based on the BEST 0.911C rise figure, reliable evidence that double that 0.911C of heat energy (accounting for thermomechanical effect) is now distributed in the dynamical biosphere/climate system, driving processes at a higher energy state overall, although of course this higher energy level is not homogeneously distributed.

        What that means? No clue. I imagine it’s like trillions of (but not all) little feet stepping on invisible accelerator pedals of the worlds’ bumpercar gridlock in weather and biology on the schedule of the CO2 emission of the fossil industry.

      • Bart R | October 29, 2011 at 10:04 pm |

        “…appear to be a reliable proxy for energy rises in the shell between the bottom of the biosphere and the top of the atmosphere…”

        Appear to be? Based on what? Do you have an independent energy measure? No, because you say: “This total energy of the climate is impossible to directly measure”. So, how do you know? Recall my example above. It is entirely possible to have the global mean temperature measurement go UP, whilst the total energy content goes DOWN, so the statement “thermodynamics tells us it’s there” smacks of hubris, and is generally false.

        You say “it takes only about thirty to sixty years to reliably obtain a proxy signal”. Says who? I and many others maintain that there is a readily discernible approximately 60 year natural cycle in the global temperature pseudo-metric. If you have a 60 year cycle, the worst possible time span to look at data is 30 years, as you are alternatingly trending along the maximum, and subsequently the minimum, natural variation. And so, we have had alternating Global Warming and Global Cooling scares since the late 19th century every 30 or so years, as relentlessly documented in newspaper headlines over the period.
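
The 30-year-window point is easy to illustrate with synthetic data: a small 60-year oscillation on top of a slow rise makes successive 30-year trends swing well above and below the underlying rate. This is a toy series, not a claim about the real record:

# Fit 30-year trends across a synthetic series: slow rise plus a 60-year cycle.
import numpy as np

years = np.arange(1880, 2011)
cycle = 0.15 * np.sin(2 * np.pi * (years - 1880) / 60.0)   # 60-yr oscillation, C
trend = 0.005 * (years - 1880)                              # 0.05 C/decade background
series = trend + cycle

print("underlying rate: 0.05 C/decade")
for start in (1895, 1925, 1955):                            # successive 30-yr windows
    w = (years >= start) & (years < start + 30)
    slope = np.polyfit(years[w], series[w], 1)[0]
    print(start, "-", start + 29, ": fitted trend =", round(slope * 10, 3), "C/decade")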

        Globally averaged temperature says nothing definitive about global heat content. The statement that “We can quantitatively claim reliable evidence … that 0.911C heat energy is now distributed in the dynamical biosphere/climate system” is quite simply false. If you continue to disagree, then we are at an impasse, and there is no point in further discussion. I only hope I have made some inroads with some people reading this. I commend to them Pielke, Sr.’s discussion of this matter on his blog, and his recommendation that ocean heat content is a more reliable metric for assessing climate change.

      • Bartemis

        re: http://judithcurry.com/2011/10/25/best-of-the-best-critiques/#comment-130130

        “Appear to be? Based on what?

        That’d be based on analyses of the data and comparing them to the various hypotheses. The hypotheses that best fit the data is what appears to be the explanation.

        Do you have an independent energy measure? No, because you say: “This total energy of the climate is impossible to directly measure”. So, how do you know? Recall my example above. It is entirely possible to have the global mean temperature measurement go UP, whilst the total energy content goes DOWN, so the statement “thermodynamics tells us it’s there” smacks of hubris, and is generally false.”

        This example? You mean, you were serious?

        “Suppose a temperature metric proportional to actual delta-energy content were given by dT = 0.1*dT1 + 0.9*dT2. Suppose dT2 = -1 and dT1 = +1.1. Then, dT = -0.79, but the average (dT1 + dT2)/2 = +0.1. Total energy content went down, yet the straight average of temperature went up.”

        Congratulations.

        By blasting past Occam’s Razor (and Einstein’s amendment) you have successfully proven that temperature — which is always a proxy for energy (temperature itself is conventionally a proxy, derived by measuring the volume of a liquid) — is meaningless. Your argument works as well at any scale and in any system as in climate.

        I’ve seen it before, in a second year Physics course concentrating on Thermodynamics, in the cautionary sense of “don’t fall into this trap.”

        So, don’t fall into that trap.

        Also, hubris. That word. I do not think it means what you think it means. Perhaps you’re thinking of Cassandra Complex?

        You say “it takes only about thirty to sixty years to reliably obtain a proxy signal”. Says who? I and many others maintain that there is a readily discernible approximately 60 year natural cycle in the global temperature pseudo-metric. If you have a 60 year cycle, the worst possible time span to look at data is 30 years, as you are alternatingly trending along the maximum, and subsequently the minimum, natural variation.”

        Yeah.

        I’ve looked at the 60 year discernible natural variability claims. And their 30 year variants (and 11 year, and 22 year, and 4 year), and I think the 60-year patterns are pretty good (about 70% confidence) if taken as ensembles of about three ergodic trends that appeared to synchronize end-to-end as a larger ergodic pattern. At least, that fits the data on the past 150 years better than any other hypothesis I looked at.

        Whatever else, the ergodic trends don’t prevent separating signal from noise (starting at about 17 years long), and progressing to about 30 years before the signal allows us to begin ignoring those 2-15 year natural ergodic trends that are there and concentrate on the multidecadal-to-century-or-more trend. Since that is essentially only rising on the modern record, and we can dismiss the prior record for now as too imprecise, we don’t have a length of ‘cycle’, and the data matches acyclic hypotheses on the timespan better so far as we can tell.

        ” And so, we have had alternating Global Warming and Global Cooling scares since the late 19th century every 30 or so years, as relentlessly documented in newspaper headlines over the period.

        So, your evidence is newspaper headlines?

        Globally averaged temperature says nothing definitive about global heat content. The statement that “We can quantitatively claim reliable evidence … that 0.911C heat energy is now distributed in the dynamical biosphere/climate system” is quite simply false.”

        And we come full circle.

        I look at a thermometer outside my window; it doesn’t tell me what the temperature in my neighborhood is .. but after some time, I’d find that pretty useless and move it and check it against experience, and repeat until I had confidence in it. At that point, I’d be able to tell someone who claims my thermometer can say nothing about the temperature in my neighborhood how much evidence there actually is for his view, in my experience.

        I look to Physics, to Thermodynamics, Stoichiometry, Optics, and a dozen other disciplines to inform me about what the temperature may mean, although it is a challenge and I find many things I never expected, and end up somewhere different than I started.

        You, Other Bart, it seems look to people who agreed with you from the start, and are comfortable holding fast to your beliefs regardless of the answers offered to the questions you pretend to pose.

        “If you continue to disagree, then we are at an impasse, and there is no point in further discussion. I only hope I have made some inroads with some people reading this. I commend to them Pielke, Sr.’s discussion of this matter on his blog, and his recommendation that ocean heat content is a more reliable metric for assessing climate change.”

        Well, good luck with that. It seems a fabric of confirmation bias to me, but if it’s what keeps you happy, science advances one funeral at a time.

      • Bart R | October 31, 2011 at 12:26 am |

        What a mess.

        “The hypotheses that best fit the data is what appears to be the explanation.”

        You have no data! You admitted as much yourself when you said “This total energy of the climate is impossible to directly measure”.

        “Your argument works as well at any scale and in any system as in climate.”

        We’re not talking about any scale. We are not talking about your neighborhood. We are talking about nothing less nor more than the entire Earth!

        From dictionary.com:

        hu·bris
           [hyoo-bris, hoo-]
        noun
        excessive pride or self-confidence; arrogance.

        You can look words up yourself, you know. It means precisely what I meant it to mean.

        “…I think the 60-year patterns are pretty good…”

        “Since that is essentially only rising on the modern record…”

        The above two sentences are in conflict with one another.

        “…we can dismiss the prior record for now…”

        And, make policy decisions affecting billions of people based on hear no evil, see no evil? Really?

        “So, your evidence is newspaper headlines?”

        So, my evidence is evidence? I don’t even know how to respond to that.

        “I look to Physics, to Thermodynamics, Stoichiometry, Optics, blah, blah, blah”

        Translation: I claim the mantle of Science. Cringe and scrape before me, varlet. What meaningless chest thumping…

        “…are comfortable holding fast to your beliefs regardless of the answers offered to the questions you pretend to pose.”

        I am distinctly uncomfortable, because our best and brightest have abandoned the Scientific Method, and are reduced to handwaving and “proof” by assertion, with a hefty dose of ad baculum. Like your response here.

      • Other Bart | October 31, 2011 at 2:28 pm

        You have no data! You admitted as much yourself when you said “This total energy of the climate is impossible to directly measure”.

        Oh. Postma’s blind spot.

        I have no data? We all have reams of data, now with BEST and WoodForTrees and, y’know, I hear that Google thing is coming along nicely.

        However, no one, not a single one of us, not with BEST nor WoodForTrees, nor Google has the direct measure of the real total energy of any system. It doesn’t exist. It can’t be done. It’s all proxy and calculation, supposition and theory.

        It’s a real, physically meaningful quantity we just cannot have direct knowledge of.

        Claiming that the proxies, calculations, suppositions and theory of climate — built with exactly the same methodological approach as, and in most ways significantly higher care than, the proxies for the energy of systems used across multiple fields, whether engineering or architecture, industrial chemistry or farming — are somehow meaningless asserts, by that selfsame logic, that all such fields, the very mold and weave of the modern world, are baseless.

        So, I have to consider your argument failed on this foundation. We accept the world around us within our common experience is real, or we babble.

        “Your argument works as well at any scale and in any system as in climate.”

        We’re not talking about any scale. We are not talking about your neighborhood. We are talking about nothing less nor more than the entire Earth!

        Yeah, what special pleading for the case and sake of only and exactly temperature in climatology are you making, again? (Which, by the way, invalidates your natural variability claim just as much as it invalidates any other claims, if allowed to stand.)

        Why should your arguments be restricted to special application to climate?

        If you make a claim of a general principle of Physics, it applies to all temperatures in the physical world, where conditions are the same, or it doesn’t apply to the single case you tailor it to.

        I regard your argument, as it is applicable only to the one case that matches the needs of your agenda, as invalid.

        Alternatively, if you do wish to apply it over the entirety of all temperature measurements everywhere, I believe that properties of such measurements would, over time, reveal whether your hypothesis were valid or not.

        Do you have any proof of your needlessly complicated assumptions about temperature from any field other than climate?

        Because otherwise, I’ll be happy to be guided by Occam’s Razor.

        “…I think the 60-year patterns are pretty good…”

        “Since that is essentially only rising on the modern record…”

        The above two sentences are in conflict with one another.

        Well, exactly.

        60-year ‘natural variability’ patterns — made up of strings of 2-15 year (apparently ergodic) patterns that, each and together, amount over that 60 years to an order of magnitude smaller amplitude than the longer-than-15-year ‘non-natural variability’ rising signal that can be detected in the data and separated from the oscillation — are indeed in conflict.

        As the periodic-like trends are an order of magnitude smaller, and non-cumulative, the order of magnitude larger, cumulative, rising signal is winning on periods that are longer than 15 years.

        We specify century-scale because, well, all we have is 200 years, and the first 70 or so years in the dataset are pretty weak soup.

        “…we can dismiss the prior record for now…”

        And, make policy decisions affecting billions of people based on hear no evil, see no evil? Really?

        That’s called Occam’s Razor, if you want to put it that way.

        I generally commend making policy decisions based on sustainably delivering the most good to the most people in the least time.

        I generally see policy decisions based on unsustainably taking the most good to the few, all the time, unless caught. Is that the system you prefer?

        “So, your evidence is newspaper headlines?”

        So, my evidence is evidence? I don’t even know how to respond to that.

        Uh, newspaper headlines like in the Daily Mail comprise your evidence?

        Yeah, respond to that by getting some perspective, Other Bart; if you’re going to be calling newspaper headlines evidence and be dismissive of BEST, you’re looking through the wrong end of the telescope.

        “I look to Physics, to Thermodynamics, Stoichiometry, Optics, blah, blah, blah”

        Translation: I claim the mantle of Science. Cringe and scrape before me, varlet. What meaningless chest thumping…

        Ah yes, the “Books Bad! Burning Good!” argument.

        You’ve been seen to claim the mantle of Science plenty, Other Bart.

        Or do you and others who believe in natural variability disavow G&T’s discredited Thermodynamics, Lord Monckton’s disgraced Stoichiometry, Lord Lawson’s disingenuous Economics?

        Please decide, are you anti-science, or just against it when it says something that runs counter to your preconceived notions?

        I am distinctly uncomfortable, because our best and brightest have abandoned the Scientific Method, and are reduced to handwaving and “proof” by assertion, with a hefty dose of ad baculum.

        The cudgel?

        Other Bart, we’re all of us mortal, and will all succumb to old age or whatever else the future unfolds for us.

        I have no need to cudgel, only to know that in time, you Bart, and I, Bart, will be worm food, and posterity will kick off the confirmation biases dragging science down on our watch.

      • You are just babbling incoherently, now. I cannot debate such irrationality.

        Bottom line: Temperature is effectively a measure of heat energy density. It is a differential quantity and, as the differential quantity departs from covering a differential volume, it becomes less and less accurate. At best, an average can give you a bound on the properly weighted integration, which becomes progressively larger as the elemental volume increases. By the time you have reached the scale of the entire Earth, it has become virtually meaningless.

        You cannot see this, because you do not really understand the concept. OK. You’ve made that clear. Now, run along and go play in your own sandbox.

      • I’ve looked at the 60 year discernable natural variability claims. And their 30 year variants, (and 11 year, and 22 year, and 4 year) and I think the 60-year patterns are pretty good (about 70% confidence) if taken as ensembles of about three ergodic trends that appeared to synchronize end-to-end as a larger ergodic pattern.

        Misusing the word ergodic. Ergodic means that you are visiting each of the states in the state space. If we want to see something closer to the ergodic limit of natural variability, you would refer to the data from the Vostok ice cores. In that case we seem to follow a range of 11 degree C temperature swings and 120 PPM CO2 concentration swings over a long time span. Today we have changed the state space by adding another 100 PPM to the CO2 concentration and we have no idea what the new excursions will be as we have not experienced this for the past several hundred thousand years. Methane is also up in concentration, and it is a serious scientific question as to what the outcome of elevated levels of these gases will be, and not something that you can cover with FUD.

        Jeez, what would you say if the average amount of cloud cover had gone up by 40% in the past 150 years and it was rising with no end in sight? I wouldn’t sweep that under the rug by pointing instead at natural variability.

        Bottom line: Temperature is effectively a measure of heat energy density. It is a differential quantity and, as the differential quantity departs from covering a differential volume, it becomes less and less accurate. At best, an average can give you a bound on the properly weighted integration, which becomes progressively larger as the elemental volume increases. By the time you have reached the scale of the entire Earth, it has become virtually meaningless.

        You cannot see this, because you do not really understand the concept. OK. You’ve made that clear. Now, run along and go play in your own sandbox.

        Temperature is an absolute quantity that measures the amount of disorder in the system, and is defined solely in the context of statistical physics probability distributions (i.e. MB, FD, BE). At 0K, all particles in the system are perfectly ordered as nothing gets agitated via thermal energy. At the scale of the earth, certainly we can make claims to the amount of disorder that exists; the fact that it is not spatially homogeneous is a completely different issue. We can’t make blanket statements that an average amount of thermal disorder can’t be estimated because of some definition of temperature. It is a convenient shorthand that works well for intuition, and as long as the definition is stable, you can use it to measure relative differences. That is where the relative or differential aspect fits in.

        The point where you tried to fit in Simpson’s Paradox is very very weak, and I pointed out that involving vastly different pressures, volumes, and molar count of gas molecules does make a difference in computing an average but we are talking about known characteristics of the atmosphere and ocean here so we should be able to maintain a stable metric for monitoring change.

        I am sorry if this is sounding like a battle between concern trolls (myself included), but I think it is a serious scientific study which will exist independent of the politics involved.

      • I do not know what you mean by “absolute”, but temperature is an intensive variable. And, you are still nowhere near saying what the average means. Pace Bart R., a 1 degK rise in average surface temperature does not mean there is a 1 degK equivalent increase in heat energy near the surface. You have to ask, equivalent to what? What is the scale factor which gives you units of energy? Is there a unique, physically meaningful one which holds broadly across the Earth’s surface? (answer: no)

        You can worry that there is likely a change from the past, but this measure alone does not tell you so in definitive terms. More importantly, it has no predictive value as to the consequences of that change.

      • And, I am not saying this is all unknowable, so we should just shrug and forget about it. I am saying it is a poor metric, and there are better ones available which should be the focus. Pielke’s recommendation of ocean heat content is one.

      • I do not know what you mean by “absolute”, but temperature is an intensive variable.

        You are kidding right? I thought you knew something about physics. In any math physics interpretation, the absolute value of temperature is all that matters. The changes relative to that absolute value are what are important in any statistical mechanics or thermodynamics formula. It boggles my mind that you can demonstrate an understanding of the distinction between an intensive and extensive variable but miss the idea of an absolute scale.

        And, you are still nowhere near saying what the average means. Pace Bart R., a 1 degK rise in average surface temperature does not mean there is a 1 degK equivalent increase in heat energy near the surface. You have to ask, equivalent to what? What is the scale factor which gives you units of energy? Is there a unique, physically meaningful one which holds broadly across the Earth’s surface? (answer: no)

        The scale factor is the Boltzmann constant. Like I said we are measuring the degree of disorder, and temperature plays into the Bose-Einstein statistics; this shows the relative abundance of energy states of radiated photons (i.e. bosons) and since photons have wave properties via Planck’s constant this leads to the Stefan-Boltzmann relation. Voila, we can estimate the energy budget balance for the earth. Absolute temperature plays a huge role in this and the fact that it is an intensive variable makes the first-order energy balance come out so cleanly. There is no “total heat content” in the S-B equation!

        You can worry that there is likely a change from the past, but this measure alone does not tell you so in definitive terms. More importantly, it has no predictive value as to the consequences of that change.

        This is the prediction path, as CO2 modifies the S-B.

        Bart R has a much better grasp on physics than this Bart.
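
For reference, the zeroth-order radiation balance being invoked above works out as follows (standard textbook numbers; nothing here is specific to BEST):

# Effective radiating temperature from a simple Stefan-Boltzmann balance.
S0 = 1361.0        # solar constant, W/m^2
albedo = 0.30
sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

absorbed = S0 * (1 - albedo) / 4           # incoming flux averaged over the sphere
T_eff = (absorbed / sigma) ** 0.25
print("absorbed flux:", round(absorbed, 1), "W/m^2")
print("effective radiating temperature:", round(T_eff, 1), "K (about -18 C)")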

      • WHT

        “Misusing the word ergodic. Ergodic means that you are visiting each of the states in the state space. If we want to see something closer to the ergodic limit of natural variability..”

        I didn’t invent the usage as applied to oscillations, however it is used correctly here.

        It may help if you replace “state” with “wavelength” or “frequency” instead of “amplitude”, and remember words like “eventually” and “as Time approaches infinity.”

        ie, “Ergodic means that you are visiting each of the wavelengths in the state space eventually as Time approaches infinity.”

        As for our perceived trollishness, I assure you, I have high regard for Other Bart and see our exchange merely as frank (perhaps more so on my part, as I didn’t start out by playing some Socratic game of asking a question in the hopes of trapping and teaching poor fools who were silly enough to disagree with me) and energetic.

        Look at “babbling incoherently,” a phrase not meant as trollish but only an incidental insult if one happened to be caught up in the practice. I may even be babbling, possibly incomprehensibly, but for me to be incoherent, I’d have to be saying that I believed two things that were not mutually coherent, for example that measuring Temperature is impossible because I can construct a far-fetched scenario, and also that Temperature exhibits Natural Variability (how could I believe the second, if I subscribed to the first?)

      • WHT

        “Bart R has a much better grasp on physics than this Bart.”

        Aha! I’d missed this gem.

        You _do_ understand incidental insult, after all!

        Excellent burn, sir.

        Well done.

      • “The scale factor is the Boltzmann constant.”

        Good lord, you have no idea what you are talking about. The Boltzmann constant is only used at the atomic level. Read up on the concept of thermal mass before you… say something regrettable.

      • ” I assure you, I have high regard for Other Bart…”

        Well, thank you, even if the compliment is somewhat nullified by later words. I would return it, but I really do not know your commentary well enough to say one way or the other. But, you and WebHub at this time appear to be playing into the old joke about spherical cows. You guys are extrapolating concepts dealing with isolated, homogeneous, idealized systems to something which is entirely other.

        “I didn’t start out by playing some Socratic game of asking a question in the hopes of trapping and teaching poor fools who were silly enough to disagree with me) and energetic.”

        Neither did I. I came hoping someone could give me logical answers, and explanations of why rigorous treatment was being bypassed, and perhaps some estimate of the error bounds in doing so. What I found was lack of understanding, hand waving, misapplied concepts, and misinformed arrogance. Gaia help us if this is really how these concepts are perceived at the highest levels.

      • “The Boltzmann constant is only used at the atomic level.”

        Or, at the microscopic level anyway, with monatomic materials.

      • “There is no “total heat content” in the S-B equation!”

        Yes, there is. Do you know how the S-B constant is derived?

      • Bartemis, you are really out of your league here. The Boltzmann constant is directly related to the gas constant, which is clearly a macroscopic concept:
        http://en.wikipedia.org/wiki/Gas_constant

        “There is no “total heat content” in the S-B equation!”

        Yes, there is. Do you know how the S-B constant is derived?

        Yes I do, and total heat content is not involved because the units are in power per area. You could have a black body the size of a bowling ball or smaller, and the derivation would come out the same.

        I will act like Fred Moolten and recommend you read a good book on statistical mechanics. I had the good fortune of learning the topic via Reif’s classic textbook, Fundamentals of Statistical and Thermal Physics.

      • WebHub – I am sorry to be the one to tell you, but you are utterly clueless and have no idea what you are talking about. I am embarrassed on your behalf.

        The Boltzmann constant is, in rough terms, the amount of energy per degree of freedom in a particle’s structure. It is not a uniform constant of proportionality between heat capacity and temperature for any and all substances. At all. That is why we have tables of heat capacities.

        I can see arguing is no good, because you are so smug in your ignorance. Good luck with that.

      • Other Bart

        “You have to ask, equivalent to what? What is the scale factor which gives you units of energy? Is there a unique, physically meaningful one which holds broadly across the Earth’s surface? (answer: no)”

        Well.. you’re right. Sort of.

        You have to ask, equivalent to what and on what scale and span of time? What is the scale factor which gives you units of energy and what is the span that gives you your ratio? Is there a unique, physically meaningful one which holds broadly across the Earth’s surface? (answer: no, for instantaneous measurements; yes, over timespans ‘long enough’.)

        I get that you’re very capable at application of commonplace Physics on conventional spans.

        Adjusting your methods to deal with questions that do not fall within “near instantaneous” and “confined to very local spaces” seems a problem for you.

        It is excellent to be a skeptical inquirer and to challenge ideas; it requires evenhandedness, willingness to abandon one’s own biases, and excellent observation of the principles one seeks to use to interrogate the subject ideas to carry it off successfully.

        You’re missing some of those here and there.

      • “Adjusting your methods to deal with questions that do not fall within “near instantaneous” and “confined to very local spaces” seems a problem for you.”

        It is not a problem for me, per se. It is a problem for the concept of temperature.

        Energy is the quantity of interest. Energy is what allows work to be done. Temperature is merely a convenient means of bookkeeping it under particular circumstances. Globally averaged temperature, over regions as disparate in heat capacity as deserts, prairies, forests, jungles, bodies of water, ice shelves, mountain tops and valley floors… how do you relate such a quantity to energy content?

        And, I don’t mean in terms of handwaving, airy generalities. I mean in rigorous form of equations and error bounds. I do not speak any other language when dealing with science, because science only communicates through rigorous mathematics.

      • WebHub – I am sorry to be the one to tell you, but you are utterly clueless and have no idea what you are talking about. I am embarrassed on your behalf.

        The Boltzmann constant is, in rough terms, the amount of energy per degree of freedom in a particle’s structure. It is not a uniform constant of proportionality between heat capacity and temperature for any and all substances. At all. That is why we have tables of heat capacities.

        I can see arguing is no good, because you are so smug in your ignorance. Good luck with that.

        The guy Bartemis has forgotten about the Ideal Gas Law,
        PV = nRT = NkT, where the gas constant R is equal to the product of Boltzmann’s constant and Avogadro’s constant N/n (the number of particles per mole). That connects the micro to the macro.
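
The micro-to-macro link R = k * N_A can be checked numerically against the CODATA values shipped with scipy:

# Quick check that the gas constant equals Boltzmann's constant times Avogadro's number.
from scipy.constants import k as k_B, N_A, R

print("k_B * N_A =", k_B * N_A, "J/(mol K)")
print("R         =", R, "J/(mol K)")   # the two agree (exactly, in the post-2019 SI)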

        I also made a remark on how k is part of Bose-Einstein statistics, which last I looked also has ideal properties, since photons are indistinguishable. This also ties the micro to the macro as we solve for the Stefan-Boltzmann law.

        So you try to bring in liquids and solids, which I agree clearly have variable heat capacities, but I don’t know how you got on that detour because we all know that the Stefan-Boltzmann derivation for black bodies has absolutely nothing to do with the physical properties of the black body, such as heat capacity and density. S-B is purely a radiation balance argument.

        We can try to solve for some response time effects with the ocean serving as a buffer and use the heat capacity in that fashion, but the first order derivation is really the energy balance at the top of the atmosphere, where we have no solids or liquids to speak of. That is the point at which the steady-state temperatures will be established. Consider another analogy, a few miles down into the earth, the temperatures get very hot indeed, but how does that affect the steady-state temperature at the top of the atmosphere? It doesn’t because the diffusion of heat is very slow.

        So you are basically suggesting that Pielke Sr’s idea on the heat content of the ocean provides a capacitive buffer as the temperature of the atmosphere tries to reach the steady-state. Fine, I can buy that just like I can buy that CO2 will set the steady state. As everyone realizes, something in the ocean is generating the multidecadal oscillations, but eventually this will have to catch up to the radiation balance warming.

        I still don’t understand why temperature is such a foreign concept to use, as that is what the biota responds to; we don’t respond to heat concentration, and at some point the results have to be turned into a measure of temperature.

        BTW, I can take the insults. The one-liners are actually pretty funny.

      • No single quantity tells about everything. Average temperature is a very good variable for some purposes, but less so for others. It’s really pointless to argue on that, when the question considered is not uniquely known and agreed upon.

        Just a small comment on physics, although it’s out of scope of the thread. Bartemis wrote:

        The Boltzmann constant is, in rough terms, the amount of energy per degree of freedom in a particle’s structure. It is not a uniform constant of proportionality between heat capacity and temperature for any and all substances. At all.

        Just checking the units tells that the above is not right. The quote could be right only if the Boltzmann constant had the unit of energy, but it doesn’t. It has the unit of energy divided by temperature, i.e. the unit of heat capacity. It could be described as the basic microscopic unit of heat capacity of one degree of freedom, not as the amount of energy per degree of freedom.

        Here is another way of looking at the same issue. There’s a natural microscopic unit of temperature, which is obtained, when the Boltzmann constant is set to have the value of one. With that choice the temperature would have the unit of energy and the relative occupation of two quantum mechanical states would be exp(-1) (= 1/e) when the temperature is equal to the separation between the energy levels of the states. In these terms the basic meaning of the Boltzmann constant is the ratio of that unit of temperature to the normally used unit of temperature.
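        Written out, the point about the natural unit is just the Boltzmann factor (standard statistical mechanics, added here only for concreteness). For two states separated by $\Delta E = E_2 - E_1$, the relative occupation is

        $$\frac{n_2}{n_1} = \exp\!\left(-\frac{\Delta E}{k_B T}\right),$$

        so with the choice $k_B = 1$ temperature carries the unit of energy, the ratio equals $e^{-1}$ exactly when $T = \Delta E$, and the measured value of $k_B$ is then just the conversion factor between that natural unit of temperature and the kelvin.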

      • In these terms the basic meaning of the Boltzmann constant is the ratio of that unit of temperature to the normally used unit of temperature.

        Every once in a while, and especially for derivation purposes, somebody will dispense with the Boltzmann constant and just refer to the product kT as a normalized temperature.

      • “Just checking the units tells that the above is not right. The quote could be right only if the Boltzmann constant had the unit of energy, but it doesn’t. It has the unit of energy divided by temperature, i.e. the unit of heat capacity.”

        Yes. I know. I didn’t correct it because my meaning was evident in the subsequent sentences, and I find it annoying when people feel a need to correct obvious typos which anyone who can follow the conversation in the first place can fix themselves. As the French say, n’enculons pas des mouches (roughly: let’s not split hairs).

        Remember, we are talking about surface temperatures here, with most of the measurements being made on land. The heat capacity of cities or rain forests, for example, is generally much greater than that of deserts.

      • “…we don’t respond to heat concentration…”

        That’s effectively what temperature is, so I agree. We respond to available energy. Energy is everything. It’s like when lay people hear someone describe electricity at 100,000 volts, and they think that is likely to kill them. Well, it is, but only because in standard human engineered power systems, there’s a whole lot of available charge to back it up. But, 100,000 volts in and of itself, of course, does not generally refer to an electric potential capable of stopping your heart.

        Why do you suppose 90 degF temperature in LA is so much more comfortable than 90 degF temperature in Atlanta?

      • Bartemis

        ..And, I don’t mean in terms of handwaving, airy generalities. I mean in rigorous form of equations and error bounds. I do not speak any other language when dealing with science, because science only communicates through rigorous mathematics.

        Awfully big talk from a guy who pulls Yule-Simpson as his first argument.

        Globally averaged temperature, over regions as disparate in heat capacity as deserts, prairies, forests, jungles, bodies of water, ice shelves, mountain tops and valley floors… how do you relate such a quantity to energy content?

        I relate such a quantity to energy content by the conventions of ceteris paribus, reversion to the mean, and comparing the size of marginal changes to the error bars on the temperature observations.

        If you have a hypothesis that explicitly and validly argues these standard conventions are inapplicable, isn’t there any onus on you to prove it showing the rigorous mathematics?

        (Answer: no, there isn’t; based on your recent use of mathematics in this thread, it’s not very likely to be something I’d use as a starting point.)

        Where many “deserts, prairies, forests, jungles, bodies of water, ice shelves, mountain tops and valley floors” are roughly in the same configuration over any particular 17-year period globally, within the scale of error bars on temperature, the nonhomogeneity is a wash.

        In such a case we’re comparing like with like, 17 years ago to now: the globe as it was in 1994 to the globe as it is in 2011.

        Where there are differences in distribution of “deserts, prairies, forests, jungles, bodies of water, ice shelves, mountain tops and valley floors”, they’re either directly anthropogenic changes or demonstrably feedbacks from climate change, on the 17-year scale, and the temperature change reflects the influence of humans on the energy content of the biosphere.

        I’m not making claims about the actual energy quant; I don’t need to produce equations to represent it. I’m confident that rate of change, change of profile, and to some degree direction of change, are more important. Which I can take directly from the extant temperature records such as BEST, as they are patently a proxy for what I seek.

        So, while you’re seeking to answer an important question of what the physical meaning of the measure is.. you’re asking it about the wrong measure, or at least about too few aspects of that measure. It seems this is intentional.

        Like many things, temperature isn’t only a proxy for energy content. It also is a measure of disorder, and it’s temperature itself, for temperature-dependent reactions and equations, so my particular answer to your question isn’t the only one.

        As we’re speaking of climate, not weather, while we use units based on limit of temperature as time approaches instantaneity, we still rely on minimum durations spanning well over a decade, just as it is meaningless to measure the surface area of a field on the sub-millimeter scale counting every fractal blade of grass, for the purposes of real estate.

        Likewise, temperature isn’t the only measure of human influence on climate, and I’m not only interested in energy changes.

      • “Awfully big talk from a guy who pulls Yule-Simpson as his first argument.”

        A) Yule-Simpson is a valid statistical phenomenon. I had to look it up, myself. You seem to think I am regurgitating some talking points from somewhere. Paranoia, the destroyer.

        I read a comment on WUWT that treating average temperature this way was not kosher, so I started thinking about it and realized the guy had a point. I figured, well, it could be used perhaps to bound the energy, but search as I might, I could not find that anyone had addressed this. And, I started thinking about the extremely wide variations in heat capacities of various regions of the Earth, and became very concerned that the error bars would, in fact, be huge. So, I asked a question in this forum where I thought someone might have more information. The results have been underwhelming, to say the least. Or, perhaps, more along the lines of, “Holy Toledo! Does anybody have a justification for this?”

        B) Y-S is inapplicable here. I am not talking about weighting data by sample size, but by heat capacity to get a measure of energy, which is a real, absolute quantity.

        Energy is the fundamental quantity. Temperature only tells you which way it flows. Temperature alone is generally meaningless. It is not sufficient to calculate entropy, either, so it is not an objective measure of “disorder”. In specific situations, you can use temperature to bound these variables, but only if you know the range of heat capacities.

        It does not matter if the heat capacities are constant over time (which, incidentally, they aren’t). You still have to factor them in to get a measure of energy.

      • Thread continues here.

      • Other Bart

        Here two Barts begin to converge.

        Have some past treatments on both sides been underwhelming? Including sophomoric approaches to key questions?

        Absolutely.

        And it is proper to be skeptical of such inadequacies where pertinent.

        However, there is plenty of valid reasoning and plausible evidence too for the pieces of the puzzle that can be put together.

        Those pieces in collection, the partial picture they form, suffice to move the state of our knowledge from ignorance to sufficient to recognize there have been changes, and to have some sense of the probable shape of those changes.

        Perhaps there is, somewhere out there I am unaware of, a clearer compilation of the body of knowledge of climate including most or all of what you ask.

        If so, it’s more than I need to know, though I’d like to see it in any event, pertinent or not.

        Still, a complete accounting of all energy in the whole system by temperature and heat capacity of every component is a very strict demand, given there are ways to statistically affirm the results we have for the purposes we need.

        Just because we can ask and answer the question of physical meaning –and ought — doesn’t mean we need to know the exact measure of it.

        Science is a poor name for the field. Real knowledge exists only in fiction. “Aestimo” (reckoning) is a better word for it.

      • P.E.

        “It’s an old controversy, whether an average of an intensive variable is as meaningful as an extensive one, such as enthalpy.”

        I’ve never really thought of this as a controversy so much as an elementary dilemma presented in the first week of introductory courses on measurement as a precept to be remembered in every application of measurements.

        Are your units expressed in terms that are fit to the use you put them?

        The concept of “unphysical” measures (not restricted to thermodynamics, by the way, but also applicable in almost every field where measurement is used) might be brought up in criticisms and controversies in specific cases from many fields, but it can hardly be controversial what units a measurement is expressed in, or whether those units are system dependent.

        The criticism itself is only warranted where measurements are used in ways that are not fit to their physical meaning, and also where they are not a valid proxy for a meaningful (but perhaps more difficult to measure) effect.

        If you asked someone how hot his vacation was, and he replied by adding the daily temperatures and telling you that figure with no other information, that would be an unphysical number. Take the average by dividing that unphysical number by the length of the vacation in days, however, and you may find that average of use in addressing your question.

        If you are told the global temperature has risen 1.5C, with no other information, that’s an extremely unphysical number, which is simply not what you’re being asked to consider in isolation.

        Supplement the figure with a start date, use it for comparison with like lengths of time, and correlate or eliminate confounding variables (temperature can be a proxy for energy if handled correctly), and you may have a physically meaningful number: a representation of how rapidly, compared to the past, the energy balance of the atmosphere has changed, and in what direction. With enough additional information, you can come to some understanding of the extremity of this change and how probable it would be, based on past estimates, absent a new cause such as anthropogenic influence.

        There might be better measures, but they aren’t really available to us, and the measures we have are not nothing. They amount to a level of confidence generally accepted in most hypothesis testing, and are consistent broadly across multiple types of tests with AGW, while being far less consistent with natural variability.

      • “Take the average by dividing that unphysical number by the length of the vacation in days, however, and you may find that average of use in addressing your question.”

        But, that is the average of a homogeneous data set. The heat capacity of a single specific location should be effectively constant over a short span of time, and equal weights in a temporal filter for data points covering a uniform daily time interval are appropriate.

        But, what does the spatial average of nonhomogeneous data with varying inter-measurement relationships over the time span of interest mean? See Bart | October 28, 2011 at 1:15 pm | above for more elucidation.

      • “Consider, as another spatial average, the location of the center of gravity of an aircraft.”

        Apparently, you would compute that CG as the average position of all components, regardless of mass distribution. That is effectively what averaging temperature does.

        This is not hard to understand.

        No, I would compute it with respect to the mass distribution. Averaging the temperature, à la Rohde et al., adjusts for the density of the thermometers.

      • Density of thermometers is only part of the equation. The heat capacity of a given region is the mass analog. And, if those values are changing, you have an unholy mess to determine an appropriate baseline weighting.
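        To make the weighting point concrete, here is a small illustrative sketch in Python (hypothetical numbers throughout; it is not a claim about what either weighting does to the actual BEST record):

        # Illustrative only: two hypothetical regions with the same observed warming
        # but very different heat capacities per unit area (numbers made up).
        import numpy as np

        dT   = np.array([0.8, 0.8])       # observed temperature change, K
        C    = np.array([2.0e7, 4.0e5])   # hypothetical heat capacity, J/(m^2 K)
        area = np.array([1.0, 1.0])       # equal areas, arbitrary units

        mean_dT   = np.average(dT, weights=area)      # what a plain temperature average reports
        dE_per_m2 = np.average(C * dT, weights=area)  # energy change per unit area, J/m^2
        print(mean_dT, dE_per_m2)  # 0.8 K either way, but the energy change is dominated by the high-capacity region

        # Same idea as a centre of gravity: average position vs mass-weighted position.
        x = np.array([0.0, 10.0])         # component positions along an aircraft, m
        m = np.array([900.0, 100.0])      # component masses, kg
        print(x.mean(), np.average(x, weights=m))     # 5.0 m unweighted vs 1.0 m mass-weighted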

    • The BEST team have focused on mean land temperatures. If the global mean land temperature were to increase by 4K in 30 years, that would put a severe strain on the human ability to maintain an adequate crop supply. It would mean that every year there would be a severe drought in a major agricultural region, and for a few years now and then in all of them simultaneously.

      Despite the complications introduced by the thermodynamicists, it isn’t an empty question. The claim that it isn’t going to happen is a different statement from the claim that it would be meaningless to contemplate.

      Other people like the analogy with rectal or oral temperature: read all the thermodynamics you want, it matters whether it is rising, falling, or staying the same.

      • MattStat

        Charming though your metaphor (and apt), I suggest the metaphor of the Stirling Engine.

        The climate of the globe is full of inequalities due to latitude and rotation and differences between solid and liquid surface and composition and altitude, all inducing those principles that seek equilibrium or lowest energy state or whatnot in competition.

        Each of these inequalities and resultant flows (from high pressure to low, hot to cold, high energy state to low, etc.) is an engine, and in one way or another they are driven by temperature.

        The higher the temperature in this complex of Stirling Engines, the more powerful the flows.

        Some of these flows drive temperatures lower in some regions than they had been.

        Some drive cycles more rapidly or more powerfully, or to take longer routes, or to fold upon themselves, or to go deeper or to be more shallow.

        All of these taken as just an arbitrary group of things with measurements that can be averaged might give the confounding impression of lower overall temperature growth than is real.

        The thermomechanical principle suggests the efficiency of an engine at such translation between heat and motion approaches its limit at 50%.

        The packets (for lack of a better term) of heat or cool, moisture or dry, carried by the motion of these engines will become more variable than otherwise, so there is a second type of confounding variable.

        Both of these effects make global temperature (or regional temperature, or weather, or cloud, etc.) much harder to interpret with regard to the interesting variables.

        The interesting variable is CO2 level above the natural range of 230 +/- 50 ppmv.

        From that one vector, all Risk profiles flow. With increasing CO2 level, Risk increases in every problem space, if only because there is so much ignorance about how complex systems will act over time as CO2 level increases.

        Global temperature is a proxy for the power applied to the Stirling Engines of climate.

        Some of these engines in some of their range of operation act as negative feedbacks.

        More appear to act as positive feedbacks.

        None of them are only one thing.

        Hurricane frequency or power increase causing temperature drop may be a negative feedback, for instance, but it’s also a pretty catastrophic outcome in itself, and it’s also entirely unpredictable, so we can’t even ask “if real” about it. Well, we can ask. We can’t answer it. It’s an Uncertainty in the Risk profile.

      • “If the global mean land temperature were to increase by 4K in 30 years…”

        Nobody is talking about this kind of movement.

      • No they are not. The question was whether such a change would have any meaning. The warning that mean temperature will increase 1 K in 50 years is as meaningful, just not as threatening.

      • As a hypothetical, it produces a vague sense of unease. As to what its physical significance would be, though, there is still no answer. Your prediction of severe droughts as a consequence is really nothing more than guessing.

      • ..increase by 4K in 30 years..

        If you’re not restricting us to consecutive years, I believe you could find plenty of combinations of three or more spans of years adding up to 30 that accomplish a 4K rise in shorter and shorter spans of time since 1800, perhaps as little as a 45 year span lately.

  46. Bart R,

    “A CO2-impact-factor-weighted global temperature, for example, that identified and eliminated from the global temperature things like regional ocean oscillations, aerosols, clouds, water vapor, might be more useful if it could be produced and agreed on.”

    Very well stated. I think that is what most “climate skeptics” are still waiting for, despite Dr. Muller’s bold statement that us dummies got no more good reasons for doubt.

  47. Richard "Heatwave" Berler, CBM

    Out of curiosity, when determining how urban an area is with the aid of satellites, are they looking for pixels of city lights (this wouldn’t work too well in some built-up areas such as North Korea), or are they looking at skin temperature (which can be greatly different from the temperature 2M above the surface)? I question the literature that equates a thermometer site next to a manmade surface such as a parking lot (Anthony Watts refers to this as a class #5 site) with a systematic +5C (+9F) in temperature over a properly sited true air temperature. I gave an example at the AMS Broadcast Meteorology Conference in Miami (2010) of measuring a parking lot surface temperature of 143F with an infrared thermometer while, 3/8″ above the parking lot surface, an unshielded Cooper thermometer (in full sun!) read 105F. My NCDC coop thermometer sited at the edge of the TV station parking lot (class #5 siting) read 92F at the time (the 92F was in excellent agreement with the readings at that hour across south Texas).

    I would agree that preheated air coming out of an air conditioning exhaust that advected to a thermometer with little mixing would contaminate the reading, but the presence of a manmade surface that is above the ambient air temperature several feet away from a thermometer’s location does not seem to be a very important factor. In fact, last Sunday, I was surprised at how discernible the warmth felt on my skin was as I walked past a wall of my house that faced the afternoon sun shortly after sunset. 20′ away from the wall, I measured 86.1F on my Fluke thermocouple. 6′ out, I measured 86.1F. 6″ out from the wall, I measured 86.7F. My infrared thermometer showed a surface temperature of 106F for the wall. My skin is obviously much better at absorbing heat radiated from the wall (and heating up) than the free air between me and the wall. Where do such high (i.e. +9F) estimates of contamination of measured temperature by radiative heat from heated surfaces several or more feet away come from?
    I am not convinced that radiative heat sources have introduced a large enough signal into the record to place the quantity of detected worldwide uptick in temperature in question (be it due to natural variability, GHG, solar causes, feedbacks within the complex climate system or some other cause or combination of causes).
    Another thought… a clear distinction should be made regarding the intent of a temperature measurement. The siting requirements of thermometers intended for climate change studies are a different animal from those of thermometers that would be useful for applied climatological purposes (urban energy usage, architecture, high resolution model runs where a grid square may be urban in nature, etc.).
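    As a rough way to put numbers on the radiative question raised above, here is a back-of-envelope sketch (illustrative only, assuming emissivities near 1 and using the readings quoted above; it is not a siting analysis):

    # Net radiative flux from a hot surface toward cooler surroundings,
    # sigma * (T_hot^4 - T_cool^4), using the readings quoted in the comment.
    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

    def f_to_k(f):
        return (f - 32.0) * 5.0 / 9.0 + 273.15

    wall, air    = f_to_k(106.0), f_to_k(86.1)   # house wall vs nearby air
    lot, ambient = f_to_k(143.0), f_to_k(92.0)   # parking lot vs coop reading

    print(SIGMA * (wall**4 - air**4))     # roughly 74 W/m^2 at the wall face
    print(SIGMA * (lot**4 - ambient**4))  # roughly 210 W/m^2 at the lot surface

    # A thermometer several feet away intercepts only a small view factor of that
    # flux, and convective exchange with ambient air dominates its reading, which
    # is consistent with the tiny differences measured at 6 ft and 20 ft.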

  48. Brandon Shollenberger

    I raised two issues with a paper by BEST in the last topic, and I’ll repeat them here. Both are more minor than the criticisms from Keenan and others, but I think both ought to be addressed. The first issue I have is in regards to table 1 of the UHI paper. In it, it says there are 38,898 total sites used, 16,068 of which are “very rural.” However, the text of the paper gives these numbers as 39,028 and 16,132. The discrepancy is small, only 130 and 64, but as far as I can tell, the paper doesn’t explain it. That is obviously unacceptable.

    The second problem is more troublesome to me, though it also has no real impact on the final results. The UHI paper says:

    The analysis presented here is based on merged monthly average temperatures from the Berkeley Earth Surface Temperature Study dataset. This dataset consists of measurements from 39,028 unique stations, which are merged from 10 preexisting data archives (Rohde et al., 2011).

    Unfortunately, this citation is bogus. The Rohde paper simply doesn’t discuss the dataset the UHI paper relies upon. It uses the GHCN dataset and only refers to the dataset used by the UHI paper by saying:

    In another paper, we will report on the results of analyzing a much larger data set based on a merging of most of the world’s openly available digitized data, consisting of data taken at over 39,000 stations, more than 5 times larger than the data set used by NOAA.

    Neither of these issues is likely to change any results by BEST, and I imagine both came from careless oversights. The first issue probably arose from some step which didn’t get mentioned in the paper. The second issue probably arose from the author not understanding the actual scope of the Rohde paper.

    • Brandon Shollenberger

      I just noticed something relevant to my comment here. The UHI paper makes it clear it is based upon 39,028 unique stations. As I mentioned, it refers to the Rohde paper as the source of its dataset, but it was wrong to do so. I thought it would be prudent to check BEST’s website to see what it has published about the actual dataset. I saw this page, and thought it would be an alternative source. Of course, the page says this:

      The Berkeley Earth team has already started to benefit from feedback from our peers, so these figures are more up-to-date than the figures in our papers submitted for peer review.

      This means it isn’t directly comparable, but it should be mostly the same. Unfortunately, the file with the data for the figures says:

      Results are based on 37633 time series

      That’s 1,395 fewer series than the UHI paper says there should be, and it’s well below what the Rodhe paper says as well. As far as I can tell, there is no explanation provided for the discrepancy. This means not only is the UHI paper’s citation for its dataset wrong, there is no obvious alternative for it.

  49. Siting of thermometers is more important than UHI. A properly located thermometer in a city could avoid much of any UHI effect, but a poorly sited rural thermometer could be greatly affected by nearby heat sources. Furthermore, the most important factor is any changes that may have occurred in the microclimate surrounding specific sensors that may have resulted in localized artificial heating. (It’s interesting to note that there probably would not be an artificial cooling source near a weather station thermometer. So, localized artificial heating would not be offset.)

    The BEST data (fig. 5 of the Averaging Process paper) show what can only be a non-CO2-related warming trend from 1800 to about 1940, then no warming from 1940 to 1980, then a steeper trend from 1980 to present.

    Most weather stations probably changed to digital thermometer at some point in the past thirty to forty years. If digital thermometers require relatively short signal cables, that could have resulted in more thermometers being located closer to artificial heat sources, and that could explain much of the temperature rise from 1980 to present.

    The BEST data show that nearly all the “global” warming since the 1980s occurred in the Arctic, the northern part of Eurasia, and eastern Antarctica. A proper analysis of any artificial heating effect should look at the sensors in those locations. BEST’s paper on U.S. station quality is not helpful because, according to their own data, the U.S. has not warmed as much as other areas. (See fig. 7 of the Averaging Process paper).

    • East Antarctica
      Where dwells the wild warming thing.
      Thermometer maze.
      ==============

    • Most weather stations probably changed to digital thermometer at some point in the past thirty to forty years. If digital thermometers require relatively short signal cables, that could have resulted in more thermometers being located closer to artificial heat sources, and that could explain much of the temperature rise from 1980 to present.

      This seems backwards. The signal from an analog thermometer degrades in proportion to the length of the cable when the reading is in proportion to the voltage on the cable. A pulse-width-modulated (PWM) sensor such as the TMP05B (google it) suffers no such degradation until the cable is so long the pulse edges can no longer be sensed. (I buy these 25 at a time, they’re only $1.80 each from Digikey in those quantities.)

      With the advent of temporal encoding of data of this kind, we should be seeing sensors located further from the building, not closer.
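      To illustrate the point about temporal encoding with a toy sketch (not the actual sensor's transfer function, which lives in its datasheet): the quantity carried down the cable is a set of time intervals between edges, so attenuation and a fixed propagation delay leave the reading unchanged.

      # Toy illustration of why pulse-width (temporal) encoding tolerates long cables:
      # the logger measures intervals between edges, and a fixed propagation delay
      # shifts every edge by the same amount, so the high/low durations (and the
      # ratio a PWM sensor encodes) are unchanged.
      def intervals(edges):
          return [b - a for a, b in zip(edges, edges[1:])]

      th, tl = 34.0, 66.0                     # hypothetical pulse widths, ms
      edges_at_sensor = [0.0, th, th + tl]    # rise, fall, next rise
      delay = 0.7                             # cable propagation delay, ms
      edges_at_logger = [t + delay for t in edges_at_sensor]

      print(intervals(edges_at_sensor))       # [34.0, 66.0]
      print(intervals(edges_at_logger))       # [34.0, 66.0], identical durations
      # An analog, voltage-proportional reading is instead scaled by any attenuation
      # along the cable, so it degrades with length.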

  50. We know it all:

    Parallel air temperature measurements at the KNMI observatory in De Bilt (the Netherlands) May 2003 – June 2005

    http://www.knmi.nl/publications/fulltexts/wr2011_01.pdf
    (LONG LOADING TIME)

    The results indicate that changes in surroundings complicate or impede the use of present-day parallel measurements for correcting for site changes in the past. In a few years’ time, the growth of bushes close to the thermometer screen may seriously disturb the temperature measurements. We quantified the possible effects of sheltering on temperature measurements at five sites. It appeared that, especially in summer, these effects on the monthly mean temperatures may have the same order of magnitude as the long-term temperature trend (about 1.0 C/100yr in De Bilt). However, for most sites the inter-site temperature differences for maximum and minimum temperature have opposite signs. The net effect on the daily mean temperatures is, therefore, small (< 0.1 K).

    In practice, the largest inhomogeneities in mean temperature series may be anticipated in case of relocations from very enclosed sites to more open sites. The renovation of the wasteland area, close to the operational site DB260, had a significant effect on the temperature of DB260. Both the location and shape of the distribution of daily temperature differences are affected.

    The results indicate that the magnitude of the inter-site temperature differences strongly depends on wind speed and cloud cover. In the case of homogenization of daily temperature series, it is important to take this into account. A complication is that for wind speed the largest effects on inter-site night-time temperature differences occur in the range 0.0-1.0 m/s at screen level, thus strongly affecting the minimum temperature. In practice (a) wind speed is mostly not measured at screen level but at heights of 10-20 m (during stable nights, wind speeds at these heights are largely uncoupled from those at screen height), and (b) the measurement uncertainty for small wind speeds is large and often increases with the time during which the anemometer is in the field.

    Another complication for the modeling of daily temperature series is the homogeneity of the time series of the explanatory variables wind speed and cloudiness. More research is needed in this area.

    Improvement of our understanding of inter-site temperature differences may enable the modeling of them. In the case of De Bilt there are certain aspects that are likely important and should be studied further. First, the non-uniformity of the KNMI terrain may affect downstream sites by daytime advection and may cause temperature differences to be dependent on wind direction. It is recommended to study this further by measuring the sensible and latent heat fluxes at several locations at the same time. Second, during night-time conditions, there are two main mechanisms that affect temperature differences between sites: (a) local stability differences, and (b) differences in sky-view factor. Both mechanisms have an opposite effect on night-time temperature differences between sites, and the net result may be a cooling or a warming. The interaction of those two mechanisms is not fully understood and needs to be investigated further to enable the modeling of them. Finally, local differences in soil type and groundwater levels between the sites may affect (apart from advection) the energy balance and may cause differences in observed temperatures.

    • (From the “Parallel air temperature measurements at the KNMI observatory in De Bilt (the Netherlands) May 2003 – June 2005”:)

      In a few years time, the growth of bushes close to the thermometer screen may seriously disturb the temperature measurements.

      That’s an excellent point. I’ve noticed that on very hot days my cat tends to spend more time under bushes, where it seems able to stay cool better, to a surprising degree I might venture to say. So if temperature sensors are being overgrown by bushes we should expect any temperature record based on sensors overgrown in this way to show a decline in temperature.

      Particularly vigorous bush growth over the past decade might explain why the global temperature has plummeted so precipitously over the past decade, leading many to forecast an imminent ice age.

      I am suspicious of such forecasts, which may well be unwarranted alarmism brought about by a failure to compensate for the cooling effect of these bushes that are overgrowing our temperature sensors.

      Can we afford the 47 trillion dollars it will take to ward off an ice age? This money would be more effectively spent on blankets and radiators for the 7 billion inhabitants of the planet. That much would buy nearly $7000 worth of warmies for every man, woman and child on Earth.

      And if as I suspect the forecasts are indeed based on a failure to recognize the cooling effect of bushes, the surprised and delighted recipients of their $7000 windfall will surely find other uses for the cash, rapidly bringing the economy back to the even keel that so pleasantly characterized the latter half of the previous century.

  51. ““The Boltzmann constant is only used at the atomic level.”

    Or, at the microscopic level anyway, with monatomic materials.”

    Not sure I’m following this, but perhaps what’s meant is the Stefan–Boltzmann constant:
    “The Stefan–Boltzmann constant (also Stefan’s constant), a physical constant denoted by the Greek letter σ, is the constant of proportionality in the Stefan–Boltzmann law: the total energy radiated per unit surface area of a black body in unit time is proportional to the fourth power of the thermodynamic temperature. ”
    http://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_constant

  52. Bart R | November 6, 2011 at 3:03 am |

    “I’m not only interested in energy changes.”

    Then, you are not interested in whether rising CO2 levels have affected the climate? Because, to show that, you have to show that more energy is being retained.

    • *shrug*

      Strictly speaking, every physical change is an energy change. I’m not particularly interested in expressing these changes in units of Watts or Joules or degrees of temperature.

      Indeed, those units are fairly difficult to work with at the level of policy, of evaluating the big picture questions, for anyone except an engineer, and engineers are typically so overwhelmed by such technical detail that they often miss the big picture.

      I remember an engineer trying to explain once why he missed that his calculations indicated the SUV he was responsible for the design of would have triple the average rollover casualty rate of vehicles in its class. When forced to acknowledge it, he said it never occurred to him that the unit of measure of his calculation was a proxy for “dead babies”.

      To show CO2 levels affect climate, I could use an engineering approach, sure.

      Others have.

      If you’re truly ignorant of the work done on absorptivity and energy balance calculations and truly doubt the validity of multidecadal comparisons, nothing I say now will dislodge that entrenched refusal to see.

      As to questions of policy, I’m satisfied CO2 levels are rising (aren’t you?), temperatures are rising (aren’t you?), the effects on weather of higher temperatures are happening (aren’t you?), and the complete logical chain from prime mover to effect has been demonstrated by reliable evidence (aren’t you?), that contrary claims of benefit are overblown and unreliable and not really pertinent (aren’t you?), and the Uncertainty is both small enough and asymmetrical in the direction of harms enough (aren’t you?), by enough to state current American policy is just plain in error (aren’t you?).

      Knowing is fine. Doing something is what matters.

  53. “If you’re truly ignorant of the work done on absorbtivity and energy balance calculations…”

    Is there another reason I would have, you know, asked the question?

    You seem to think you have answered my question, and my continued intransigence is evidence I came with an agenda. But, all you’ve done is wave your hands around and say, “oh, you know, it kinda’ just works out, somehow.”

    “…nothing I say now will dislodge that entrenched refusal to see.”

    See what? Probably not. You’ve given me nothing to see so far.

    “I’m satisfied CO2 levels are rising (aren’t you?)”

    According to particular measurements, yes. “Why” is another question entirely. I do not believe there is enough evidence right now to pin the blame on humans beyond a reasonable doubt. It all depends on the residence time, and I have not yet seen evidence that anyone has a very good handle on this. Purported incriminating evidence, e.g., isotope ratio analyses, have been discredited.

    “…temperatures are rising (aren’t you?)”

    Not in any particularly alarming or unprecedented way. Plus, the meaning of the global mean temperature metric is precisely what we have been discussing.

    “…the complete logical chain from prime mover to effect has been demonstrated by reliable evidence”

    Not so reliable. And, the argument is essentially ad ignorantiam. “We don’t know any other reason it could be therefore it has to be this” which only works in cheap detective novels. It is easy to construct a narrative which fits a selected set of data points, yet has no relationship to reality. Richard Feynman’s famous quote is worth reiterating: “The first principle is that you must not fool yourself, and you are the easiest person to fool.”

    “…that contrary claims of benefit are overblown and unreliable and not really pertinent (aren’t you?)”

    In a word, no. Some may be. Others are compelling.

    “Uncertainty is both small enough and asymmetrical in the direction of harms enough (aren’t you?)”

    Hardly. Everything we know about history says higher temperatures are good. Today, none of the claims of impending disaster are showing any signs of being fulfilled. For example, hurricane and tornado activity are down. Sea levels only rise if adjusted for isostatic rebound. But, islands are not inundated by accounting entries. Real sea levels, in fact, are not rising appreciably, and may even be declining.

    Doing something is fine, if it creates no harm. Demolishing the global economy falls under the heading of “harm”. To justify it, you have to show that the cure is better than the disease. To date, there is no indication that is the case.

    • It all depends on the residence time, and I have not yet seen evidence that anyone has a very good handle on this.

      It is not the residence time, it is the adjustment time that is critical. The only way that the relatively inert CO2 can leave the carbon cycle is by finding a deep sequestering site through a random walk. Since this is a diffusive process, the mean time actually diverges, which is why an adjustment time is only hinted at. It could be hundreds or thousands of years because of the long tail of a diffusion process.

      The following page is the only place you will find a derivation of the adjustment time. It looks like the guy you feel is utterly clueless about statistical mechanics will have to school you on this matter:
      http://theoilconundrum.blogspot.com/2011/09/derivation-of-maxent-diffusion-applied.html
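      For readers who want to see what such a diffusive fat tail looks like without wading through the linked derivation, here is a minimal illustrative sketch (1-D diffusion toward an absorbing sink, arbitrary units; it is not the calculation in the link):

      # Illustrative only: survival fraction of material diffusing toward an absorbing
      # sink a distance x0 away. For 1-D diffusion the survival probability is
      # erf(x0 / sqrt(4*D*t)), which falls off like 1/sqrt(t) at long times instead
      # of decaying exponentially (the fat tail being discussed).
      import numpy as np
      from scipy.special import erf

      D, x0 = 1.0, 1.0                  # arbitrary illustrative units
      t = np.logspace(0, 4, 5)          # 1 ... 10,000 time units
      survival = erf(x0 / np.sqrt(4.0 * D * t))
      print(np.round(survival, 4))      # drops by about sqrt(10) per decade of t in the tail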

      • We’ve argued that one before on another thread, WH. It all depends on the distribution chosen, and there is no empirical evidence supporting yours, if I remember correctly.

        “…will have to school you on this matter…”

        Is this really necessary? Or, helpful? Does it stimulate your hypothalamus? Is there a reason?

      • It all depends on the distribution chosen, and there is no empirical evidence supporting yours, if I remember correctly.

        I presented the solution to the Fokker-Planck master equation, so no distribution is involved. One who has some knowledge about statistical mechanics knows that this dies off as 1/sqrt(time). The empirical evidence is in the link I just gave you.

        If you need more evidence, take a look at the fit from the textbook “Introduction to Organic Geochemistry”. Every chart of the CO2 impulse response I have encountered shows excellent agreement with this very characteristic fat-tailed decline.
        http://1.bp.blogspot.com/-ygMmpK0ZK78/TrS7f0wvXsI/AAAAAAAAAlM/q200Zy40Ky0/s1600/IntroductionToOrganicGeochemistry.gif
        Here is the explanation of the adjustment time:
        http://img269.imageshack.us/img269/6417/tracegases.gif

        I love presenting these kinds of models because they are based on the kind of conventional statistical physics that a Huffman, Salby, Postma, or one of the sky-dragons would find offensive (because it is correct). It is great tweaking the skeptical views with some solid physics.

        You know, WH, you apparently have a bad habit of taking simple models based on particular simplified assumptions, which you learned perhaps in introductory coursework, and accepting them as universally valid and applicable.

        Firstly, this is not the solution of the F-P equation, it is a solution, under very constrained assumptions. It has to be, of course, because no general closed form solution to the F-P equation exists. There is an old observation/joke/wry commentary about reading scientific literature that goes, when the authors say the equation is “of great practical and theoretical importance,” it means, “it’s the only equation I have been able to solve.”

        The author himself says his diffusion parameter is “not very well known,” which he then attempts to compensate using a “maximum entropy” approach when there is no particular reason given to support the notion that the particular measure is maximized in nature. The proffered probability distribution for “x” is an arbitrary, time invariant exponential density. There is no empirical evidence whatsoever to be found here.

        em·pir·i·cal
           [em-pir-i-kuhl]
        adjective
        1. derived from or guided by experience or experiment.
        2. depending upon experience or observation alone, without using scientific method or theory, especially as in medicine.

        Show me something, anything, which demonstrates experimentally that the result is valid, and not just mathematical conjuring, and I assure you I will pay it heed.

      • And, if you can show such evidence, it must scale. The situation in which it is observed must, in some non-trivial way, reflect the global conditions of active biota, weathering of exposed minerals, the temperature dependent sink of the oceans, etc… You cannot just shrug these off and assume that what holds in a small lab under particular conditions extrapolates to the complex arena of the entire Earth’s CO2 exchange system. Science does not take shortcuts, and many a promising theory has foundered on the shoals of scalability.

      • It’s a kernel solution to the F-P and you can use it to generate impulse response solutions to various forcing functions.
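        For concreteness, one standard form of such a kernel and its use (a sketch of the general idea, not necessarily the exact form in the linked derivation): the diffusion equation

        $$\frac{\partial p}{\partial t} = D\,\frac{\partial^2 p}{\partial x^2}$$

        has the kernel

        $$p(x,t) = \frac{1}{\sqrt{4\pi D t}}\,\exp\!\left(-\frac{x^2}{4Dt}\right),$$

        and the response to an emissions history $f(t)$ follows by convolution,

        $$c(t) = \int_0^{t} f(\tau)\, g(t-\tau)\,\mathrm{d}\tau,$$

        where the impulse response $g$ built from the kernel decays like $t^{-1/2}$ at long times rather than exponentially.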

        Show me something, anything, which demonstrates experimentally that the result is valid, and not just mathematical conjuring, and I assure you I will pay it heed.

        You got nothing and it looks like the keys to the kingdom belong to me. Some of the stuff I have come up with in terms of dispersion analysis is incredibly useful. The book I wrote, available at http://TheOilConundrum.com, is loaded with this kind of analysis. I doubt you will look at it because you do not appear to be that intellectually curious.

        If on the other hand, you actually gave me something to look at rather than just spouting off, I would probably look at it and then rip it off if it was any good.

        And, if you can show such evidence, it must scale. The situation in which it is observed must, in some non-trivial way, reflect the global conditions of active biota, weathering of exposed minerals, the temperature dependent sink of the oceans, etc…

        Excellent schooling. Referring to the book again, most of the dispersion/diffusion analysis I do covers a dynamic range of several orders of magnitude. The transport models for disordered semiconductors often span 8 orders both in time and in measured current.

        I do realize that in climate science we don’t have these kinds of ranges to deal with, and we are stuck with whatever data we have. Can’t set up an electrical circuit under controlled conditions and substitute that for a climate model.

        The other issue is multiple factor scaling. Why MaxEnt works is not perfectly understood, but everyone realizes that the more disorder that exists via “active biota, weathering of exposed minerals, the temperature dependent sink of the oceans, etc”, the more likely this will affect the main factors we are interested in. That means the particular measure we are interested in gets smeared by those factors as well.

      • “I doubt you will look at it because you do not appear to be that intellectually curious.”

        More diddling of your ego. This kind of thing really is unseemly in public. Not to mention tiresome. I was propagating probability distributions using the F-P equation for complex, time varying systems, for the purpose of formulating robust nonlinear estimation filters, while you were still in grade school.

        You have a solution to a restricted, stationary time model using ad hoc assumptions which appear to work at limited scales in specific circumstances, and you think I should accept your extrapolation to the whole of the Earth because, well, because you think the math is kinda’ cool. Sorry, Junior. You’ve got a lot more work to do.

      • I was propagating probability distributions using the F-P equation for complex, time varying systems, for the purpose of formulating robust nonlinear estimation filters, while you were still in grade school.

        Yeah, sure, I bet.
        Lots of big talk and nothing to show for it; how humiliating for you, Gramps.

        You have a solution to a restricted, stationary time model using ad hoc assumptions which appear to work at limited scales in specific circumstances, and you think I should accept your extrapolation to the whole of the Earth because, well, because you think the math is kinda’ cool. Sorry, Junior. You’ve got a lot more work to do.

        Kind of jealous, eh? What can I say, I’m on a roll.

      • “What can I say, I’m on a roll.”

        Apparently, you’re on something.

    • Fine summary. The models reliably project CAGW, but that’s explicitly what they were told to do. The real world is not so complaisant.

    • Bartemis

      Fair enough. We have a framework for a dialogue.

      Is there another reason I would have, you know, asked the question?

      No one familiar with the academic tradition, and you sir appear to bear all the hallmarks of a man of advanced western education, can think of only one reason to ask questions.

      Perhaps it is best if we divide the first chapter of our dialogue into three smaller chapters:

      1. Absorptivity of GHGs and the Tyndall Effect
      2. Energy balance calculations
      3. Radiative transfer

      To establish where we need to go, it is best to know where we are. I’ve expressed disbelief that you are ignorant of these topics, so I’ll express that disbelief more formally, Bartemis:

      What is your current state of knowledge, understanding, and opinion of the above topics 1-3 of Absorptivity and GHGs, Energy balance calculations, and Radiative transfer?

      Without a place to start, this will get very longwinded indeed.

    • Bartemis

      “I’m satisfied CO2 levels are rising (aren’t you?)”

      According to particular measurements, yes.

      Good enough. Well enough?

      Do you doubt the particular measurements or find them unsatisfying?

      Which of these measurements, and in which ways, please, if so?

      “Why” is another question entirely.

      “Why” might be a question, for some purposes. For my purposes, the treatment of CO2 rise as a perturbation of a chaotic system, “Why” is a very distant tertiary or quaternary or lower order question compared to, “By too much?” or, “By how much does the perturbation increase Risk?” or “Can anthropogenic means reduce the perturbation?” or the like and related questions. I’m not sure “Why” is the right question, or even useful. Are you?

      I do not believe there is enough evidence right now to pin the blame on humans beyond a reasonable doubt. It all depends on the residence time, and I have not yet seen evidence that anyone has a very good handle on this.

      Reasonable doubt is a term of the courtroom. Indeed, in civil law, the reasonable doubt threshold has been exceeded several times in actual court trials up to the level of the US Supreme Court and the UK’s Court of Appeals, and the case for human influence on CO2 upheld.

      If you mean to a sufficient sigma level criterion for acceptance, that may be a different matter. By all means, please feel free to express, using the mathematical terms of your choosing, what exactly you mean, unambiguously.

      I promise to ask for clarification if I find your math too advanced for me.

      Purported incriminating evidence, e.g., isotope ratio analyses, have been discredited.

      I’ve seen a lot of purported discrediting of human contribution to CO2 levels; although largely for me a moot question, given the perturbation argument above, I’m interested enough in trivia to have looked closely at many of these cases, and found them unsatisfying.

      Please, as one skeptic named Bart to another, could you set out your case for refutation, using the same mathematical standard you expect of me, so I will both know what it looks like when one isn’t handwaving, and so I can judge your claims on their merits, not on say-so?

    • Bartemis

      “…the complete logical chain from prime mover to effect has been demonstrated by reliable evidence”

      Not so reliable.

      I don’t use evidence in my chain of reasoning below 95% CI, and always with at least two independent foundations. What’s your standard of reliability?

      And, the argument is essentially ad ignorantiam. “We don’t know any other reason it could be therefore it has to be this” which only works in cheap detective novels.

      Indeed, not.

      1. Occam’s Razor is not Argumentum ad Ignorantiam;
      2. We know plenty of alternate hypotheses (other reasons); some of them mitigate, some of them aggravate, but the case I make in my chain of reasoning from prime mover to ultimate cause (well, to cycle of interdependent causes) relies on the strongest of the hypotheses at each juncture, and where there is substantial question, I’ve always acknowledged it (for instance, until BEST, I credited global warming as no higher than 10:1 likely, despite all the evidence and argument available);
      3. How did you conclude from anything I’ve ever said, “well, we can’t imagine it as anything else, so it must be the first idea that popped into Hansen’s head?”

      It is easy to construct a narrative which fits a selected set of data points, yet has no relationship to reality. Richard Feynman’s famous quote is worth reiterating: “The first principle is that you must not fool yourself, and you are the easiest person to fool.”

      I treat Feynman as I treat McLuhan: most people have no idea what Feynman meant, and the rest mostly get him wrong.

      Of any investigator into climate, the one I am most skeptical of is myself. (Well, possibly Girma, more than me).

      Since me interrogating my own doubts would take too long, let’s investigate your doubts instead:

      Where would you start, and what steps would you take, to construct a hypothesis from human activity to global climate for the purposes of deciding what changes, if any, to human activity would be beneficial?

    • Bartemis

      “…that contrary claims of benefit are overblown and unreliable and not really pertinent (aren’t you?)”

      In a word, no. Some may be. Others are compelling.

      At the risk of being potkettled, isn’t that a bit vague?

      What claims do you find compelling, and why?

      (For the claims you find overblown, it may help me to know which, and why, too.)

    • Bartemis

      “Uncertainty is both small enough and asymmetrical in the direction of harms enough (aren’t you?)”

      Hardly. Everything we know about history says higher temperatures are good.

      Sorry, Other Bart; I’ve wintered in Minneapolis and enough other places where trucks fall through the ice and kill the driver, the passenger, the tailgaters and sometimes even the dog when higher temperatures are at play.

      I’ve seen the studies that claim more damaging hurricanes when the oceans are warmer.

      I’ve read the statistics comparing heat-related deaths to cold-related deaths across the world, and know that heat kills two orders of magnitude more often than cold.

      I just don’t believe your assertion of history. It brings to mind a quote of Feynman…

      Today, none of the claims of impending disaster are showing any signs of being fulfilled.

      Orly?! (Not that I’d be surprised, I’m more of the mind that we’re looking at half-millennial spans for the real fun to start, and we’re only a bit more than a quarter millennium into the Industrial Age.)

      For example, hurricane and tornado activity are down.

      Could you give me a timescale and a cite for that claim? It contradicts the ones I’ve seen.

      Sea levels only rise if adjusted for isostatic rebound. But, island are not inundated by accounting entries. Real sea levels, in fact, are not rising appreciably, and may even be declining.

      Huh. That’s a pretty bold statement, as it also appears to contradict claims I’ve seen elsewhere. Could you provide cites?

      And for two limited and questionable examples, I have to ask what do two examples prove, even if they’re true, and even if they make sense to look at on timescale?

      If there’s some hurricane step-function, and say a half-century lag between acting to mitigate and avoid GHG level rise, why do we want to risk a half century of elevated cyclone activity again?

      If there’s a sea-level step-function, why do we want to submerge and then reclaim major port cities for half a century?

      Even the complexity of isostatic rebound and the undecidability of sea level tend to argue for asymmetric Uncertainty, and greater associated Risk.

    • Bartemis

      Doing something is fine, if it creates no harm. Demolishing the global economy falls under the heading of “harm”. To justify it, you have to show that the cure is better than the disease. To date, there is no indication that is the case.

      Demolishing the global economy?

      Wow.

      So, worse than now?

      Worse than the Soviet Union?

      Worse than WWII and WWI?

      Worse than the Great Depression?

      I gotta ask, what are you claiming is the something I want done?

      Sure, those Malthusian crackpots who want to see the world population dropped to 300,000,000 are nuts, and that sort of course of action would demolish the world economy.

      Because what I’m proposing, as has been effectively argued by Dr. Ross McKitrick in his PhD thesis, would only strengthen the world economy.

      So, that’d be an indication for the case.

      Ask him about that sometime.

  54. Bartemis

    “…temperatures are rising (aren’t you?)”

    Not in any particularly alarming or unprecedented way. Plus, the meaning of the global mean temperature metric is precisely what we have been discussing.

    Could you clarify what it would take to alarm you, and how long before one might reach the point you are alarmed one would need to begin to act to minimize the economic cost of mitigation, avoidance and adaptation?

    Also, as to precedent, I keep hearing people speak of precedent, and then offer into evidence hearsay and speculation, narrow regional histories or ambiguously interpreted paleologies.

    Is this the sort of precedent you speak of, or do you mean precedents of the Earth as it was tens of millions of years ago?

    Because, how is this not a double standard, handwaved to the hilt and packaged up in a black box of untraceable mythicism?

    The best of the reconstructions appear to indicate nothing like a precedent for the current CO2 or temperature rises in the past 800,000 years, and the best extrapolations of the sub-million-year record imply around 20 million years since the last era with comparable CO2 levels, and only sparse periods of like temperature levels until that same 20-million-year-ago epoch.

    The meaning of the global mean temperature metric may be what you’re still discussing.

    Me, I’ve moved on to the meaning of the change in the global mean temperature metric.

    You’ve made a lot of claims that you say invalidate this usage, to do with heat capacity and so forth. I’d like to hear more details of those claims.

    Specifically, “..treating average temperature this way was not kosher, so I started thinking about it and realized the guy had a point. I figured, well, it could be used perhaps to bound the energy, but search as I might, I could not find that anyone had addressed this. And, I started thinking about the extremely wide variations in heat capacities of various regions of the Earth, and became very concerned that the error bars would, in fact, be huge.”

    Since I avoid WUWT like the plague (and for very similar reasons), could you express in some units, or by some analogy, the degree of your “very concerned” level and your reasoning for this concern? Is it the concern of a phobic, or of a battle-hardened veteran with a steely eye?

    Also, too, some measure of the hugeness of the error bars? How huge, and specifically why?

    See, the asymmetry of this Uncertainty only adds to the Risk of anthropogenic change, since any change large enough to produce huge error bars, even if it runs opposite to the perturbation in question, itself represents a large perturbation. So, I’d want to know more about that.
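
    To make concrete what the quoted passage is appealing to, here is a minimal back-of-the-envelope sketch in Python (the heat capacities and temperature changes below are hypothetical, chosen only for illustration):

    # Two equal-area regions with very different effective heat capacities
    # per unit area (arbitrary units), e.g. well-mixed ocean vs. dry land.
    C = [20.0, 1.0]

    def energy_change(dT):
        """Total energy change implied by per-region temperature changes dT."""
        return sum(c * dt for c, dt in zip(C, dT))

    # Both scenarios have the same area-mean warming of +0.5, yet very
    # different energy changes, depending on where the warming occurred.
    print(energy_change([1.0, 0.0]))   # 20.0 (all warming over the high-capacity region)
    print(energy_change([0.0, 1.0]))   #  1.0 (all warming over the low-capacity region)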

    • Didn’t see this stuff all at the bottom, Bart R. In the words of Roseanne Roseannadanna, you ask a lot of questions. I’m sorry that I do not have the time to respond to all of them, nor the desire, since they would only spawn more and more, and a blog thread is just not the appropriate forum.

      My core belief is simply that complex systems often behave contrary to intuition, and that the standard of proof which would be required to undertake extraordinarily painful measures has not been met.

      I do not believe the establishment climate scientists are yet qualified to render judgment. I am seeing very poor analysis methodologies being used, e.g., to determine feedback from clouds. I am seeing a stark unfamiliarity with basic feedback concepts in general. I am seeing confirmation bias and cherry picking, if not outright deception, in many places. I am seeing unprofessional behavior, calumny and innuendo against dissenters, secrecy and disdain of scientific openness, and barely disguised propaganda from advocacy organizations cited copiously in what are supposed to be scientific documents.

      I don’t even know which data to take seriously, based on the never-ending series of “adjustments” which always seem to “adjust” in a particular direction. I am a rocket scientist, but it hardly takes one to see that the fix is in, and you don’t have to be a weatherman to know which way the official wind is blowing. But, of that data, there is this in particular: a complete and utter stall in upper ocean heat content since the early part of the past decade, coincidentally or not at just the time that reliable measurements started to become available. Clearly, Nature is refusing to adhere to the narrative.

      • Bartemis

        In other words, you’re dismissing all views that disagree with your own with a wave of the hand.

        You ask questions that inherently require lengthy exposition, or, when brief answers are offered, demand yet more exposition, and then excuse yourself on the grounds that the matter is too long. That’s an obvious foul, one belonging in the disinformation thread.

        “complex systems often behave contrary to intuition”

        Oh really? Then why do you advocate a position that tacitly calls for increasing perturbation of complex systems whose short spans of reliability we depend on?

        As for propaganda, one can scarcely think of a more propagandistic term than “establishment climate scientists”, a clear ‘books bad, burning good’ catchphrase.

        You are seeing an awful lot of things, some of them even real. It’s of use to any field to have constructive criticism and to reveal its flaws.

        That’d be fair game. Well done for that, as far as it goes.

        But I see no hint of evenhandedness, no trace of acknowledging your own errors or accepting ideas that would open you to disconfirmation of your biases.

        Let’s look at your link: around 1960 and again around 1980, decadal pauses began in the graph that were larger than the “complete and utter stall” you somehow see in a clearly rising period. The 1960-to-present trend? Rapidly rising.

        How can a rocket scientist be so bad at reading graphs?

        It ain’t rocket science.

  55. “Let’s look at your link: around 1960 and again around 1980, decadal pauses began in the graph that were larger than the “complete and utter stall” you somehow see in a clearly rising period. The 1960-to-present trend?”

    A) Previous pauses were well before the era of accurate ARGO measurements – we don’t even know if they were pauses. We don’t even know if the heat actually rose prior to ARGO.
    B) Signal to noise: there was nearly 25% less CO2 in the atmosphere in 1960. Yet, this natural stalling process still trumps its effect.

    You can rationalize it all you like, but the bottom line is, it isn’t rising, and you don’t know why.

    • Bartemis

      The bottom line: there’s no statistically valid argument that the current multidecadal rising trend can be definitively said to have paused, based on the most recent decadal trend alone.

      Bayesian analysis suggests the probability of a falling multidecadal trend, given the current decadal trend in ARGO, is very low: depending on how you choose your prior, perhaps as low as 10%.

      Applying the same analysis to the BEST land record, with far, far higher confidence, gets us to only a 5% chance of a falling trend; and since the influence of the sea on the land is so significant, we can again resort to Bayes to argue against your falling hypothesis.
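
      To show the shape of that Bayes step (and only the shape), here is a minimal sketch in Python; the prior and likelihoods are hypothetical placeholders, not values fitted to the ARGO or BEST records:

      def p_falling_given_flat_decade(prior_falling, p_flat_if_falling, p_flat_if_rising):
          """P(long-term trend is falling | the most recent decade looks flat)."""
          prior_rising = 1.0 - prior_falling
          num = p_flat_if_falling * prior_falling
          return num / (num + p_flat_if_rising * prior_rising)

      # A flat decade is only modestly more likely under a falling long-term
      # trend than under a noisy rising one, so the answer is driven largely
      # by the prior, as noted above (all numbers hypothetical).
      print(p_falling_given_flat_decade(0.1, 0.6, 0.5))   # ~0.12
      print(p_falling_given_flat_decade(0.3, 0.6, 0.5))   # ~0.34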

      You _might_ be right, but in the same situation as you are now, others have been wrong 24 times in a row.

      If we started with a $0.01 bet, and went double or nothing every year, by now I’d be owed $83,886.08.
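
      (For anyone checking that figure, it is just compound doubling: the first winning $0.01 bet, then 23 successful double-or-nothing rounds, i.e. 24 wins in a row. A quick Python sketch:)

      # 0.01 * 2**23 = 83,886.08
      stake = 0.01
      for _ in range(23):      # 23 doublings after the first win
          stake *= 2
      print(f"${stake:,.2f}")  # $83,886.08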

      Does that begin to express how bad your argument is?

      The bottom line: the line varies up and down all the time, but the multidecadal trend is consistently rising and correlates well with CO2 (especially when other measured influences are removed from the trend lines), which is what the AGW hypothesis predicts.

      Is that absolute proof?

      *shrug*

      I don’t need absolute proof.

      I need actuarial proof, because I deal in the real world, not the world of the lab.

      In actuarial terms, where ideas are measured in Risk, BEST shifts most of the major temperature-related metrics way over to the AGW end of the scale.

  56. “…the multidecadal trend is consistently rising and correlates well with CO2…”

    Define “well”. Both series have risen over the multidecadal period? Does flipping a coin twice and getting two heads mean you’re always going to get two heads?

    Or, are you actually claiming that the coincident rise is predictable and quantifiable based on fundamental principles (i.e., no fudge factors, no alternative likely interpretations, no aerosol forcing or hidden heat or positive feedbacks – more along the lines of f = ma)? If so, you are wrong.
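
    To make the coin-flip point concrete: two series that share nothing but an upward drift will “correlate well” by construction. A small synthetic sketch in Python (the drifts and noise levels are arbitrary, not fitted to any climate record):

    import random

    random.seed(0)
    n = 60  # sixty "years"

    # Two independent series: a linear drift plus noise, nothing more.
    x = [0.03 * t + random.gauss(0, 0.3) for t in range(n)]
    y = [0.02 * t + random.gauss(0, 0.3) for t in range(n)]

    def pearson(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        sa = sum((u - ma) ** 2 for u in a) ** 0.5
        sb = sum((v - mb) ** 2 for v in b) ** 0.5
        return cov / (sa * sb)

    # The two series are statistically independent, yet the shared drift
    # alone produces a substantial positive correlation.
    print(round(pearson(x, y), 2))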
